+        /* fallthrough */
+    default:
+        tcg_out_mov(s, size == MO_64, l->datalo_reg, TCG_REG_A0);
+        break;
Here in tcg_out_qemu_ld_slow_path, the "size == MO_64" expression is standing in for the TCGType argument ("type") of tcg_out_mov.
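A more explicit spelling, if you want one, might be something like this (a sketch; TCG_TYPE_I32/TCG_TYPE_I64 are the two values the bool collapses to, and l->type should carry the same information if you'd rather use the label's recorded result type):

        /* fallthrough */
    default:
        tcg_out_mov(s, size == MO_64 ? TCG_TYPE_I64 : TCG_TYPE_I32,
                    l->datalo_reg, TCG_REG_A0);
        break;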
+    /* TLB Hit - translate address using addend. */
+    if (TCG_TARGET_REG_BITS > TARGET_LONG_BITS) {
+        tcg_out_ext32u(s, TCG_REG_TMP0, addrl);
+        addrl = TCG_REG_TMP0;
+    }
+    tcg_out_opc_add_d(s, TCG_REG_TMP0, TCG_REG_TMP2, addrl);
Note for future optimization: Unlike RISC-V, LoongArch has indexed addressing, and we
should make use of it to eliminate this final add ...
+    if (USE_GUEST_BASE) {
+        tcg_out_opc_add_d(s, base, TCG_GUEST_BASE_REG, addr_regl);
... as well as these adds (see the sketch after the comparisons below).
Compare tcg/ppc/ or tcg/sparc/, both of which always use indexed addressing (and indeed,
their reverse-endian memory ops do not have an offset form):
    tcg_out_ldst_rr(s, data, addr,
                    (guest_base ? TCG_GUEST_BASE_REG : TCG_REG_G0),
                    qemu_ld_opc[memop & (MO_BSWAP | MO_SSIZE)]);
(It is not useful to compare tcg/aarch64/, which cannot represent "zero" with indexed
addressing, and so has to swap between offset and indexed addressing, depending on the
size of the guest address space and guest_base == 0.)
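For LoongArch I imagine the ld path could look roughly like this (an untested sketch: it assumes the opcode generator also provides tcg_out_opc_ldx_* emitters for the LDX.{B,BU,H,HU,W,WU,D} instructions, taking base rj and index rk):

    /* rd = MEM[rj + rk]: the addend/guest_base folds into the load itself. */
    switch (opc & MO_SSIZE) {
    case MO_UB:
        tcg_out_opc_ldx_bu(s, rd, rj, rk);
        break;
    case MO_SB:
        tcg_out_opc_ldx_b(s, rd, rj, rk);
        break;
    case MO_UW:
        tcg_out_opc_ldx_hu(s, rd, rj, rk);
        break;
    case MO_SW:
        tcg_out_opc_ldx_h(s, rd, rj, rk);
        break;
    case MO_UL:
        if (type == TCG_TYPE_I64) {
            tcg_out_opc_ldx_wu(s, rd, rj, rk);
            break;
        }
        /* fallthrough */
    case MO_SL:
        tcg_out_opc_ldx_w(s, rd, rj, rk);
        break;
    case MO_Q:
        tcg_out_opc_ldx_d(s, rd, rj, rk);
        break;
    default:
        g_assert_not_reached();
    }

For the user-only case, pass TCG_REG_ZERO as the index when there is no guest_base, just as sparc passes TCG_REG_G0 above.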
Oops, I've just noticed that the !CONFIG_SOFTMMU case does not zero-extend the guest
address for TARGET_LONG_BITS == 32.
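Presumably the fix is the same ext32u dance as in the softmmu path quoted above, e.g. (a sketch, reusing the patch's base/addr_regl names):

    if (TCG_TARGET_REG_BITS > TARGET_LONG_BITS) {
        tcg_out_ext32u(s, base, addr_regl);
        addr_regl = base;
    }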
r~