When running on 32-bit TCG backends a wide unaligned load ends up truncating data before returning to the guest. We specifically use uint64_t as the return type to avoid any premature truncation, so we should use the same type for the interim values.
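To illustrate (not part of the patch itself): a minimal standalone sketch, assuming a 32-bit host where tcg_target_ulong is a 32-bit host-word type, showing how storing the 64-bit load result in a host-word-sized interim variable silently drops the top half. The typedef and values here are hypothetical stand-ins, not the real TCG definitions:

    #include <inttypes.h>
    #include <stdio.h>

    /* Stand-in for tcg_target_ulong on a 32-bit TCG backend, where it
     * is only 32 bits wide (hypothetical typedef for illustration). */
    typedef uint32_t tcg_target_ulong_32;

    int main(void)
    {
        uint64_t full = 0x1122334455667788ULL; /* 64-bit value loaded from guest memory */

        /* Buggy path: the interim variable is host-word sized, so the
         * upper 32 bits are lost before the helper returns. */
        tcg_target_ulong_32 r_truncated = full;

        /* Fixed path: keep the interim value as wide as the uint64_t
         * return type, as the patch below does for r1 and r2. */
        uint64_t r_fixed = full;

        printf("truncated: 0x%08" PRIx32 "\n", r_truncated); /* 0x55667788 */
        printf("fixed:     0x%016" PRIx64 "\n", r_fixed);    /* 0x1122334455667788 */
        return 0;
    }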
Fixes: https://bugs.launchpad.net/qemu/+bug/1830872
Fixes: eed5664238e
Signed-off-by: Alex Bennée <alex.ben...@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <phi...@redhat.com>
Reviewed-by: Richard Henderson <richard.hender...@linaro.org>
Tested-by: Laszlo Ersek <ler...@redhat.com>
Tested-by: Igor Mammedov <imamm...@redhat.com>
---
 accel/tcg/cputlb.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index cdcc377102..b796ab1cbe 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -1303,7 +1303,7 @@ load_helper(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi,
         && unlikely((addr & ~TARGET_PAGE_MASK) + size - 1
                     >= TARGET_PAGE_SIZE)) {
         target_ulong addr1, addr2;
-        tcg_target_ulong r1, r2;
+        uint64_t r1, r2;
         unsigned shift;
     do_unaligned_access:
         addr1 = addr & ~(size - 1);
-- 
2.20.1