On 4/29/25 09:03, ~percival_foss wrote:
From: Percival Foss <f...@percivaleng.com>

The bug being resolved is that the current code in mmu_lookup() assumes
a valid 64-bit address space. If a guest has a 32-bit address space, a
page translation that crosses beyond the last page of that space will
overflow out of the allocated guest virtual memory in the QEMU process
and crash it. In this case the first page is the last page of the
32-bit address space (for example 0xFFFFF000 with 4K pages) and the
second page overflows to a page beyond the 32-bit address space
(0x100000000 in the same example). An invalid translation for the
second page is then added to the CPU's TLB. Though the translation is
for page address 0x100000000, checks elsewhere in the codebase compare
only the low 32 bits and will match this translation. Part of the
stored translation is the effective address; another part is the addend
used to offset into the QEMU process's virtual memory. The addend
incorporates the 0x100000000 and points into likely invalid host
virtual address space.
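
To make the failure mode concrete, here is a small standalone sketch of
the page arithmetic (illustrative values only, not the actual
mmu_lookup() code):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* Illustrative values: 4K pages, with an access that starts on
         * the last page of a 32-bit address space and crosses into the
         * next page. */
        uint64_t addr = 0xFFFFF000ULL;          /* last valid 32-bit page */
        uint64_t size = 0x2000;                 /* crosses the page boundary */
        uint64_t page_mask = ~(uint64_t)0xFFF;  /* 4K pages */

        /* The second page is computed past the end of the 32-bit space. */
        uint64_t page2 = (addr + size - 1) & page_mask;
        printf("second page: 0x%" PRIx64 "\n", page2);  /* 0x100000000 */

        /* Masking with a 32-bit address-space mask wraps it back to 0,
         * which is what the patch below does for the second page. */
        printf("wrapped:     0x%" PRIx64 "\n", page2 & 0xFFFFFFFFULL);  /* 0x0 */
        return 0;
    }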

The fix in the diff checks whether the target is 32-bit and wraps the
second page address back to the beginning of the address space.

Signed-off-by: Percival Foss <f...@percivaleng.com>
---
  accel/tcg/cputlb.c | 7 +++++++
  1 file changed, 7 insertions(+)

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index fb22048876..457b3f8ec7 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -1767,6 +1767,13 @@ static bool mmu_lookup(CPUState *cpu, vaddr addr, MemOpIdx oi,
         l->page[1].size = l->page[0].size - size0;
         l->page[0].size = size0;
+        /* Check for address-space wrap on page crossing for 32-bit targets. */
+#if TARGET_LONG_BITS == 32
+        if (l->page[1].addr >= (1ULL << TARGET_LONG_BITS)) {
+            l->page[1].addr %= (1ULL << TARGET_LONG_BITS);
+        }
+#endif

I agree something needs doing, but this isn't it.

This needs some sort of per-mmu mask, set when the cpu changes modes. For instance, you test ppc32, but the same thing can happen with ppc64 when MSR[SF] is clear.

There are a fair number of other targets which can see the same issue, some more complicated than this. For instance, s390x in 24-bit mode or RISC-V with the pointer masking extension.

This needs some sort of extension to CPUTLBDesc, and a new variation on tlb_flush_by_mmuidx, so that the target can flush the tlb and update the mask whenever the cpu changes modes.
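
As a rough sketch of the shape that might take (stand-in types and a
hypothetical function name, not actual QEMU API):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Stand-in declarations, purely illustrative. */
    typedef uint64_t vaddr;

    typedef struct CPUTLBDesc {
        /* ... existing per-mmuidx state elided ... */
        vaddr addr_mask;    /* hypothetical new field: valid-address mask */
    } CPUTLBDesc;

    /* Hypothetical companion to tlb_flush_by_mmuidx(): flush the tlb
     * state for an mmu index and install the new mask in one step, to
     * be called by target code whenever the cpu changes addressing
     * modes. */
    static void tlb_set_addr_mask_by_mmuidx(CPUTLBDesc *desc, vaddr mask)
    {
        /* real code would flush the selected mmu indexes here */
        desc->addr_mask = mask;
    }

    int main(void)
    {
        CPUTLBDesc desc;

        /* e.g. ppc64 with MSR[SF] clear: 32-bit effective addresses */
        tlb_set_addr_mask_by_mmuidx(&desc, 0xFFFFFFFFULL);

        /* mmu_lookup() would then apply the mask to each page address. */
        vaddr page2 = 0x100000000ULL & desc.addr_mask;
        printf("page2 = 0x%" PRIx64 "\n", page2);  /* 0x0 */
        return 0;
    }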


r~
