On 6/24/22 10:16, Leandro Lupori wrote:
Check if each page dir/table base address is properly aligned and
log a guest error if not, as real hardware behaves incorrectly in
this case.
Signed-off-by: Leandro Lupori <leandro.lup...@eldorado.org.br>
---
target/ppc/mmu-radix64.c | 15 +++++++++++++++
1 file changed, 15 insertions(+)
diff --git a/target/ppc/mmu-radix64.c b/target/ppc/mmu-radix64.c
index 339cf5b4d8..1e7d932893 100644
--- a/target/ppc/mmu-radix64.c
+++ b/target/ppc/mmu-radix64.c
@@ -280,6 +280,14 @@ static int ppc_radix64_next_level(AddressSpace *as, vaddr eaddr,
     *psize -= *nls;
     if (!(pde & R_PTE_LEAF)) { /* Prepare for next iteration */
         *nls = pde & R_PDE_NLS;
+
+        if ((pde & R_PDE_NLB) & MAKE_64BIT_MASK(0, *nls + 3)) {
+            qemu_log_mask(LOG_GUEST_ERROR,
+                          "%s: misaligned page dir/table base: 0x%"VADDR_PRIx
+                          " page dir size: 0x"TARGET_FMT_lx"\n",
+                          __func__, (pde & R_PDE_NLB), BIT(*nls + 3));
+        }
+
         index = eaddr >> (*psize - *nls);    /* Shift */
         index &= ((1UL << *nls) - 1);        /* Mask */
         *pte_addr = (pde & R_PDE_NLB) + (index * sizeof(pde));
In your response to my question on v1, you said that it appears that the CPU ignores
the bits below *nls + 3. This isn't ignoring them -- it's including [nls+2 : nls] into
pte_addr.
It would be better to compute this as
index = ...
index &= ...
*pte_addr = ...
if (*pte_addr & 7) {
qemu_log(...);
}
r~