If the IOMMU translation granularity is smaller than the TARGET_PAGE size, there may be multiple IOMMU entries within the same target page. Pass the original address to the IOMMU to get the correct translation result.
Similar to the RISC-V PMP solution, TLB_INVALID_MASK is set when there are multiple entries in the same page, so the IOMMU is checked on every access.

Signed-off-by: Ethan Chen <etha...@andestech.com>
---
 accel/tcg/cputlb.c | 17 ++++++++++++++++-
 system/physmem.c   |  4 ++++
 2 files changed, 20 insertions(+), 1 deletion(-)

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index 117b516739..9c0db4d9e2 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -1062,8 +1062,23 @@ void tlb_set_page_full(CPUState *cpu, int mmu_idx,
 
     prot = full->prot;
     asidx = cpu_asidx_from_attrs(cpu, full->attrs);
-    section = address_space_translate_for_iotlb(cpu, asidx, paddr_page,
+    section = address_space_translate_for_iotlb(cpu, asidx, full->phys_addr,
                                                 &xlat, &sz, full->attrs, &prot);
+    /* Update page size */
+    full->lg_page_size = ctz64(sz);
+    if (full->lg_page_size > TARGET_PAGE_BITS) {
+        full->lg_page_size = TARGET_PAGE_BITS;
+    } else {
+        sz = TARGET_PAGE_SIZE;
+    }
+
+    is_ram = memory_region_is_ram(section->mr);
+    is_romd = memory_region_is_romd(section->mr);
+    /* If the translated mr is ram/rom, make xlat align the TARGET_PAGE */
+    if (is_ram || is_romd) {
+        xlat &= TARGET_PAGE_MASK;
+    }
+
     assert(sz >= TARGET_PAGE_SIZE);
 
     tlb_debug("vaddr=%016" VADDR_PRIx " paddr=0x" HWADDR_FMT_plx
diff --git a/system/physmem.c b/system/physmem.c
index b7847db1a2..0fd0326714 100644
--- a/system/physmem.c
+++ b/system/physmem.c
@@ -702,6 +702,10 @@ address_space_translate_for_iotlb(CPUState *cpu, int asidx, hwaddr orig_addr,
         iotlb = imrc->translate(iommu_mr, addr, IOMMU_NONE, iommu_idx);
         addr = ((iotlb.translated_addr & ~iotlb.addr_mask)
                 | (addr & iotlb.addr_mask));
+        /* Update size */
+        if (iotlb.addr_mask != -1 && *plen > iotlb.addr_mask + 1) {
+            *plen = iotlb.addr_mask + 1;
+        }
         /* Update the caller's prot bits to remove permissions the IOMMU
          * is giving us a failure response for. If we get down to no
          * permissions left at all we can give up now.
-- 
2.34.1
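
For illustration only, here is a minimal standalone sketch (not QEMU code) of the size handling this patch adds: the length returned by the IOMMU translation is clamped to the IOMMU granularity, and a page size below TARGET_PAGE_BITS forces a recheck on every access. TARGET_PAGE_BITS, the addr_mask convention, and the helper names clamp_len_to_iommu()/needs_recheck() are assumptions made for the example, and __builtin_ctzll() stands in for QEMU's ctz64().

/*
 * Standalone sketch of the size-clamping logic; compiles with a plain
 * C compiler.  Names and constants are illustrative assumptions.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define TARGET_PAGE_BITS 12
#define TARGET_PAGE_SIZE (1ULL << TARGET_PAGE_BITS)

/* physmem.c side: iotlb.addr_mask covers the low bits of the translated
 * region, so its size is addr_mask + 1; clamp the returned length to it. */
static uint64_t clamp_len_to_iommu(uint64_t plen, uint64_t addr_mask)
{
    if (addr_mask != (uint64_t)-1 && plen > addr_mask + 1) {
        plen = addr_mask + 1;
    }
    return plen;
}

/* cputlb.c side: derive lg_page_size from the clamped size, capped at
 * TARGET_PAGE_BITS.  A value below TARGET_PAGE_BITS means several IOMMU
 * entries share one target page, so the TLB entry must be rechecked on
 * every access (the TLB_INVALID_MASK case described above). */
static bool needs_recheck(uint64_t sz, unsigned *lg_page_size)
{
    *lg_page_size = (unsigned)__builtin_ctzll(sz);
    if (*lg_page_size >= TARGET_PAGE_BITS) {
        *lg_page_size = TARGET_PAGE_BITS;
        return false;
    }
    return true;
}

int main(void)
{
    unsigned lg;
    /* 1 KiB IOMMU granularity (addr_mask = 0x3ff) inside a 4 KiB page. */
    uint64_t sz = clamp_len_to_iommu(TARGET_PAGE_SIZE, 0x3ff);
    bool slow = needs_recheck(sz, &lg);
    printf("sz=%llu lg_page_size=%u recheck_every_access=%d\n",
           (unsigned long long)sz, lg, slow);
    return 0;
}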