Hi guys, I have more information, please see below.
> -----Original Message-----
> From: Lu Baolu [mailto:baolu...@linux.intel.com]
> Sent: Thursday, March 18, 2021 10:59 AM
> To: Alex Williamson <alex.william...@redhat.com>
> Cc: baolu...@linux.intel.com; Longpeng (Mike, Cloud Infrastructure Service
> Product Dept.) <longpe...@huawei.com>; dw...@infradead.org; j...@8bytes.org;
> w...@kernel.org; io...@lists.linux-foundation.org; LKML
> <linux-kernel@vger.kernel.org>; Gonglei (Arei) <arei.gong...@huawei.com>;
> chenjiashang <chenjiash...@huawei.com>
> Subject: Re: A problem of Intel IOMMU hardware ?
>
> Hi Alex,
>
> On 3/17/21 11:18 PM, Alex Williamson wrote:
> >>> {MAP, 0x0, 0xc0000000}, --------------------------------- (b)
> >>>       use GDB to pause at here, and then DMA read
> >>> IOVA=0,
> >> IOVA 0 seems to be a special one. Have you verified with other
> >> addresses than IOVA 0?
> > It is??? That would be a problem.
>
> No problem from hardware point of view as far as I can see. Just thought about
> software might handle it specially.
>

We simplified the reproducer; the following map/unmap sequence can also
reproduce the problem:

1. use 2M hugetlbfs to mmap 4G of memory

2. run the following loop (a rough userspace sketch using the VFIO API is
   appended at the end of this mail):

while (1) {
    DMA MAP (0, 0xa0000)    - - - - - - - - - - - - - - (a)
    DMA UNMAP (0, 0xa0000)  - - - - - - - - - - - - - - (b)
        Operation-1: dump the DMAR table
    DMA MAP (0, 0xc0000000) - - - - - - - - - - - - - - (c)
        Operation-2: use GDB to pause here, then DMA read IOVA=0;
                     sometimes the DMA succeeds (as expected), but
                     sometimes it fails (reports not-present)
        Operation-3: dump the DMAR table
        Operation-4 (when the DMA fails): please see below
    DMA UNMAP (0, 0xc0000000) - - - - - - - - - - - - - (d)
}

The DMAR table at Operation-1 is (only showing the entries for IOVA 0):

    PML4: 0x1a34fbb003
    PDPE: 0x1a34fbb003
     PDE: 0x1a34fbf003
     PTE: 0x0

And the table at Operation-3 is:

    PML4: 0x1a34fbb003
    PDPE: 0x1a34fbb003
     PDE: 0x15ec00883   <-- 2M superpage

So we can see that IOVA 0 is mapped, but the DMA read still faults:

dmar_fault: 131757 callbacks suppressed
DRHD: handling fault status reg 402
[DMA Read] Request device [86:05.6] fault addr 0 [fault reason 06] PTE Read access is not set
[DMA Read] Request device [86:05.6] fault addr 0 [fault reason 06] PTE Read access is not set
DRHD: handling fault status reg 600
DRHD: handling fault status reg 602
[DMA Read] Request device [86:05.6] fault addr 0 [fault reason 06] PTE Read access is not set
[DMA Read] Request device [86:05.6] fault addr 0 [fault reason 06] PTE Read access is not set
[DMA Read] Request device [86:05.6] fault addr 0 [fault reason 06] PTE Read access is not set

NOTE, now the magical thing happens (*Operation-4*): we write the PTE dumped
at Operation-1 from 0 to 0x3 (i.e. Read/Write allowed), then trigger the DMA
read again; this time it succeeds and returns the data at HPA 0 !!
(A sketch of this poke is also appended at the end.)

Why does modifying the *old* page table make any difference? As we have
discussed previously, the cache-flush part of the driver is correct: it calls
flush_iotlb after (b), and no flush is needed after (c).

But the result of the experiment shows that the old page table, or some stale
cached translation derived from it, is actually still in effect.

Any ideas?

> Best regards,
> baolu
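
P.S. For anyone who wants to try this, below is a minimal sketch of the
reproducer loop, using the VFIO type1 API. This is a sketch under assumptions,
not our exact tool: it presumes "container" is an open /dev/vfio/vfio fd that
already has the device's group attached and VFIO_TYPE1_IOMMU set (the setup
code is omitted), that enough 2M hugetlb pages are reserved, and the helper
names (dma_map/dma_unmap/reproduce) are ours. The GDB pause at (c) and the
device-side DMA read are done out-of-band, as described above.

#define _GNU_SOURCE
#include <linux/vfio.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/mman.h>

#define MEM_SIZE   (4UL << 30)     /* 4G backed by 2M huge pages */
#define SMALL_SIZE 0xa0000UL       /* (a)/(b) */
#define BIG_SIZE   0xc0000000UL    /* (c)/(d) */

static void dma_map(int container, void *vaddr, uint64_t iova, uint64_t size)
{
    struct vfio_iommu_type1_dma_map map = {
        .argsz = sizeof(map),
        .flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE,
        .vaddr = (uintptr_t)vaddr,
        .iova  = iova,
        .size  = size,
    };
    if (ioctl(container, VFIO_IOMMU_MAP_DMA, &map))
        perror("VFIO_IOMMU_MAP_DMA");
}

static void dma_unmap(int container, uint64_t iova, uint64_t size)
{
    struct vfio_iommu_type1_dma_unmap unmap = {
        .argsz = sizeof(unmap),
        .iova  = iova,
        .size  = size,
    };
    if (ioctl(container, VFIO_IOMMU_UNMAP_DMA, &unmap))
        perror("VFIO_IOMMU_UNMAP_DMA");
}

void reproduce(int container)
{
    /* 1. 4G mapping backed by 2M huge pages (a hugetlbfs file
     *    mapping works the same as MAP_HUGETLB here) */
    void *buf = mmap(NULL, MEM_SIZE, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
    if (buf == MAP_FAILED) {
        perror("mmap");
        return;
    }

    /* 2. the map/unmap loop; pause at (c) with GDB and issue a device
     *    DMA read of IOVA 0 to observe the intermittent fault */
    for (;;) {
        dma_map(container, buf, 0, SMALL_SIZE);   /* (a) */
        dma_unmap(container, 0, SMALL_SIZE);      /* (b) */
        dma_map(container, buf, 0, BIG_SIZE);     /* (c) */
        dma_unmap(container, 0, BIG_SIZE);        /* (d) */
    }
}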
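
And one possible way to do the Operation-4 write from userspace, via /dev/mem
(this is only an illustration of the poke, not necessarily how we did it; it
assumes CONFIG_STRICT_DEVMEM is disabled). The page-table page address is
derived from the PDE dumped at Operation-1: PDE 0x1a34fbf003 points to the 4K
table page at phys 0x1a34fbf000, whose entry 0 is the PTE covering IOVA 0;
writing 0x3 there sets the Read/Write bits with page frame 0, which is why the
DMA read then returns the data at HPA 0.

#define _FILE_OFFSET_BITS 64
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    const off_t pt_page = 0x1a34fbf000;  /* from PDE 0x1a34fbf003 above */
    int fd = open("/dev/mem", O_RDWR | O_SYNC);
    if (fd < 0) { perror("open /dev/mem"); return 1; }

    volatile uint64_t *pt = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                                 MAP_SHARED, fd, pt_page);
    if (pt == MAP_FAILED) { perror("mmap"); return 1; }

    printf("old PTE[0] = 0x%llx\n", (unsigned long long)pt[0]);
    pt[0] = 0x3;   /* Read|Write, frame 0: the stale walk now resolves */

    munmap((void *)pt, 4096);
    close(fd);
    return 0;
}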