On Fri, Jan 20, 2017 at 10:14:01AM -0700, Alex Williamson wrote:
> On Fri, 20 Jan 2017 20:27:18 +0800
> Peter Xu <pet...@redhat.com> wrote:
> 
> > On Fri, Jan 20, 2017 at 11:43:28AM +0800, Peter Xu wrote:
> > 
> > [...]
> > 
> > > > What I don't want to see is for this API bug to leak out into the rest
> > > > of the QEMU code such that intel_iommu code, or iommu code in general
> > > > subtly avoids it by artificially using a smaller range. VT-d hardware
> > > > has an actual physical address space of either 2^39 or 2^48 bits, so if
> > > > you want to make the iommu address space match the device we're trying
> > > > to emulate, that's perfectly fine. AIUI, AMD-Vi does actually have a
> > > > 64-bit address space on the IOMMU, so to handle that case I'd expect
> > > > the simplest solution would be to track the mapped iova high water
> > > > mark per container in vfio and truncate unmaps to that high water end
> > > > address. Realistically we're probably not going to see iovas at the end
> > > > of the 64-bit address space, but we can come up with some other
> > > > workaround in the vfio code or update the kernel API if we do. Thanks,
> > > 
> > > Agree that a high watermark can be a good solution for VT-d. I'll use
> > > that instead of 2^63-1.
> > 
> > Okay, when I replied I didn't notice this "watermark" may need more
> > than several (even tens of) LOCs. :(
> > 
> > Considering that I see no further usage of this watermark, I'm
> > thinking whether it's okay to directly use (1ULL << VTD_MGAW) here as
> > the watermark - it's simple, efficient and secure imho.
> 
> Avoiding the issue based on the virtual iommu hardware properties is a
> fine solution; my intention was only to discourage introduction of
> artificial limitations in the surrounding code to avoid this vfio
> issue. Thanks,
Yes. I have posted a new version of the vfio series. Looking forward
to your further comment (or ack, if with luck :) on v4. Thanks,

-- peterx
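
[Editor's note: a minimal C sketch of the clamping idea discussed above,
i.e. bounding IOMMU unmap ranges by the emulated VT-d address width
(1ULL << VTD_MGAW) rather than tracking a per-container mapped-iova high
water mark. VTD_MGAW is the guest address width used by QEMU's
intel_iommu code; the helper name and standalone form below are
hypothetical, not the actual QEMU/vfio implementation.]

#include <stdint.h>

#define VTD_MGAW 39   /* assumed emulated guest address width, in bits */

/*
 * Hypothetical helper: clamp the inclusive end address of an unmap
 * range to the last iova the emulated VT-d can ever map, so the size
 * handed to the vfio DMA unmap ioctl never needs to express 2^64
 * (which its 64-bit size field cannot represent).
 */
static uint64_t vtd_clamp_unmap_end(uint64_t end)
{
    const uint64_t limit = 1ULL << VTD_MGAW;  /* first iova beyond VT-d's range */

    return end < limit ? end : limit - 1;     /* truncate to 2^MGAW - 1 */
}

Because every iova the guest can program through the emulated VT-d is
already below 2^MGAW, clamping to this hardware property avoids both the
vfio API corner case and the extra bookkeeping a per-container high water
mark would require.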