On Mon, Jun 24, 2019 at 12:09:48PM +0200, Auger Eric wrote:
> Hi Peter,
>
> On 6/24/19 11:18 AM, Peter Xu wrote:
> > This is a replacement for Yan Zhao's patch:
> >
> > https://www.mail-archive.com/qemu-devel@nongnu.org/msg625340.html
> >
> > vtd_address_space_unmap() will do proper page mask alignment to make
> > sure each IOTLB message will have correct masks for notification
> > messages (2^N-1), but sometimes it can be expanded so much that it
> > exceeds the registered range.  That could lead to unexpected UNMAP of
> > already mapped regions in some other notifiers.
> >
> > Instead of doing mindless expansion of the start address and address
> > mask, we split the range into smaller ones and guarantee that each
> > small range will have correct masks (2^N-1), while at the same time
> > trying our best to generate as few IOTLB messages as possible.
> >
> > Reported-by: Yan Zhao <yan.y.z...@intel.com>
> > Signed-off-by: Peter Xu <pet...@redhat.com>
> > ---
> >  hw/i386/intel_iommu.c | 67 ++++++++++++++++++++++++++-----------------
> >  1 file changed, 41 insertions(+), 26 deletions(-)
> >
> > diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
> > index 719ce19ab3..de86f53b4e 100644
> > --- a/hw/i386/intel_iommu.c
> > +++ b/hw/i386/intel_iommu.c
> > @@ -3363,11 +3363,28 @@ VTDAddressSpace *vtd_find_add_as(IntelIOMMUState *s, PCIBus *bus, int devfn)
> >      return vtd_dev_as;
> >  }
> >
> > +static uint64_t get_naturally_aligned_size(uint64_t start,
> > +                                           uint64_t size, int gaw)
> > +{
> > +    uint64_t max_mask = 1ULL << gaw;
> > +    uint64_t alignment = start ? start & -start : max_mask;
> > +
> > +    alignment = MIN(alignment, max_mask);
> > +    size = MIN(size, max_mask);
> this does not prevent invalidating beyond gaw if start != 0, right?

Yes.  But at the start of vtd_address_space_unmap(), we have:

    if (end > VTD_ADDRESS_SIZE(s->aw_bits) - 1) {
        /*
         * Don't need to unmap regions that is bigger than the whole
         * VT-d supported address space size
         */
        end = VTD_ADDRESS_SIZE(s->aw_bits) - 1;
    }

So we don't need to worry about (start+size) exceeding GAW?  [1]

> > +
> > +    if (alignment <= size) {
> > +        /* Increase the alignment of start */
> I don't really get this comment

This comment comes from Paolo, but I'll try to explain - it means that
"alignment" will be used as an increment to the "start" variable, so
after adding it, "start" will be aligned to a larger mask size.  Better
comments welcomed... :)

> > +        return alignment;
> > +    } else {
> > +        /* Find the largest page mask from size */
> > +        return 1ULL << (63 - clz64(size));
> > +    }
> > +}
> > +
> >  /* Unmap the whole range in the notifier's scope. */
> >  static void vtd_address_space_unmap(VTDAddressSpace *as, IOMMUNotifier *n)
> >  {
> > -    IOMMUTLBEntry entry;
> > -    hwaddr size;
> > +    hwaddr size, remain;
> >      hwaddr start = n->start;
> >      hwaddr end = n->end;
> >      IntelIOMMUState *s = as->iommu_state;
> > @@ -3388,39 +3405,37 @@ static void vtd_address_space_unmap(VTDAddressSpace *as, IOMMUNotifier *n)
> >      }
> >
> >      assert(start <= end);
> > -    size = end - start;
> > +    size = remain = end - start + 1;
> >
> > -    if (ctpop64(size) != 1) {
> > -        /*
> > -         * This size cannot format a correct mask. Let's enlarge it to
> > -         * suite the minimum available mask.
> > -         */
> > -        int n = 64 - clz64(size);
> > -        if (n > s->aw_bits) {
> > -            /* should not happen, but in case it happens, limit it */
> > -            n = s->aw_bits;
> > -        }
> > -        size = 1ULL << n;
> > +    while (remain >= VTD_PAGE_SIZE) {
> Can't we stop as soon as entry.iova exceeds gaw as well?

As explained at [1], I think we've already checked it.

Thanks,

-- 
Peter Xu
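
For illustration, below is a self-contained sketch of the splitting
scheme discussed above.  The get_naturally_aligned_size() body mirrors
the hunk quoted earlier; the clz64() stand-in, the MIN() macro, the
4 KiB VTD_PAGE_SIZE value, the 39-bit gaw and the example range in
main() are assumptions made only for this demo, not part of the patch.

#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

#define MIN(a, b)      ((a) < (b) ? (a) : (b))
#define VTD_PAGE_SIZE  0x1000ULL   /* assumed 4 KiB, demo only */

/* Stand-in for QEMU's clz64(): count leading zeros of a non-zero value. */
static int clz64(uint64_t value)
{
    return __builtin_clzll(value);
}

/* Same logic as the hunk above: size of the largest naturally aligned
 * chunk that starts at 'start', fits within 'size' and within 2^gaw. */
static uint64_t get_naturally_aligned_size(uint64_t start,
                                           uint64_t size, int gaw)
{
    uint64_t max_mask = 1ULL << gaw;
    uint64_t alignment = start ? start & -start : max_mask;

    alignment = MIN(alignment, max_mask);
    size = MIN(size, max_mask);

    if (alignment <= size) {
        /* 'start' is the limiting factor: step by its natural alignment. */
        return alignment;
    } else {
        /* 'size' is the limiting factor: largest power of two not above it. */
        return 1ULL << (63 - clz64(size));
    }
}

int main(void)
{
    /* Example range [0x1000, 0x12000) with a 39-bit address width. */
    uint64_t start = 0x1000, remain = 0x11000;
    int gaw = 39;

    while (remain >= VTD_PAGE_SIZE) {
        uint64_t mask = get_naturally_aligned_size(start, remain, gaw);

        /* Each chunk gets a (2^N - 1) addr_mask, as the notifiers expect. */
        printf("UNMAP iova=0x%" PRIx64 " addr_mask=0x%" PRIx64 "\n",
               start, mask - 1);

        start += mask;
        remain -= mask;
    }

    return 0;
}

For this example the loop emits five messages (chunk sizes 0x1000,
0x2000, 0x4000, 0x8000, 0x2000), each starting on a boundary of its own
size, instead of rounding the whole range up to a single power-of-two
size (0x20000 here) as the removed code would have done, which is what
could spill outside the registered range.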