Hi, Zhenzhong,

On Tue, May 23, 2023 at 04:07:02PM +0800, Zhenzhong Duan wrote:
> Commit 63b88968f1 ("intel-iommu: rework the page walk logic") added
> logic to record mapped IOVA ranges so that we only send MAP or UNMAP
> notifications when necessary. But there are still a few corner cases
> of unnecessary UNMAPs.
> 
> One is the address space switch. When switching to the IOMMU address
> space, all the original mappings have already been dropped by the VFIO
> memory listener, so we don't need to unmap again in replay. The other
> is invalidation: we only need to unmap when there are recorded mapped
> IOVA ranges, presuming most OSes allocate IOVA ranges contiguously,
> e.g. on x86, Linux sets up mappings from 0xffffffff downwards.
> 
> Signed-off-by: Zhenzhong Duan <zhenzhong.d...@intel.com>
> ---
> Tested on x86 with a net card passed through or hotplugged to a KVM
> guest; ping/ssh pass.

Since this is a performance-related patch, do you have any numbers to
show the effect?

> 
>  hw/i386/intel_iommu.c | 31 ++++++++++++++-----------------
>  1 file changed, 14 insertions(+), 17 deletions(-)
> 
> diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
> index 94d52f4205d2..6afd6428aaaa 100644
> --- a/hw/i386/intel_iommu.c
> +++ b/hw/i386/intel_iommu.c
> @@ -3743,6 +3743,7 @@ static void vtd_address_space_unmap(VTDAddressSpace *as, IOMMUNotifier *n)
>      hwaddr start = n->start;
>      hwaddr end = n->end;
>      IntelIOMMUState *s = as->iommu_state;
> +    IOMMUTLBEvent event;
>      DMAMap map;
>  
>      /*
> @@ -3762,22 +3763,25 @@ static void vtd_address_space_unmap(VTDAddressSpace *as, IOMMUNotifier *n)
>      assert(start <= end);
>      size = remain = end - start + 1;
>  
> +    event.type = IOMMU_NOTIFIER_UNMAP;
> +    event.entry.target_as = &address_space_memory;
> +    event.entry.perm = IOMMU_NONE;
> +    /* This field is meaningless for unmap */
> +    event.entry.translated_addr = 0;
> +
>      while (remain >= VTD_PAGE_SIZE) {
> -        IOMMUTLBEvent event;
>          uint64_t mask = dma_aligned_pow2_mask(start, end, s->aw_bits);
>          uint64_t size = mask + 1;
>  
>          assert(size);
>  
> -        event.type = IOMMU_NOTIFIER_UNMAP;
> -        event.entry.iova = start;
> -        event.entry.addr_mask = mask;
> -        event.entry.target_as = &address_space_memory;
> -        event.entry.perm = IOMMU_NONE;
> -        /* This field is meaningless for unmap */
> -        event.entry.translated_addr = 0;
> -
> -        memory_region_notify_iommu_one(n, &event);
> +        map.iova = start;
> +        map.size = size;
> +        if (iova_tree_find(as->iova_tree, &map)) {
> +            event.entry.iova = start;
> +            event.entry.addr_mask = mask;
> +            memory_region_notify_iommu_one(n, &event);
> +        }

This one looks fine to me, but I'm not sure how much benefit we'll get
here either, as this path should be rare AFAIU.
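
For readers following along: per util/iova-tree.h, iova_tree_find()
returns any recorded DMAMap overlapping the searched range, and
DMAMap.size is inclusive (i.e. length - 1).  A minimal sketch of the
lookup semantics, with a made-up helper name:

    #include "qemu/iova-tree.h"

    /* True if any recorded mapping overlaps [start, start + len) */
    static bool range_is_mapped(IOVATree *tree, hwaddr start, hwaddr len)
    {
        const DMAMap lookup = {
            .iova = start,
            .size = len - 1,    /* DMAMap.size is inclusive */
        };

        return iova_tree_find(tree, &lookup) != NULL;
    }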

>  
>          start += size;
>          remain -= size;
> @@ -3826,13 +3830,6 @@ static void vtd_iommu_replay(IOMMUMemoryRegion *iommu_mr, IOMMUNotifier *n)
>      uint8_t bus_n = pci_bus_num(vtd_as->bus);
>      VTDContextEntry ce;
>  
> -    /*
> -     * The replay can be triggered by either a invalidation or a newly
> -     * created entry. No matter what, we release existing mappings
> -     * (it means flushing caches for UNMAP-only registers).
> -     */
> -    vtd_address_space_unmap(vtd_as, n);

IIUC this is needed to satisfy current replay() semantics:

    /**
     * @replay:
     *
     * Called to handle memory_region_iommu_replay().
     *
     * The default implementation of memory_region_iommu_replay() is to
     * call the IOMMU translate method for every page in the address space
     * with flag == IOMMU_NONE and then call the notifier if translate
     * returns a valid mapping. If this method is implemented then it
     * overrides the default behaviour, and must provide the full semantics
     * of memory_region_iommu_replay(), by calling @notifier for every
     * translation present in the IOMMU.

The problem is that vtd_page_walk() currently notifies only on page
changes by default, so we'll send all the MAP events only if we unmap
everything first.
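
To illustrate with a simplified sketch (the helper below is mine, not
the exact shape of the code in hw/i386/intel_iommu.c): the walker
consults the per-AS IOVA tree and suppresses MAP notifications for
ranges already recorded, which is why replay only reports everything
if the records were dropped first:

    /*
     * Sketch of the change-only notification: a MAP is reported and
     * recorded only when the range is not yet in the IOVA tree, so
     * without a prior unmap-all the replay would be partial.
     */
    static void walk_notify_map(VTDAddressSpace *as, IOMMUNotifier *n,
                                IOMMUTLBEvent *event)
    {
        DMAMap target = {
            .iova = event->entry.iova,
            .size = event->entry.addr_mask,    /* inclusive */
        };

        if (iova_tree_find(as->iova_tree, &target)) {
            return;    /* unchanged page, no notification */
        }
        iova_tree_insert(as->iova_tree, &target);
        memory_region_notify_iommu_one(n, event);
    }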

I assumed it was not a major issue with or without it before, because
previously AFAIU the major path to trigger this was someone hot-plugging
a vfio-pci device into an existing guest IOMMU domain, so the
unmap_all() was indeed a no-op.  However, at the semantics level it
seems unmap_all() is still needed.

The other thing is that when looking at the new code I found we actually
extended replay() to also be used in VFIO dirty tracking, in
vfio_sync_dirty_bitmap().  For that path it may already be broken if we
unmap_all(), because AFAIU log_sync() can be called from the migration
thread at any time during DMA.  That means the device can be doing DMA
while the IOMMU pgtable is quickly erased and rebuilt here, so the DMA
could fail unexpectedly.  Copying Alex, Kirti and Neo.
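
For context, the path I mean looks roughly like this (trimmed from
hw/vfio/common.c; error handling and the section lookup dropped):

    /*
     * vfio_sync_dirty_bitmap(), trimmed: a temporary MAP notifier is
     * registered and replay() is expected to fire it for every live
     * translation.  Because replay() currently starts by erasing and
     * then rebuilding the shared mapping records, calling it from
     * log_sync() mid-DMA is what raises the concern above.
     */
    vfio_giommu_dirty_notifier gdn = { .giommu = giommu };
    int idx = memory_region_iommu_attrs_to_index(giommu->iommu_mr,
                                                 MEMTXATTRS_UNSPECIFIED);

    iommu_notifier_init(&gdn.n, vfio_iommu_map_dirty_notify,
                        IOMMU_NOTIFIER_MAP,
                        section->offset_within_region,
                        int128_get64(llend), idx);
    memory_region_iommu_replay(giommu->iommu_mr, &gdn.n);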

Perhaps to fix it we'll need to teach the vtd pgtable walker to send
MAP events for all existing mappings without touching the IOVA tree at
all.
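
Purely as a hypothetical sketch (no such flag exists today), the
change-only hook sketched above could grow a mode that reports every
present mapping while leaving the IOVA tree untouched:

    /*
     * Hypothetical variant of walk_notify_map() above: notify_all
     * bypasses the IOVA tree, so a dirty-tracking replay() neither
     * erases nor rebuilds the shared mapping records.
     */
    static void walk_notify(VTDAddressSpace *as, IOMMUNotifier *n,
                            IOMMUTLBEvent *event, bool notify_all)
    {
        if (notify_all) {
            memory_region_notify_iommu_one(n, event);
            return;
        }
        walk_notify_map(as, n, event);    /* change-only path above */
    }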

> -
>      if (vtd_dev_to_context_entry(s, bus_n, vtd_as->devfn, &ce) == 0) {
>          trace_vtd_replay_ce_valid(s->root_scalable ? "scalable mode" :
>                                    "legacy mode",
> -- 
> 2.34.1
> 

-- 
Peter Xu