On 16/12/2024 09:46, Avihai Horon wrote:
> During DMA unmap with vIOMMU, vfio_devices_all_running_and_mig_active()
> is used to check whether a dirty page log sync of the unmapped pages is
> required. Such log sync is needed during migration pre-copy phase, and
> the current logic detects it by checking if migration is active and if
> the VFIO devices are running.
>
> However, recently there has been an effort to simplify the migration
> status API and reduce it to a single migration_is_running() function.
>
> To accommodate this, refactor vfio_devices_all_running_and_mig_active()
> logic so it won't use migration_is_active().
>
> Do it by modifying the logic to check if migration is running and dirty
> tracking has been started. This should be equivalent to the previous
> logic because when the guest is stopped there shouldn't be DMA unmaps
> coming from it. Also rename the function properly.
>
> Signed-off-by: Avihai Horon <avih...@nvidia.com>
> ---
>  include/hw/vfio/vfio-common.h |  3 +--
>  hw/vfio/common.c              | 28 ++++------------------------
>  hw/vfio/container.c           |  2 +-
>  3 files changed, 6 insertions(+), 27 deletions(-)
>
> diff --git a/include/hw/vfio/vfio-common.h b/include/hw/vfio/vfio-common.h
> index e0ce6ec3a9..c23ca34871 100644
> --- a/include/hw/vfio/vfio-common.h
> +++ b/include/hw/vfio/vfio-common.h
> @@ -296,8 +296,7 @@ bool vfio_migration_realize(VFIODevice *vbasedev, Error **errp);
>  void vfio_migration_exit(VFIODevice *vbasedev);
>
>  int vfio_bitmap_alloc(VFIOBitmap *vbmap, hwaddr size);
> -bool
> -vfio_devices_all_running_and_mig_active(const VFIOContainerBase *bcontainer);
> +bool vfio_dma_unmap_dirty_sync_needed(const VFIOContainerBase *bcontainer);
>  bool
>  vfio_devices_all_device_dirty_tracking(const VFIOContainerBase *bcontainer);
>  int vfio_devices_query_dirty_bitmap(const VFIOContainerBase *bcontainer,
> diff --git a/hw/vfio/common.c b/hw/vfio/common.c
> index a99796403e..81fba81a6f 100644
> --- a/hw/vfio/common.c
> +++ b/hw/vfio/common.c
> @@ -229,34 +229,14 @@ bool vfio_devices_all_device_dirty_tracking(const VFIOContainerBase *bcontainer)
>      return true;
>  }
>
> -/*
> - * Check if all VFIO devices are running and migration is active, which is
> - * essentially equivalent to the migration being in pre-copy phase.
> - */
> -bool
> -vfio_devices_all_running_and_mig_active(const VFIOContainerBase *bcontainer)
> +bool vfio_dma_unmap_dirty_sync_needed(const VFIOContainerBase *bcontainer)
>  {
> -    VFIODevice *vbasedev;
> -
> -    if (!migration_is_active()) {
> +    if (!migration_is_running()) {
>          return false;
>      }
>
> -    QLIST_FOREACH(vbasedev, &bcontainer->device_list, container_next) {
> -        VFIOMigration *migration = vbasedev->migration;
> -
> -        if (!migration) {
> -            return false;
> -        }
> -
> -        if (vfio_device_state_is_running(vbasedev) ||
> -            vfio_device_state_is_precopy(vbasedev)) {
> -            continue;
> -        } else {
> -            return false;
> -        }
Functionally, the change implies that even non-migratable VFIO devices behind IOMMUs with dirty tracking will still sync the DMA bitmap. I think this is OK, as it increases the coverage for calc-dirty-rate (per my comment on an earlier patch), such that if you try to get a dirty rate it includes the IOMMU invalidations marking the bits accordingly. Just stating the obvious in case this was unintended.

> -    }
> -    return true;
> +    return vfio_devices_all_device_dirty_tracking_started(bcontainer) ||
> +           bcontainer->dirty_pages_started;
>  }
>
>  static bool vfio_listener_skipped_section(MemoryRegionSection *section)
> diff --git a/hw/vfio/container.c b/hw/vfio/container.c
> index 9ccdb639ac..8107873534 100644
> --- a/hw/vfio/container.c
> +++ b/hw/vfio/container.c
> @@ -131,7 +131,7 @@ static int vfio_legacy_dma_unmap(const VFIOContainerBase *bcontainer,
>      int ret;
>      Error *local_err = NULL;
>
> -    if (iotlb && vfio_devices_all_running_and_mig_active(bcontainer)) {
> +    if (iotlb && vfio_dma_unmap_dirty_sync_needed(bcontainer)) {
>          if (!vfio_devices_all_device_dirty_tracking(bcontainer) &&
>              bcontainer->dirty_pages_supported) {
>              return vfio_dma_unmap_bitmap(container, iova, size, iotlb);

Reviewed-by: Joao Martins <joao.m.mart...@oracle.com>