Currently, memory operations like pinning may take a long time at the destination. They are done after the source of the migration is stopped and before the workload is resumed at the destination. This is a period where neither traffic can flow nor the VM workload can continue (downtime).

We can do better, as we know the memory layout of the guest RAM at the destination from the moment all devices are initialized. Moving that operation earlier allows QEMU to communicate the memory maps to the kernel while the workload is still running on the source, so Linux can start mapping and pinning them.
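To illustrate where the series ends up, here is a rough sketch of registering the memory listener from vhost_vdpa_init(). This is not the actual patch: the struct layout, the surrounding init code, and the exact placement of the listener_registered flag are simplified and inferred from the patch titles.

#include "qemu/osdep.h"
#include "hw/virtio/vhost.h"
#include "hw/virtio/vhost-vdpa.h"
#include "exec/address-spaces.h"

static int vhost_vdpa_init(struct vhost_dev *dev, void *opaque, Error **errp)
{
    struct vhost_vdpa *v = opaque;

    /* ... existing init: owner, backend features, backend capabilities ... */

    /*
     * Register the listener at init time instead of at device start, so
     * QEMU sends the guest memory maps to the kernel while the source is
     * still running and vhost-vDPA can pin the pages ahead of the
     * stop-and-copy phase.
     */
    memory_listener_register(&v->listener, &address_space_memory);
    v->listener_registered = true; /* new flag introduced by this series */

    return 0;
}

The intent of the flag is to let the start and reset paths detect that the listener is already registered, so they neither register it twice nor leave it registered across a device reset.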
As a small drawback, there is a window during initialization where QEMU cannot respond to QMP and similar requests. In testing, this window is about 0.2 seconds. It may be further reduced (or increased) depending on the vDPA driver and the platform hardware, and it is dominated by the cost of memory pinning. It matches the time that is moved out of the so-called downtime window.

The guest-visible downtime is measured as the elapsed trace time between the last vhost_vdpa_suspend on the source and the last vhost_vdpa_set_vring_enable_one on the destination; in other words, from "guest CPUs freeze" to the instant the final Rx/Tx queue-pair is able to start moving data.

Using ConnectX-6 Dx (MLX5) NICs in vhost-vDPA mode with 8 queue-pairs, the series reduces guest-visible downtime during back-to-back live migrations by more than half:

- 39G VM:  4.72s -> 2.09s (-2.63s, ~56% improvement)
- 128G VM: 14.72s -> 5.83s (-8.89s, ~60% improvement)

Future directions on top of this series may include moving more operations ahead of the migration downtime, like setting DRIVER_OK or performing actual iterative migration of virtio-net devices.

Comments are welcome.

This series takes a different approach from series [1]. As the title no longer reflects the changes, please refer to the previous series for its history.

This series is based on [2], which has already been merged.

[Jonah Palmer] This series was rebased after [3] was pulled in, as [3] was a prerequisite fix for this series.

v5:
---
* Update performance metrics following the changes in v4.

v4:
---
* Add memory listener unregistration to vhost_vdpa_reset_device.
* Remove memory listener unregistration from vhost_vdpa_reset_status.

v3:
---
* Rebase.

v2:
---
* Move the memory listener registration to the vhost_vdpa_set_owner function.
* Move the iova_tree allocation to net_vhost_vdpa_init.

v1 at https://lists.gnu.org/archive/html/qemu-devel/2024-01/msg02136.html.

[1] https://patchwork.kernel.org/project/qemu-devel/cover/20231215172830.2540987-1-epere...@redhat.com/
[2] https://lists.gnu.org/archive/html/qemu-devel/2024-01/msg05910.html
[3] https://lore.kernel.org/qemu-devel/20250217144936.3589907-1-jonah.pal...@oracle.com/

Eugenio Pérez (7):
  vdpa: check for iova tree initialized at net_client_start
  vdpa: reorder vhost_vdpa_set_backend_cap
  vdpa: set backend capabilities at vhost_vdpa_init
  vdpa: add listener_registered
  vdpa: reorder listener assignment
  vdpa: move iova_tree allocation to net_vhost_vdpa_init
  vdpa: move memory listener register to vhost_vdpa_init

 hw/virtio/vhost-vdpa.c         | 107 +++++++++++++++++++++------------
 include/hw/virtio/vhost-vdpa.h |  22 ++++++-
 net/vhost-vdpa.c               |  34 +----------
 3 files changed, 93 insertions(+), 70 deletions(-)

-- 
2.43.5