On 8/6/25 12:27 PM, Peter Xu wrote:
On Tue, Jul 22, 2025 at 12:41:26PM +0000, Jonah Palmer wrote:
Iterative live migration for virtio-net sends an initial
VMStateDescription while the source is still active. Because data
continues to flow for virtio-net, the guest's avail index continues to
increment after last_avail_idx had already been sent. This causes the
destination to often see something like this from virtio_error():

VQ 0 size 0x100 Guest index 0x0 inconsistent with Host index 0xc: delta 0xfff4
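(Purely for reference: a minimal, standalone sketch of the 16-bit index arithmetic that produces the 0xfff4 delta above. The names are invented for the sketch; QEMU's actual check lives in the virtio load path and may differ in detail.)

#include <stdint.h>
#include <stdio.h>

/*
 * Sketch only: unsigned 16-bit subtraction of the migrated
 * last_avail_idx from the avail index read on the destination
 * wraps around, and the wrapped delta exceeds the VQ size.
 */
int main(void)
{
    uint16_t vq_size        = 0x100; /* VQ 0 size                      */
    uint16_t guest_avail    = 0x0;   /* avail->idx seen on the dest    */
    uint16_t last_avail_idx = 0xc;   /* value migrated from the source */

    uint16_t delta = (uint16_t)(guest_avail - last_avail_idx); /* 0xfff4 */

    if (delta > vq_size) {
        printf("VQ 0 size 0x%x Guest index 0x%x inconsistent with "
               "Host index 0x%x: delta 0x%x\n",
               vq_size, guest_avail, last_avail_idx, delta);
    }
    return 0;
}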

This is pretty much understandable, as vmstate_save() / vmstate_load() are,
IMHO, not designed to be used while the VM is running.

To me, it's still illegal (per the previous patch) to use vmstate_save_state()
while the VM is running, i.e. in a save_setup() phase.

Yeah, I understand where you're coming from. It just seemed too good to pass up as a way to send and receive the entire state of a device.

I felt that if I were to implement something similar for iterative migration only, I'd, more or less, be duplicating a lot of existing code and vmstate logic.


Some very high-level questions from a migration POV:

- Have we figured out why the downtime can be shrunk just by sending the
   vmstate twice?

   If we suspect it's the memory getting preheated, have we tried other ways
   to simply heat the memory up on the dest side?  For example, some form of
   mlock[all]() (a sketch of that idea follows below)?  IMHO it's pretty
   important we figure out where such an optimization comes from.

   I do remember we have a downtime issue with the number of max_vqueues that
   may cause post_load() to be slow; I wonder if there are other ways to
   improve it instead of vmstate_save(), especially in the setup phase.
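
(To illustrate the mlock[all]() idea: a standalone sketch only, not wired into QEMU. Whether locking/prefaulting dest-side RAM actually reproduces the effect of the extra vmstate pass is exactly the open question here; QEMU's existing mem-lock / prealloc knobs would be the natural place to try it for real.)

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>

/*
 * Standalone sketch: lock current and future mappings so that "guest
 * RAM" is faulted in and pinned up front rather than during downtime.
 * The malloc'ed buffer stands in for dest-side guest RAM.
 */
int main(void)
{
    size_t ram_size = 1UL << 30;        /* pretend 1 GiB of guest RAM */
    void *ram = malloc(ram_size);

    if (!ram) {
        return 1;
    }
    if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
        perror("mlockall");             /* needs CAP_IPC_LOCK or rlimit */
        free(ram);
        return 1;
    }
    /* MCL_CURRENT already populates existing mappings; the memset is
     * just to make the "preheated" state observable. */
    memset(ram, 0, ram_size);

    printf("dest memory locked and faulted in ahead of stop-and-copy\n");
    munlockall();
    free(ram);
    return 0;
}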


Yeah, I believe that the downtime shrinks on the second vmstate_load_state() due to preheated memory. But I'd like to stress that it's not my intention to resend the entire vmstate during the stop-and-copy phase when iterative migration is used. A future iteration of this series will include a more efficient approach that updates the destination with any deltas accumulated since the vmstate was sent during the iterative portion (instead of just resending the entire vmstate).

And yeah, there is an inefficiency regarding walking through VIRTIO_QUEUE_MAX (1024) VQs (twice with PCI) that I mentioned in another comment: https://lore.kernel.org/qemu-devel/0f5b804d-3852-4159-b151-308a57f1e...@oracle.com/

This might be better handled in a separate series, though, rather than as part of this one.
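
(Rough illustration of that inefficiency with made-up types; the real loops are in the virtio / virtio-pci post-load paths and differ in detail. The point is simply walking every possible VQ slot versus only the queues the device actually configured.)

#include <stdbool.h>
#include <stdio.h>

#define VIRTIO_QUEUE_MAX 1024   /* matches QEMU's constant of the same name */

struct fake_vq {
    bool enabled;
    /* ... per-queue state ... */
};

static void sync_vq_state(struct fake_vq *vq)
{
    (void)vq;   /* stand-in for the per-queue post-load work */
}

int main(void)
{
    static struct fake_vq vqs[VIRTIO_QUEUE_MAX];
    unsigned num_queues = 4;            /* e.g. a 2-queue-pair virtio-net */

    for (unsigned i = 0; i < num_queues; i++) {
        vqs[i].enabled = true;
    }

    /* Walks all 1024 slots (and twice, with the PCI proxy) ... */
    for (unsigned i = 0; i < VIRTIO_QUEUE_MAX; i++) {
        if (vqs[i].enabled) {
            sync_vq_state(&vqs[i]);
        }
    }

    /* ... when only the configured queues actually need the work. */
    for (unsigned i = 0; i < num_queues; i++) {
        sync_vq_state(&vqs[i]);
    }
    return 0;
}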

- Normally devices need an iterative phase because:

   (a) the device may contain a huge amount of data to transfer

       E.g. RAM and VFIO are good examples and fall into this category.

   (b) the device state is conceptually "iterable"

       RAM definitely is.  VFIO somehow mimicked that even though it is
       a streamed binary protocol...

   What's the answer for virtio-net here?  How large is the device state?
   Is this relevant to vDPA and real hardware (so virtio-net can look
   similar to VFIO at some point)?


The main motivation behind implementing iterative migration for virtio-net is really to improve the guest-visible downtime seen when migrating a vDPA device.

That is, by implementing iterative migration for virtio-net, we can see the state of the device early on and get a head start on work that's currently being done during the stop-and-copy phase. Doing that work ahead of time further decreases the time spent in this window.

This would include work such as sending down the CVQ commands for queue-pair creation (even more beneficial for multiqueue), RSS, filters, etc.
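
(To make that concrete, here's a rough, standalone sketch of one such CVQ command, the multiqueue queue-pair count, using the definitions from <linux/virtio_net.h>. How and when the series would actually issue it on the destination isn't shown here.)

#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <linux/virtio_net.h>   /* virtio_net_ctrl_hdr, VIRTIO_NET_CTRL_MQ, ... */

/*
 * Sketch only: build the CVQ "set number of queue pairs" command in a
 * buffer.  Endianness handling (virtqueue_pairs is little-endian on the
 * wire for modern devices) and actually submitting it on the device's
 * control VQ are glossed over.
 */
static size_t build_mq_pairs_cmd(uint8_t *buf, uint16_t pairs)
{
    struct virtio_net_ctrl_hdr hdr = {
        .class = VIRTIO_NET_CTRL_MQ,
        .cmd   = VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET,
    };
    struct virtio_net_ctrl_mq mq = {
        .virtqueue_pairs = pairs,
    };

    memcpy(buf, &hdr, sizeof(hdr));
    memcpy(buf + sizeof(hdr), &mq, sizeof(mq));
    return sizeof(hdr) + sizeof(mq);
}

int main(void)
{
    uint8_t buf[64];
    size_t len = build_mq_pairs_cmd(buf, 2);    /* e.g. 2 queue pairs */

    printf("CVQ multiqueue command is %zu bytes\n", len);
    return 0;
}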

I'm hoping to show this more explicitly in the next version of this RFC series that I'm working on now.

Thanks,


