On Mon, Oct 23, 2023 at 05:36:00PM -0300, Fabiano Rosas wrote:
> Currently multifd does not need to have knowledge of pages on the
> receiving side because all the information needed is within the
> packets that come in the stream.
> 
> We're about to add support to fixed-ram migration, which cannot use
> packets because it expects the ramblock section in the migration file
> to contain only the guest pages data.
> 
> Add a pointer to MultiFDPages in the multifd_recv_state and use the
> pages similarly to what we already do on the sending side. The pages
> are used to transfer data between the ram migration code in the main
> migration thread and the multifd receiving threads.
> 
> Signed-off-by: Fabiano Rosas <faro...@suse.de>

If it'll be new code to maintain anyway, I think we don't necessarily
have to reuse the multifd structs, right?

Rather than introducing MultiFDPages_t on the recv side, can we allow
pages to be distributed in chunks of (ramblock, start_offset, end_offset)
tuples?  That'll be much more efficient than per-page.  We don't need page
granularity here on the recv side; we want to load chunks of mem fast.

We don't even need page granularity on the sender side (though so far
only I have cared about the perf there)..  and obviously the plan is to
even drop auto-pause, so the VM can be running there, which means the
sender must work per-page for now.  But on the recv side the VM must be
stopped until all RAM is loaded, so there's no such problem.  And since
we'll be introducing new code anyway, IMHO we can decide how to do this
even if we want to reuse multifd.

The main thread can assign these (ramblock, start_offset, end_offset)
jobs to the recv threads.  If a ramblock is small (e.g. 1M), assign it
as a whole to one thread anyway.  If a ramblock is >512MB, cut it into
slices and feed them to the multifd threads one by one.  All the rest
can stay the same.

Would that be better?  I would expect a measurable loading speed
difference with much larger chunks and those range-based tuples.

Thanks,

-- 
Peter Xu
