Peter Xu <pet...@redhat.com> writes:

> On Fri, Jan 26, 2024 at 07:19:39PM -0300, Fabiano Rosas wrote:
>> +static MultiFDMethods multifd_socket_ops = {
>> +    .send_setup = multifd_socket_send_setup,
>> +    .send_cleanup = multifd_socket_send_cleanup,
>> +    .send_prepare = multifd_socket_send_prepare,
>
> Here it's named with "socket", however not all socket-based multifd
> migrations will go into this route, e.g., when zstd compression is enabled
> it will not go via this route, even if zstd also uses sockets as transport.
> From that pov, this may be slightly confusing.  Maybe it suits more to be
> called "socket_plain" / "socket_no_comp"?
>
> One step back, I had a feeling that the current proposal tried to provide a
> single ->ops to cover a model where we may need more than one layer of
> abstraction.
>
> Since it might be helpful to allow multifd send arbitrary data (e.g. for
> VFIO?  Avihai might have an answer there..), I'll try to even consider that
> into the picture.
>
> Let's consider the ultimate goal of multifd, where the simplest model could
> look like this in my mind (I'm only discussing sender side, but it'll be
> similar on recv side):
>
>                prepare()           send()
>   Input   ----------------> IOVs ------------> iochannels
>
> [I used prepare/send, but please think them as generic terms, not 100%
>  aligned with what we have with existing multifd_ops, or what you proposed
>  later]
>
> Here what are sure, IMHO, is:
>
>   - We always can have some input data to dump; I didn't use "guest pages"
>     just to say we may allow arbitrary data.  For any multifd user that
>     would like to dump arbitrary data, they can already provide IOVs, so
>     here input can be either "MultiFDPages_t" or "IOVs".

Or anything else, since the client code also has control over send(),
no? So it could give multifd a pointer to some memory and then use
send() to do whatever it wants with it. Multifd is just providing worker
threads and "scheduling".
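That point could be sketched roughly like this: multifd stays agnostic by holding only an opaque pointer and letting the client's send() decide what to do with it. All names here are invented for illustration, none of this is QEMU's actual API:

```c
/* Hypothetical sketch: multifd as worker threads + scheduling only.
 * It holds client data opaquely; the client's send() defines the wire
 * format. None of these names exist in QEMU. */
#include <assert.h>
#include <stddef.h>
#include <string.h>

typedef struct {
    void *opaque;  /* client data; "multifd" never interprets it */
    int (*send)(void *opaque, char *out, size_t len);
} MockChannelTask;

/* multifd's role reduces to dispatching the task on a channel thread */
static int mock_run_task(MockChannelTask *t, char *out, size_t len)
{
    return t->send(t->opaque, out, len);
}

/* example client: its send() just copies a string payload */
static int mock_copy_send(void *opaque, char *out, size_t len)
{
    strncpy(out, opaque, len - 1);
    out[len - 1] = '\0';
    return 0;
}
```

The point being that nothing in the dispatch layer depends on the data being pages, or even IOVs.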

Also note that multifd clients currently _do not_ provide IOVs. They
merely provide data to multifd (p->pages) and then convert that data
into IOVs at prepare(). This is different, because multifd currently
holds on to p->pages (and turns it into p->normal), which means the
client code does not need to store the data across iterations (in the
case of RAM, which is iterative).
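A minimal sketch of that current shape, with invented names that loosely mirror (but do not match) QEMU's real structs: multifd holds the client's page list, and prepare() is where the IOV conversion happens:

```c
/* Hypothetical, simplified sketch of the current model: the client hands
 * multifd a page list, and prepare() turns it into a scatter-gather list.
 * Field and type names are illustrative only. */
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <sys/uio.h>

#define MOCK_PAGE_SIZE 4096

typedef struct {
    void *block_base;     /* base address of the RAM block */
    size_t num;           /* number of offsets below */
    uint64_t offset[8];   /* page offsets into the block */
} MockPages;

typedef struct {
    MockPages *pages;     /* client-provided data, held by multifd */
    struct iovec iov[8];  /* filled in by prepare() */
    unsigned int iovs_num;
} MockParams;

/* prepare(): convert the held page list into IOVs */
static void mock_prepare(MockParams *p)
{
    p->iovs_num = 0;
    for (size_t i = 0; i < p->pages->num; i++) {
        p->iov[p->iovs_num].iov_base =
            (uint8_t *)p->pages->block_base + p->pages->offset[i];
        p->iov[p->iovs_num].iov_len = MOCK_PAGE_SIZE;
        p->iovs_num++;
    }
}
```

So the IOVs are a product of multifd itself today, not an input to it.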

>
>   - We may always want to have IOVs to represent the buffers at some point,
>     no matter what the input is
>
>   - We always flush the IOVs to iochannels; basically I want to say we can
>     always assume the last layer is connecting to QIOChannel APIs, while I
>     don't think there's outliers here so far, even if the send() may differ.
>
> Then _maybe_ it's clearer that we can have two layers of OPs?
>
>   - prepare(): it tells how the "input" will be converted into a
>     scatter-gather list of buffers.  All compression methods fall into this
>     afaiu.
>     This has _nothing_ to do on how the buffers will be sent.  For
>     arbitrary-typed input, this can already be a no-op since the IOVs
>     provided can already be passed over to send().
>
>   - send(): how to dump the IOVs to the iochannels.  AFAIU this is mostly
>     only useful for fixed-ram migrations.
>
> Would this be clearer, rather than keep using a single multifd_ops?

Sorry, I don't see how what you describe is any different than what we
have. And I don't see how any of this would mean more than one
multifd_ops. We already have multifd_ops->prepare() and
multifd_ops->send(). What am I missing?
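To make sure we're talking about the same thing, here's a minimal sketch of the split as I understand it, next to today's single table. All types and names are hypothetical, not the real QEMU structs:

```c
/* Hypothetical sketch contrasting one ops table vs. a two-layer split.
 * Nothing here matches QEMU's actual definitions. */
#include <assert.h>
#include <stddef.h>
#include <sys/uio.h>

/* today: one table covers both concerns */
typedef struct {
    void (*send_prepare)(struct iovec *iov, unsigned int *n);
    int (*send)(struct iovec *iov, unsigned int n);
} MockMultiFDMethods;

/* proposal, as I read it: prepare (compression) and send (transport)
 * vary independently, so zstd-over-socket = zstd prepare + plain send */
typedef struct {
    void (*prepare)(struct iovec *iov, unsigned int *n);
} MockPrepareOps;

typedef struct {
    int (*send)(struct iovec *iov, unsigned int n);
} MockSendOps;

static unsigned int mock_sent_iovs;

static void mock_plain_prepare(struct iovec *iov, unsigned int *n)
{
    (void)iov; (void)n;  /* no compression: IOVs pass through untouched */
}

static int mock_counting_send(struct iovec *iov, unsigned int n)
{
    (void)iov;
    mock_sent_iovs += n;  /* stands in for writing to the QIOChannel */
    return 0;
}

/* the per-channel loop composes whichever pair was selected */
static int mock_flush(MockPrepareOps *p, MockSendOps *s,
                      struct iovec *iov, unsigned int n)
{
    p->prepare(iov, &n);
    return s->send(iov, n);
}
```

If that is the shape intended, then my question stands: the existing single table already has both hooks, so the only change would be letting the two halves be selected independently.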
