On Tue, Jan 16, 2024 at 10:37:48AM -0300, Fabiano Rosas wrote:
> Peter Xu <pet...@redhat.com> writes:
> 
> > On Mon, Nov 27, 2023 at 05:25:55PM -0300, Fabiano Rosas wrote:
> >> Allow multifd to open file-backed channels. This will be used when
> >> enabling the fixed-ram migration stream format which expects a
> >> seekable transport.
> >> 
> >> The QIOChannel read and write methods will use the preadv/pwritev
> >> versions which don't update the file offset at each call so we can
> >> reuse the fd without re-opening for every channel.
> >> 
> >> Note that this is just setup code and multifd cannot yet make use of
> >> the file channels.
> >> 
> >> Signed-off-by: Fabiano Rosas <faro...@suse.de>
> >> ---
> >> - open multifd channels with O_WRONLY and no mode
> >> - stop cancelling migration and propagate error via qio_task
> >> ---
> >>  migration/file.c      | 47 +++++++++++++++++++++++++++++++++++++++++--
> >>  migration/file.h      |  5 +++++
> >>  migration/multifd.c   | 14 +++++++++++--
> >>  migration/options.c   |  7 +++++++
> >>  migration/options.h   |  1 +
> >>  migration/qemu-file.h |  1 -
> >>  6 files changed, 70 insertions(+), 5 deletions(-)
> >> 
> >> diff --git a/migration/file.c b/migration/file.c
> >> index 5d4975f43e..67d6f42da7 100644
> >> --- a/migration/file.c
> >> +++ b/migration/file.c
> >> @@ -17,6 +17,10 @@
> >>  
> >>  #define OFFSET_OPTION ",offset="
> >>  
> >> +static struct FileOutgoingArgs {
> >> +    char *fname;
> >> +} outgoing_args;
> >> +
> >>  /* Remove the offset option from @filespec and return it in @offsetp. */
> >>  
> >>  int file_parse_offset(char *filespec, uint64_t *offsetp, Error **errp)
> >> @@ -36,6 +40,42 @@ int file_parse_offset(char *filespec, uint64_t *offsetp, Error **errp)
> >>      return 0;
> >>  }
> >>  
> >> +static void qio_channel_file_connect_worker(QIOTask *task, gpointer opaque)
> >> +{
> >> +    /* noop */
> >> +}
> >> +
> >> +int file_send_channel_destroy(QIOChannel *ioc)
> >> +{
> >> +    if (ioc) {
> >> +        qio_channel_close(ioc, NULL);
> >> +        object_unref(OBJECT(ioc));
> >> +    }
> >> +    g_free(outgoing_args.fname);
> >> +    outgoing_args.fname = NULL;
> >> +
> >> +    return 0;
> >> +}
> >> +
> >> +void file_send_channel_create(QIOTaskFunc f, void *data)
> >> +{
> >> +    QIOChannelFile *ioc;
> >> +    QIOTask *task;
> >> +    Error *err = NULL;
> >> +    int flags = O_WRONLY;
> >> +
> >> +    ioc = qio_channel_file_new_path(outgoing_args.fname, flags, 0, &err);
> >> +
> >> +    task = qio_task_new(OBJECT(ioc), f, (gpointer)data, NULL);
> >> +    if (!ioc) {
> >> +        qio_task_set_error(task, err);
> >> +        return;
> >> +    }
> >> +
> >> +    qio_task_run_in_thread(task, qio_channel_file_connect_worker,
> >> +                           (gpointer)data, NULL, NULL);
> >
> > This is pretty weird.  This invokes a thread, but it'll run a noop.  It
> > seems meaningless to me.
> >
> 
> That's QIOTask weirdness, isn't it? It will run the worker in the thread,
> but it also schedules the completion function as a glib event. So that's
> when multifd_new_send_channel_async() will run. The crucial aspect here
> is that it gets dispatched by glib on the main loop. I'm just keeping
> the model, except that I don't have work to do during the "connection"
> phase.

The question is why we need that if "file:" can be done synchronously.
Please see below.
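
To recap, qio_task_run_in_thread() roughly does this (a simplified sketch
of io/task.c from memory, not the exact code):

void qio_task_run_in_thread(QIOTask *task,
                            QIOTaskWorker worker,
                            gpointer opaque,
                            GDestroyNotify destroy,
                            GMainContext *context)
{
    /*
     * 1) Spawn a thread running worker(task, opaque);
     * 2) when the worker returns, schedule an idle source in @context
     *    (the main loop when NULL) that invokes the completion
     *    QIOTaskFunc passed to qio_task_new() --
     *    multifd_new_send_channel_async() in this series.
     */
}

So even with a noop worker we still pay for one thread plus one main-loop
event just to deliver the ioc back.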

> 
> > I assume you wanted to keep using the same async model as the socket-typed
> > multifd, but I don't think that works anyway, because file open blocks at
> > qio_channel_file_new_path() so it's sync anyway.
> 
> It's async regarding multifd_channel_connect(). The connections will be
> happening while multifd_save_setup() continues execution, IIUC.

Yes.  But I'm wondering whether we can start to simplify at least the
"file:" side of this process.  We all know that we _may_ have been creating
too many threads, each doing very light work, which might not be needed.
We also haven't resolved the "how to kill a thread if migration is
cancelled while that thread is blocked in a syscall" issue.  We'd need to
start recording tids for every thread, and that will be a mess for sure
when there are tons of threads.

> 
> >
> > AFAICT we still share the code, as long as the file path properly invokes
> > multifd_channel_connect() after the iochannel is set up.
> >
> 
> I don't see the point in moving any of that logic into the URI
> implementation. We already have the TLS handshake code, which can also
> call multifd_channel_connect(), and that is a mess. IMO we should keep
> the interface between multifd and the frontends as boilerplate as
> possible.

Hmm, I don't think it's a mess, at least not multifd_channel_connect().
AFAICT it can be called from any context.

multifd_channel_connect() always creates yet another thread, whether it's
for the TLS handshake or one of the multifd send threads.

This series already treats file/socket differently here:

static void multifd_new_send_channel_create(gpointer opaque)
{
    if (migrate_to_file()) {
        file_send_channel_create(multifd_new_send_channel_async, opaque);
    } else {
        socket_send_channel_create(multifd_new_send_channel_async, opaque);
    }
}

What I am thinking is that it could be much simpler if
multifd_new_send_channel_create() created the multifd channels
synchronously here, then directly called multifd_channel_connect(), which
in turn will create threads for whatever purpose is needed.

When TLS is not enabled, I'd expect that with that change and with a
"file:" URI, all send threads will already be created by the time
multifd_save_setup() completes.

I think multifd_new_send_channel_create() can already take
"MultiFDSendParams *p" as a parameter, then:

static void multifd_new_send_channel_create(MultiFDSendParams *p)
{
    if (migrate_to_file()) {
        file_send_channel_create(p);
    } else {
        socket_send_channel_create(multifd_new_send_channel_async, p);
    }
}

Where file_send_channel_create() can call multifd_channel_connect()
directly once the ioc is created.
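
Something like the below, as a rough sketch only (I'm assuming
multifd_channel_connect() keeps its current (p, ioc, error) arguments and
gets exported to file.c; the error would still need the same treatment
multifd_new_send_channel_async() gives it today):

void file_send_channel_create(MultiFDSendParams *p)
{
    QIOChannelFile *fioc;
    Error *err = NULL;

    /* qio_channel_file_new_path() blocks anyway, so just open synchronously */
    fioc = qio_channel_file_new_path(outgoing_args.fname, O_WRONLY, 0, &err);

    /* on failure @fioc is NULL and @err is set */
    multifd_channel_connect(p, fioc ? QIO_CHANNEL(fioc) : NULL, err);
}

No QIOTask, no noop worker, and no extra thread for the "connection" phase.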

Would that work for us, and be much cleaner?

-- 
Peter Xu

