On Mon, Jul 22, 2024 at 02:59:05PM -0300, Fabiano Rosas wrote:
> Hi,
>
> In this v2 I took Peter's suggestion of keeping the channels' pointers
> and moving only the extra slot. The major changes are in patches 5 and
> 9. Patch 3 introduces the structure:
>
> typedef enum {
>     MULTIFD_PAYLOAD_NONE,
>     MULTIFD_PAYLOAD_RAM,
> } MultiFDPayloadType;
>
> struct MultiFDSendData {
>     MultiFDPayloadType type;
>     union {
>         MultiFDPages_t ram;
>     } u;
> };
>
> I added a NONE type so we can use it to tell when the channel has
> finished sending a packet, since we'll need to switch types between
> clients anyway. This avoids having to introduce a 'size', or 'free'
> variable.
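[A minimal sketch of the idea above, where the NONE type doubles as the
"slot is free" marker. This is not the series code: MultiFDPages_t is
stubbed and the helper names are illustrative assumptions.]

#include <stdbool.h>

/* Types as quoted above; MultiFDPages_t stubbed for the example. */
typedef struct { int dummy; } MultiFDPages_t;

typedef enum {
    MULTIFD_PAYLOAD_NONE,
    MULTIFD_PAYLOAD_RAM,
} MultiFDPayloadType;

typedef struct MultiFDSendData {
    MultiFDPayloadType type;
    union {
        MultiFDPages_t ram;
    } u;
} MultiFDSendData;

/* Illustrative helper: NONE means the slot holds no payload. */
static inline bool multifd_payload_empty(const MultiFDSendData *d)
{
    return d->type == MULTIFD_PAYLOAD_NONE;
}

/* Illustrative helper: a channel resets the type once it is done. */
static inline void multifd_send_data_clear(MultiFDSendData *d)
{
    /* Per-payload cleanup would be dispatched on d->type here. */
    d->type = MULTIFD_PAYLOAD_NONE;
}

[In this sketch a channel would call multifd_send_data_clear() once it
has sent a payload, and the producer side only needs
multifd_payload_empty() to know the slot is reusable, with no separate
'size' or 'free' field.]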
This at least looks better to me, thanks.

>
> WHAT'S MISSING:
>
> - The support for calling multifd_send() concurrently. Maciej has this
>   in his series so I didn't touch it.
>
> - A way of adding methods for the new payload type. Currently, the
>   compression methods are somewhat coupled with ram migration, so I'm
>   not sure how to proceed.

What is this one?  Why do the compression methods need a new payload
type?  Aren't they ram-typed?

>
> - Moving all the multifd ram code into multifd-ram.c after this^ is
>   sorted out.
>
> CI run: https://gitlab.com/farosas/qemu/-/pipelines/1381005020
>
> v1:
> https://lore.kernel.org/r/20240620212111.29319-1-faro...@suse.de
>
> First of all, apologies for the roughness of the series. I'm off for
> the next couple of weeks and wanted to put something together early
> for your consideration.

PS: I assume this is old content, or I'll envy you for how frequently
you can take days off!..

>
> This series is a refactoring (based on an earlier, off-list
> attempt[0]), aimed to remove the usage of the MultiFDPages_t type in
> the multifd core. If we're going to add support for more data types to
> multifd, we first need to clean that up.
>
> This time around this work was prompted by Maciej's series[1]. I see
> you're having to add a bunch of is_device_state checks to work around
> the rigidity of the code.
>
> Aside from the VFIO work, there is also the intent (coming back from
> Juan's ideas) to make multifd the default code path for migration,
> which will have to include the vmstate migration and anything else we
> put on the stream via QEMUFile.
>
> I have long since been bothered by having 'pages' sprinkled all over
> the code, so I might be coming at this with a bit of a narrow focus,
> but I believe in order to support more types of payloads in multifd,
> we need to first allow the scheduling at multifd_send_pages() to be
> independent of MultiFDPages_t. So here it is. Let me know what you
> think.
>
> (as I said, I'll be off for a couple of weeks, so feel free to
> incorporate any of this code if it's useful. Or to ignore it
> completely).
>
> CI run: https://gitlab.com/farosas/qemu/-/pipelines/1340992028
>
> 0- https://github.com/farosas/qemu/commits/multifd-packet-cleanups/
> 1- https://lore.kernel.org/r/cover.1718717584.git.maciej.szmigi...@oracle.com
>
> Fabiano Rosas (9):
>   migration/multifd: Reduce access to p->pages
>   migration/multifd: Pass in MultiFDPages_t to file_write_ramblock_iov
>   migration/multifd: Introduce MultiFDSendData
>   migration/multifd: Make MultiFDPages_t:offset a flexible array member
>   migration/multifd: Replace p->pages with an union pointer
>   migration/multifd: Move pages accounting into
>     multifd_send_zero_page_detect()
>   migration/multifd: Isolate ram pages packet data
>   migration/multifd: Don't send ram data during SYNC
>   migration/multifd: Replace multifd_send_state->pages with client data
>
>  migration/file.c              |   3 +-
>  migration/file.h              |   2 +-
>  migration/multifd-qpl.c       |  10 +-
>  migration/multifd-uadk.c      |   9 +-
>  migration/multifd-zero-page.c |   9 +-
>  migration/multifd-zlib.c      |   4 +-
>  migration/multifd-zstd.c      |   4 +-
>  migration/multifd.c           | 239 +++++++++++++++++++++-------------
>  migration/multifd.h           |  37 ++++--
>  migration/ram.c               |   1 +
>  10 files changed, 201 insertions(+), 117 deletions(-)
>
>
> base-commit: a7ddb48bd1363c8bcdf42776d320289c42191f01
> --
> 2.35.3
>

--
Peter Xu
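[Coming back to the cover letter's point about making the scheduling at
multifd_send_pages() independent of MultiFDPages_t: below is a toy
sketch of a payload-agnostic hand-off, not the actual series code.
FakeChannel and fake_multifd_send() are made-up names; the point is
only that the scheduling side never looks inside the union.]

#include <stdbool.h>

/* Same MultiFDSendData shape as quoted in the cover letter (stubbed). */
typedef struct { int dummy; } MultiFDPages_t;
typedef enum { MULTIFD_PAYLOAD_NONE, MULTIFD_PAYLOAD_RAM } MultiFDPayloadType;
typedef struct {
    MultiFDPayloadType type;
    union { MultiFDPages_t ram; } u;
} MultiFDSendData;

/* Simplified stand-in for a send channel: just its payload slot. */
typedef struct {
    MultiFDSendData *data;
} FakeChannel;

/*
 * Payload-agnostic hand-off: the caller passes in a filled buffer and
 * gets the channel's empty one back, so this code never needs to know
 * whether the payload is ram, device state, or anything else.
 */
static bool fake_multifd_send(FakeChannel *p, MultiFDSendData **send_data)
{
    if (p->data->type != MULTIFD_PAYLOAD_NONE) {
        return false;           /* channel still busy with a payload */
    }

    MultiFDSendData *tmp = p->data;
    p->data = *send_data;       /* hand the filled buffer to the channel */
    *send_data = tmp;           /* caller gets an empty buffer to refill */
    return true;
}

[The caller keeps refilling whatever buffer it gets back from the swap;
since only the NONE check and the pointer exchange happen here, ram,
device state, or any future payload type could be scheduled by the same
code.]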