* Juan Quintela (quint...@redhat.com) wrote:
> We have to flush() the QEMUFile because now we send very little data
> through that channel.
I think this makes sense, so

Reviewed-by: Dr. David Alan Gilbert <dgilb...@redhat.com>

However, I think again it could really be folded into an earlier patch,
except for the fflushes themselves, which do make sense here.

Dave

> Signed-off-by: Juan Quintela <quint...@redhat.com>
> ---
>  migration/ram.c | 11 +++--------
>  1 file changed, 3 insertions(+), 8 deletions(-)
>
> diff --git a/migration/ram.c b/migration/ram.c
> index 23203756b7..f5cff2eb59 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -1634,20 +1634,12 @@ static int ram_multifd_page(RAMState *rs,
>                              PageSearchStatus *pss,
>                              bool last_stage)
>  {
>      int pages;
> -    uint8_t *p;
>      RAMBlock *block = pss->block;
>      ram_addr_t offset = pss->page << TARGET_PAGE_BITS;
>
> -    p = block->host + offset;
> -
>      pages = save_zero_page(rs, block, offset);
>      if (pages == -1) {
> -        ram_counters.transferred +=
> -            save_page_header(rs, rs->f, block,
> -                             offset | RAM_SAVE_FLAG_PAGE);
>          multifd_queue_page(block, offset);
> -        qemu_put_buffer(rs->f, p, TARGET_PAGE_SIZE);
> -        ram_counters.transferred += TARGET_PAGE_SIZE;
>          pages = 1;
>          ram_counters.normal++;
>      }
> @@ -2869,6 +2861,7 @@ static int ram_save_setup(QEMUFile *f, void *opaque)
>
>      multifd_send_sync_main();
>      qemu_put_be64(f, RAM_SAVE_FLAG_EOS);
> +    qemu_fflush(f);
>
>      return 0;
>  }
> @@ -2946,6 +2939,7 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
>          multifd_send_sync_main();
>  out:
>      qemu_put_be64(f, RAM_SAVE_FLAG_EOS);
> +    qemu_fflush(f);
>      ram_counters.transferred += 8;
>
>      ret = qemu_file_get_error(f);
> @@ -2999,6 +2993,7 @@ static int ram_save_complete(QEMUFile *f, void *opaque)
>
>      multifd_send_sync_main();
>      qemu_put_be64(f, RAM_SAVE_FLAG_EOS);
> +    qemu_fflush(f);
>
>      return 0;
>  }
> --
> 2.17.0
--
Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK