On Mon, Aug 5, 2019 at 8:34 AM Wei Yang <richardw.y...@linux.intel.com> wrote:
>
> On Fri, Aug 02, 2019 at 06:18:41PM +0800, Ivan Ren wrote:
> >From: Ivan Ren <ivan...@tencent.com>
> >
> >This patch fixes a multifd migration bug in the migration speed
> >calculation. The problem can be reproduced as follows (an HMP sketch of
> >these steps follows the list):
> >1. start a vm and apply heavy memory write stress so the vm cannot be
> >   successfully migrated to the destination
> >2. begin a migration with multifd
> >3. migrate for a long time [in practice, this can be measured by the
> >   number of transferred bytes]
> >4. migrate cancel
> >5. begin a new migration with multifd; the migration directly runs into
> >   the migration_completion phase
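> >
> >For example, the steps above via the HMP monitor on the source (the
> >destination URI here is illustrative):
> >
> >    (qemu) migrate_set_capability multifd on
> >    (qemu) migrate -d tcp:192.168.0.2:4444
> >    ... let it run for a while, watching "info migrate" ...
> >    (qemu) migrate_cancel
> >    (qemu) migrate -d tcp:192.168.0.2:4444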
> >
> >The reason is as follows:
> >
> >Migration updates the bandwidth and s->threshold_size in the function
> >migration_update_counters() after BUFFER_DELAY time:
> >
> >    current_bytes = migration_total_bytes(s);
> >    transferred = current_bytes - s->iteration_initial_bytes;
> >    time_spent = current_time - s->iteration_start_time;
> >    bandwidth = (double)transferred / time_spent;
> >    s->threshold_size = bandwidth * s->parameters.downtime_limit;
> >
> >In a multifd migration, the migration_total_bytes() function returns
> >qemu_ftell(s->to_dst_file) + ram_counters.multifd_bytes.
> >s->iteration_initial_bytes is initialized to 0 for every new migration,
> >but ram_counters is a global variable, so data from previous migrations
> >accumulates in it. If ram_counters.multifd_bytes is big enough, the
> >computed bandwidth (and hence s->threshold_size) is inflated, which can
> >make pending_size >= s->threshold_size false in migration_iteration_run
> >after the first migration_update_counters. A sketch of the fix follows.
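> >
> >Per the v3 changelog below, the initialisation is factored into an
> >update_iteration_initial_status() helper that takes the baseline from
> >the current totals instead of 0. A minimal sketch of that idea (the
> >helper name comes from this thread; the exact body is an assumption):
> >
> >    static void update_iteration_initial_status(MigrationState *s)
> >    {
> >        /*
> >         * Take the baseline from the *current* totals, not 0, so that
> >         * bytes accumulated in the global ram_counters by a previous
> >         * (cancelled) migration do not inflate the measured bandwidth
> >         * of a new migration.
> >         */
> >        s->iteration_start_time = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
> >        s->iteration_initial_bytes = migration_total_bytes(s);
> >        s->iteration_initial_pages = ram_get_total_transferred_pages();
> >    }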
> >
> >Signed-off-by: Ivan Ren <ivan...@tencent.com>
> >Reviewed-by: Juan Quintela <quint...@redhat.com>
> >Suggested-by: Wei Yang <richardw.y...@linux.intel.com>
> >---
> >v2->v3:
> >- fix a bug in the update_iteration_initial_status() function prototype
> >
>
> The code looks good. Have you verified it on this version?
Yes, I have verified this version.

> BTW, you didn't address the multifd count in this patch, right?

Yes.
Currently the multifd page count does no harm, so I think it's better to
optimize it in a separate patch to keep things clearer.

Thanks.
