There are a few things off in this logic; rewrite it. While at it, add
rich comments to explain each of the decisions.
Since this is a very sensitive path for migration, below is the list of
things changed, with their reasoning:

(1) Exact pending size is only needed for precopy, not postcopy

Fundamentally, it's because the "exact" version only does one more deep
sync to fetch the pending results, while in postcopy's case it's never
going to sync anything more than the estimate, as the VM on the source
is stopped.

(2) Do _not_ rely on threshold_size anymore to decide whether postcopy
should complete

threshold_size was calculated from the expected downtime and bandwidth
only during precopy, as an efficient way to decide when to switch over.
It's not sensible to rely on threshold_size in postcopy.

For precopy, if switchover is decided, the migration will complete
soon. That's not true for postcopy. Logically speaking, postcopy
should only complete the migration when all pending data is flushed.

Here it used to work because save_complete() used to implicitly contain
save_live_iterate() when there was still pending data. Even though that
looks benign, having RAM migrated in postcopy's save_complete() has
other bad side effects:

 (a) Since save_complete() handlers need to be run one at a time, it
     means while moving RAM there's no way to move other things (rather
     than round-robin iterating the vmstate handlers like what we do in
     the ITERABLE phase). Not an immediate concern, but it may stop
     working in the future when there's more than one iterable
     (e.g. vfio postcopy).

 (b) Postcopy recovery, unfortunately, only works during the ITERABLE
     phase. IOW, if src QEMU moves RAM during postcopy's
     save_complete() and the network fails, then it'll crash both
     QEMUs... OTOH if it failed during iteration it'll still be
     recoverable. IOW, this change should further reduce the window in
     which QEMU can split-brain and crash in extreme cases.

If we enable the ram_save_complete() tracepoints, we'll see this before
this patch:

  1267959@1748381938.294066:ram_save_complete dirty=9627, done=0
  1267959@1748381938.308884:ram_save_complete dirty=0, done=1

It means in this migration there are 9627 pages migrated at complete()
of the postcopy phase.

After this change, all the postcopy RAM should be migrated in the
iterable phase, rather than in save_complete():

  1267959@1748381938.294066:ram_save_complete dirty=0, done=0
  1267959@1748381938.308884:ram_save_complete dirty=0, done=1
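(Side note: the lines above look like output from a QEMU built with the
"log" trace backend, which prints events as "pid@timestamp:event ...".
Assuming such a build, one way to reproduce them is to enable the
tracepoint on the source QEMU, e.g.:

  qemu-system-x86_64 -trace 'ram_save_complete' ...

or toggle it at runtime from the HMP monitor with
"trace-event ram_save_complete on". Treat this as a sketch; the exact
mechanism depends on the trace backend the binary was configured with.)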
(3) Adjust when to decide to switch to postcopy

This shouldn't be super important; the movement makes sure there's only
one in_postcopy check, and then we are clear on what we do with the two
completely different use cases (precopy vs. postcopy).

(4) Trivial touch-up on the threshold_size comparison

Which changes:

  "(!pending_size || pending_size < s->threshold_size)"

into:

  "(pending_size <= s->threshold_size)"

The two forms only differ when pending_size is nonzero and exactly
equals threshold_size, in which case allowing completion is sensible
anyway.

Reviewed-by: Juraj Marcin <jmar...@redhat.com>
Signed-off-by: Peter Xu <pet...@redhat.com>
---
 migration/migration.c | 56 +++++++++++++++++++++++++++++++------------
 1 file changed, 41 insertions(+), 15 deletions(-)

diff --git a/migration/migration.c b/migration/migration.c
index e33e39ac74..1a26a4bfef 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -3436,33 +3436,59 @@ static MigIterateState migration_iteration_run(MigrationState *s)
     Error *local_err = NULL;
     bool in_postcopy = s->state == MIGRATION_STATUS_POSTCOPY_ACTIVE;
     bool can_switchover = migration_can_switchover(s);
+    bool complete_ready;
 
+    /* Fast path - get the estimated amount of pending data */
     qemu_savevm_state_pending_estimate(&must_precopy, &can_postcopy);
     pending_size = must_precopy + can_postcopy;
     trace_migrate_pending_estimate(pending_size, must_precopy, can_postcopy);
 
-    if (pending_size < s->threshold_size) {
-        qemu_savevm_state_pending_exact(&must_precopy, &can_postcopy);
-        pending_size = must_precopy + can_postcopy;
-        trace_migrate_pending_exact(pending_size, must_precopy, can_postcopy);
+    if (in_postcopy) {
+        /*
+         * Iterate in postcopy until all pending data flushed. Note that
+         * postcopy completion doesn't rely on can_switchover, because when
+         * POSTCOPY_ACTIVE it means switchover already happened.
+         */
+        complete_ready = !pending_size;
+    } else {
+        /*
+         * Exact pending reporting is only needed for precopy. Taking RAM
+         * as example, there'll be no extra dirty information after
+         * postcopy started, so ESTIMATE should always match with EXACT
+         * during postcopy phase.
+         */
+        if (pending_size < s->threshold_size) {
+            qemu_savevm_state_pending_exact(&must_precopy, &can_postcopy);
+            pending_size = must_precopy + can_postcopy;
+            trace_migrate_pending_exact(pending_size, must_precopy, can_postcopy);
+        }
+
+        /* Should we switch to postcopy now? */
+        if (must_precopy <= s->threshold_size &&
+            can_switchover && qatomic_read(&s->start_postcopy)) {
+            if (postcopy_start(s, &local_err)) {
+                migrate_set_error(s, local_err);
+                error_report_err(local_err);
+            }
+            return MIG_ITERATE_SKIP;
+        }
+
+        /*
+         * For precopy, migration can complete only if:
+         *
+         * (1) Switchover is acknowledged by destination
+         * (2) Pending size is no more than the threshold specified
+         *     (which was calculated from expected downtime)
+         */
+        complete_ready = can_switchover && (pending_size <= s->threshold_size);
     }
 
-    if ((!pending_size || pending_size < s->threshold_size) && can_switchover) {
+    if (complete_ready) {
         trace_migration_thread_low_pending(pending_size);
         migration_completion(s);
         return MIG_ITERATE_BREAK;
     }
 
-    /* Still a significant amount to transfer */
-    if (!in_postcopy && must_precopy <= s->threshold_size && can_switchover &&
-        qatomic_read(&s->start_postcopy)) {
-        if (postcopy_start(s, &local_err)) {
-            migrate_set_error(s, local_err);
-            error_report_err(local_err);
-        }
-        return MIG_ITERATE_SKIP;
-    }
-
     /* Just another iteration step */
     qemu_savevm_state_iterate(s->to_dst_file, in_postcopy);
     return MIG_ITERATE_RESUME;
-- 
2.49.0