Busy enterprise workloads hosted on large VMs tend to dirty memory faster than it can be transferred via live guest migration. Despite some good recent improvements (and the use of dedicated 10Gig NICs between hosts), live migration does NOT converge.
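
Roughly speaking, the lack of convergence shows up as the guest dirtying more data per iteration than the migration thread managed to send in that iteration, over several consecutive iterations. A minimal sketch of such a check follows; the identifiers and the threshold are illustrative only, not the exact detection code in this series:

/* Illustrative only: if the guest keeps dirtying more data per iteration
 * than the migration thread transferred, the migration will not converge
 * on its own, so throttling should kick in. */
#include <stdbool.h>
#include <stdint.h>

static unsigned int dirty_rate_high_cnt;

static bool detect_non_convergence(uint64_t bytes_dirtied_this_iter,
                                   uint64_t bytes_transferred_this_iter)
{
    if (bytes_dirtied_this_iter > bytes_transferred_this_iter) {
        /* The guest is winning the race against the transfer rate. */
        if (++dirty_rate_high_cnt >= 4) {
            dirty_rate_high_cnt = 0;
            return true;    /* time to start slowing the VCPUs down */
        }
    } else {
        dirty_rate_high_cnt = 0;
    }
    return false;
}
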
If a user chooses to force convergence of their migration via the new "auto-converge" migration capability, this change auto-detects the lack-of-convergence scenario and triggers a slowdown of the workload by explicitly disallowing the VCPUs from spending much time in the VM context. The migration thread can then catch up, which eventually leads to convergence in some "deterministic" amount of time. Yes, it does impact the performance of all the VCPUs, but in my observation that lasts only for a short duration; the migration enters stage 3 (the downtime phase) soon after that. No external trigger is required. (A rough sketch of the throttling idea is included after the diffstat below.)

Thanks to Juan and Paolo for their useful suggestions.

---

Changes from v4:
- incorporated feedback from Paolo.
- split into 3 patches.

Changes from v3:
- incorporated feedback from Paolo and Eric
- rebased to latest qemu.git

Changes from v2:
- incorporated feedback from Orit, Juan and Eric
- stop the throttling thread at the start of stage 3
- rebased to latest qemu.git

Changes from v1:
- rebased to latest qemu.git
- added auto-converge capability (default off) - suggested by Anthony Liguori & Eric Blake.

Signed-off-by: Chegu Vinod <chegu_vi...@hp.com>

Chegu Vinod (3):
  Introduce async_run_on_cpu()
  Add 'auto-converge' migration capability
  Force auto-convergence of live migration

 arch_init.c                   | 68 +++++++++++++++++++++++++++++++++++++++++
 cpus.c                        | 29 +++++++++++++++++
 include/migration/migration.h |  6 +++
 include/qemu-common.h         |  1 +
 include/qom/cpu.h             | 10 ++++++
 migration.c                   | 10 ++++++
 qapi-schema.json              |  5 ++-
 7 files changed, 128 insertions(+), 1 deletions(-)
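
As promised above, here is a minimal sketch of how the slow-down can be wired up on top of the async_run_on_cpu() hook introduced in patch 1. Treat it as an illustration of the approach rather than the patch itself: apart from async_run_on_cpu(), the helper names, the ~30ms sleep and the per-CPU iteration macro are placeholders, and header locations vary across QEMU versions.

/* Sketch of the throttling idea: a work item is queued on every VCPU via
 * async_run_on_cpu(); when a VCPU services it, it briefly drops out of
 * guest execution and sleeps, letting the migration thread catch up. */
#include <glib.h>
#include "qemu-common.h"      /* async_run_on_cpu() declaration in this series */
#include "qemu/main-loop.h"   /* qemu_mutex_lock_iothread()/unlock_iothread() */
#include "qom/cpu.h"          /* CPUState */

static void mig_sleep_cpu(void *opaque)
{
    /* Runs in VCPU thread context.  Drop the big QEMU lock while
     * sleeping so the rest of QEMU keeps making progress. */
    qemu_mutex_unlock_iothread();
    g_usleep(30 * 1000);      /* ~30ms; the exact value is a tunable */
    qemu_mutex_lock_iothread();
}

static void mig_throttle_guest_down(void)
{
    CPUState *cpu;

    qemu_mutex_lock_iothread();
    /* The per-CPU iteration helper depends on the QEMU version
     * (e.g. qemu_for_each_cpu() at the time of this series). */
    CPU_FOREACH(cpu) {
        async_run_on_cpu(cpu, mig_sleep_cpu, NULL);
    }
    qemu_mutex_unlock_iothread();
}

Calling something like mig_throttle_guest_down() from the migration code whenever the non-convergence condition is detected keeps the VCPUs out of guest mode for part of each interval, which is what lets stage 3 be reached in a bounded amount of time.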