Hi, Derek and Chen

ram_bulk_stage is false by default before Hailiang's patch. For COLO, it does
not seem to be used, so I think there is no need to reset it to true.
Thanks,
Lei

From: Derek Su <dere...@qnap.com>
Sent: Tuesday, September 22, 2020 11:48 AM
To: Zhang, Chen <chen.zh...@intel.com>
Cc: qemu-devel <qemu-devel@nongnu.org>; Rao, Lei <lei....@intel.com>; zhang.zhanghaili...@huawei.com; quint...@redhat.com; dgilb...@redhat.com
Subject: Re: [PATCH v1 1/1] COLO: only flush dirty ram pages from colo cache

Hi, Chen

Sure. BTW, I just went through Lei's patch. ram_bulk_stage might need to be
reset to true after stopping the COLO service, as in my patch. What is your
opinion?

Thanks.

Best regards,
Derek

Zhang, Chen <chen.zh...@intel.com> wrote on Tuesday, September 22, 2020 at 11:41 AM:

Hi Derek and Lei,

It looks the same as Lei's patch: [PATCH 2/3] Reduce the time of checkpoint for COLO
Can you discuss merging them into one patch?

Thanks
Zhang Chen

From: Derek Su <dere...@qnap.com>
Sent: Tuesday, September 22, 2020 11:31 AM
To: qemu-devel <qemu-devel@nongnu.org>
Cc: zhang.zhanghaili...@huawei.com; quint...@redhat.com; dgilb...@redhat.com; Zhang, Chen <chen.zh...@intel.com>
Subject: Re: [PATCH v1 1/1] COLO: only flush dirty ram pages from colo cache

Hello, all

Ping...

Regards,
Derek Su

Derek Su <dere...@qnap.com> wrote on Thursday, September 10, 2020 at 6:47 PM:

On the secondary side, colo_flush_ram_cache() calls
migration_bitmap_find_dirty() to find the dirty pages and flush them to the
host. But ram_state's ram_bulk_stage flag is always enabled on the secondary
side, so the whole RAM is copied instead of only the dirty pages.

Here, ram_bulk_stage is disabled on the secondary side in the preparation of
the COLO incoming process to avoid flushing all RAM pages.

Signed-off-by: Derek Su <dere...@qnap.com>
---
 migration/colo.c |  6 +++++-
 migration/ram.c  | 10 ++++++++++
 migration/ram.h  |  3 +++
 3 files changed, 18 insertions(+), 1 deletion(-)

diff --git a/migration/colo.c b/migration/colo.c
index ea7d1e9d4e..6e644db306 100644
--- a/migration/colo.c
+++ b/migration/colo.c
@@ -844,6 +844,8 @@ void *colo_process_incoming_thread(void *opaque)
         return NULL;
     }
 
+    colo_disable_ram_bulk_stage();
+
     failover_init_state();
 
     mis->to_src_file = qemu_file_get_return_path(mis->from_src_file);
@@ -873,7 +875,7 @@ void *colo_process_incoming_thread(void *opaque)
         goto out;
     }
 #else
-        abort();
+    abort();
 #endif
     vm_start();
     trace_colo_vm_state_change("stop", "run");
@@ -924,6 +926,8 @@ out:
         qemu_fclose(fb);
     }
 
+    colo_enable_ram_bulk_stage();
+
     /* Hope this not to be too long to loop here */
     qemu_sem_wait(&mis->colo_incoming_sem);
     qemu_sem_destroy(&mis->colo_incoming_sem);
diff --git a/migration/ram.c b/migration/ram.c
index 76d4fee5d5..65e9b12058 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -3357,6 +3357,16 @@ static bool postcopy_is_running(void)
     return ps >= POSTCOPY_INCOMING_LISTENING && ps < POSTCOPY_INCOMING_END;
 }
 
+void colo_enable_ram_bulk_stage(void)
+{
+    ram_state->ram_bulk_stage = true;
+}
+
+void colo_disable_ram_bulk_stage(void)
+{
+    ram_state->ram_bulk_stage = false;
+}
+
 /*
  * Flush content of RAM cache into SVM's memory.
  * Only flush the pages that be dirtied by PVM or SVM or both.
diff --git a/migration/ram.h b/migration/ram.h
index 2eeaacfa13..c1c0ebbe0f 100644
--- a/migration/ram.h
+++ b/migration/ram.h
@@ -69,4 +69,7 @@ void colo_flush_ram_cache(void);
 void colo_release_ram_cache(void);
 void colo_incoming_start_dirty_log(void);
 
+void colo_enable_ram_bulk_stage(void);
+void colo_disable_ram_bulk_stage(void);
+
 #endif
--
2.25.1
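
As an aside for anyone reading along, below is a standalone, simplified sketch (plain C, not the actual QEMU code) of the behaviour the commit message describes. The find_dirty() helper, the ram_state struct, and the page counts are made up for illustration; the real logic lives in migration_bitmap_find_dirty() and colo_flush_ram_cache(). The point is only that while the bulk-stage flag is set, the dirty-page lookup does not consult the dirty bitmap, so a flush walks every page of the colo cache.

/*
 * Standalone sketch (not QEMU code): with the bulk-stage flag set, the
 * dirty-page lookup ignores the bitmap and treats every page as dirty,
 * so a flush copies the whole RAM cache instead of only the dirty pages.
 */
#include <stdbool.h>
#include <stdio.h>

#define NUM_PAGES 8UL

/* Hypothetical stand-in for the relevant bit of RAMState. */
struct ram_state {
    bool ram_bulk_stage;
};

/*
 * Rough analogue of migration_bitmap_find_dirty(): return the index of
 * the next page to flush, or NUM_PAGES when nothing is left.
 */
static unsigned long find_dirty(const struct ram_state *rs,
                                const bool *dirty_bitmap,
                                unsigned long start)
{
    if (rs->ram_bulk_stage) {
        /* Bulk stage: every remaining page is assumed dirty. */
        return start;
    }
    for (unsigned long i = start; i < NUM_PAGES; i++) {
        if (dirty_bitmap[i]) {
            return i;
        }
    }
    return NUM_PAGES;
}

/* Count how many pages a flush would copy for a given flag setting. */
static unsigned long count_flushed(const struct ram_state *rs,
                                   const bool *dirty_bitmap)
{
    unsigned long flushed = 0;

    for (unsigned long p = find_dirty(rs, dirty_bitmap, 0); p < NUM_PAGES;
         p = find_dirty(rs, dirty_bitmap, p + 1)) {
        flushed++;
    }
    return flushed;
}

int main(void)
{
    /* Pretend only pages 2 and 5 were dirtied by PVM or SVM. */
    bool dirty[NUM_PAGES] = { false, false, true, false, false, true };
    struct ram_state bulk = { .ram_bulk_stage = true };
    struct ram_state incremental = { .ram_bulk_stage = false };

    printf("ram_bulk_stage=true : flushed %lu of %lu pages\n",
           count_flushed(&bulk, dirty), NUM_PAGES);
    printf("ram_bulk_stage=false: flushed %lu of %lu pages\n",
           count_flushed(&incremental, dirty), NUM_PAGES);
    return 0;
}

Built with a plain C compiler, the sketch reports 8 of 8 pages flushed with the flag set versus 2 of 8 with it cleared, which is the difference the patch is after.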