* zhanghailiang (zhang.zhanghaili...@huawei.com) wrote:
> Don't need to flush all VM's ram from cache, only
> flush the dirty pages since last checkpoint
>
> Cc: Juan Quintela <quint...@redhat.com>
> Signed-off-by: Li Zhijian <lizhij...@cn.fujitsu.com>
> Signed-off-by: Zhang Chen <zhangchen.f...@cn.fujitsu.com>
> Signed-off-by: zhanghailiang <zhang.zhanghaili...@huawei.com>
> ---
>  migration/ram.c | 10 ++++++++++
>  1 file changed, 10 insertions(+)
>
> diff --git a/migration/ram.c b/migration/ram.c
> index 6227b94..e9ba740 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -2702,6 +2702,7 @@ int colo_init_ram_cache(void)
>      migration_bitmap_rcu = g_new0(struct BitmapRcu, 1);
>      migration_bitmap_rcu->bmap = bitmap_new(ram_cache_pages);
>      migration_dirty_pages = 0;
> +    memory_global_dirty_log_start();
Shouldn't there be a stop somewhere? (Probably when you fail over to the
secondary and COLO stops?)

>      return 0;
>
> @@ -2750,6 +2751,15 @@ void colo_flush_ram_cache(void)
>      void *src_host;
>      ram_addr_t offset = 0;
>
> +    memory_global_dirty_log_sync();
> +    qemu_mutex_lock(&migration_bitmap_mutex);
> +    rcu_read_lock();
> +    QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
> +        migration_bitmap_sync_range(block->offset, block->used_length);
> +    }
> +    rcu_read_unlock();
> +    qemu_mutex_unlock(&migration_bitmap_mutex);

Again this might have some fun merging with Juan's recent changes - what's
really unusual about your set is that you're using this bitmap on the
destination; I suspect Juan's recent changes make that trickier.
Check 'Creating RAMState for migration' and 'Split migration bitmaps by
ramblock'.

Dave

>      trace_colo_flush_ram_cache_begin(migration_dirty_pages);
>      rcu_read_lock();
>      block = QLIST_FIRST_RCU(&ram_list.blocks);
> --
> 1.8.3.1
>
>
--
Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK