On Tue, May 03, 2016 at 04:16:43PM -0600, Chris Friesen wrote:
> On 05/03/2016 03:14 AM, Daniel P. Berrange wrote:
>
> >There are currently many options for live migration with QEMU that can
> >assist in completion
>
> <snip>
>
> >Given this I've spent the last week creating an automated test harness
> >for QEMU upstream which triggers migration with an extreme guest CPU
> >load and measures the performance impact of these features on the guest,
> >and whether the migration actually completes.
> >
> >I hope to be able to publish the results of this investigation this week,
> >which should facilitate us in deciding which is best to use for OpenStack.
> >The spoiler, though, is that all the options are pretty terrible, except
> >for post-copy.
>
> Just to be clear, it's not really CPU load that's the issue though, right?
>
> Presumably it would be more accurate to say that the issue is the rate at
> which unique memory pages are being dirtied, and the total number of dirty
> pages relative to your copy bandwidth.
>
> This probably doesn't change the results though... at a high enough dirty
> rate you either pause the VM to keep it from dirtying more memory, or you
> post-copy migrate and dirty the memory on the destination.
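[Editor's note: to make the quoted point concrete, here is a
back-of-the-envelope pre-copy convergence check. All numbers are
hypothetical; this is a sketch of the reasoning, not anything from
Daniel's harness.]

# Pre-copy live migration converges only while the guest dirties pages
# more slowly than the migration stream can copy them.

PAGE_SIZE = 4096                    # bytes per guest page

link_bandwidth = 1.25e9             # bytes/sec (~10 Gbit/s), assumed
dirty_rate_pages = 500_000          # pages/sec dirtied by the guest, assumed

copy_rate_pages = link_bandwidth / PAGE_SIZE   # ~305,000 pages/sec

if dirty_rate_pages < copy_rate_pages:
    print("pre-copy can converge: the dirty set shrinks each iteration")
else:
    # The dirty set grows at least as fast as it can be sent, so plain
    # pre-copy never reaches a small enough final pause.
    print("pre-copy cannot converge: throttle the guest, pause it, "
          "or switch to post-copy")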
Yes, that's correct - I should have been more explicit. A high rate of
dirtying memory implies high CPU load, but high CPU load does not imply a
high rate of dirtying memory. The stress test I used for benchmarking
produces a high rate of dirtying memory.

Regards,
Daniel
--
|: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org -o- http://virt-manager.org :|
|: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|
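[Editor's note: for illustration, a minimal sketch of the kind of
page-dirtying workload such a stress test runs. This is an assumed
reconstruction, not Daniel's actual harness; the buffer size is
arbitrary, and the loop runs until killed.]

import time

PAGE_SIZE = 4096
BUF_SIZE = 1 << 30          # 1 GiB working set, assumed

buf = bytearray(BUF_SIZE)
value = 0

while True:
    value = (value + 1) & 0xFF
    # Writing a single byte per page is enough to mark the whole page
    # dirty in the hypervisor's dirty-page tracking, so this maximizes
    # the dirty-page rate rather than raw CPU load.
    for off in range(0, BUF_SIZE, PAGE_SIZE):
        buf[off] = value
    time.sleep(0)           # yield briefly; remove to dirty even faster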