> From: Yoshiaki Tamura [mailto:tamura.yoshi...@gmail.com]
>
> I think it's OK to have this feature by checking max_downtime == 0.
> But I'm wondering that if users type commands like:
>
> migrate_set_downtime 0
> migrate <url> # w/o -d
>
> it'll lock the monitor forever in most cases. So forcing users to
> set -d or automatically doing inside in case of max_downtime == 0
> seems better to me. Sorry if I'm missing the point...
>
> Yoshi

The suggested warmup phase implementation (never converging the migration
while max_downtime == 0) should not be treated as a special case that
requires implicitly setting -d or the like: we already end up with a locked
monitor today for any non-detached migration of a VM with a high memory
write rate once max_downtime is set small enough (such as 0).
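
To make the comparison concrete, a rough sketch of the two command flows
(the URL and downtime values are only placeholders; -w and migrate_end are
the commands this patch proposes and are not existing monitor commands):

  # Proposed warmup interface (per the patch description):
  (qemu) migrate -w tcp:dest-host:4444   # background warmup, monitor stays usable
  ...                                    # dirty pages are re-sent continuously
  (qemu) migrate_end                     # now attempt to converge and complete

  # The max_downtime == 0 variant discussed above:
  (qemu) migrate_set_downtime 0          # downtime 0 => migration never converges
  (qemu) migrate -d tcp:dest-host:4444   # -d detaches, otherwise the monitor blocks
  ...
  (qemu) migrate_set_downtime 0.03       # restore a realistic limit (assuming the
                                         # limit is re-read while migrating)

In both flows the monitor stays responsive only because the migration runs
detached; a non-detached migration with a tiny max_downtime blocks it just
the same, which is why the warmup case is not special in that respect.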
Aidan

> -----Original Message-----
> From: Yoshiaki Tamura [mailto:tamura.yoshi...@gmail.com]
> Sent: Friday, May 13, 2011 5:55 AM
> To: Isaku Yamahata
> Cc: Juan Quintela; Shribman, Aidan; Stefan Hajnoczi;
> qemu-devel@nongnu.org; ohmura....@lab.ntt.co.jp
> Subject: Re: [Qemu-devel] [PATCH] Add warmup phase for live
> migration of large memory apps
>
> 2011/5/12 Isaku Yamahata <yamah...@valinux.co.jp>:
> > On Thu, May 12, 2011 at 12:39:22PM +0200, Juan Quintela wrote:
> >> "Shribman, Aidan" <aidan.shrib...@sap.com> wrote:
> >> >> On Wed, May 11, 2011 at 8:58 AM, Shribman, Aidan
> >> >> <aidan.shrib...@sap.com> wrote:
> >> >> > From: Aidan Shribman <aidan.shrib...@sap.com>
> >> >> >
> >> >> > [PATCH] Add warmup phase for live migration of large memory apps
> >> >> >
> >> >> > By invoking "migrate -w <url>" we initiate a background
> >> >> > live-migration transferring of dirty pages continuously until
> >> >> > invocation of "migrate_end" which attempts to complete the live
> >> >> > migration operation.
> >> >>
> >> >> What is the purpose of this patch? How and when do I use it?
> >> >>
> >> >
> >> > The warmup patch adds none-converging background update of guest
> >> > memory during live-migration such that on request of live-migration
> >> > completion (via "migrate_end" command) we get much faster
> >> > response. This is especially needed when running a payload of large
> >> > enterprise applications which have high memory demands.
> >>
> >> We should integrate this with Kemari (Kemari is doing something like
> >> this, just that it has more requirements). Isaku, do you have any
> >> comments?
> >
> > Yochi and Kei are familiar with Kemari. Not me. Cced to them.
>
> I think it's OK to have this feature by checking max_downtime == 0.
> But I'm wondering that if users type commands like:
>
> migrate_set_downtime 0
> migrate <url> # w/o -d
>
> it'll lock the monitor forever in most cases. So forcing users to
> set -d or automatically doing inside in case of max_downtime == 0
> seems better to me. Sorry if I'm missing the point...
>
> Yoshi
>
> >
> >>
> >> BTW, what loads have you tested for this?
> >>
> >> if I setup an image with 1GB RAM and a DVD iso image, and do in the
> >> guest:
> >>
> >> while true; do find /media/cdrom -type f | xargs md5sum; done
> >>
> >> Migration never converges with current code (if you use more than 1GB
> >> memory, then all the DVD will be cached inside).
> >>
> >> So, I see this only useful for guests that are almost idle, and on that
> >> case, migration speed is not the bigger of your problems, no?
> >>
> >> Later, Juan.
> >>
> >
> > --
> > yamahata