Forwarding to the alias.
Thanks,
Vinod

-------- Original Message --------
Subject:        Re: [RFC 0/7] Migration stats
Date:   Mon, 13 Aug 2012 15:20:10 +0200
From:   Juan Quintela <quint...@redhat.com>
Reply-To:       <quint...@redhat.com>
To:     Chegu Vinod <chegu_vi...@hp.com>
CC:     


[ snip ]

>> - Prints the real downtime that we have had


   Really, it prints the total downtime of the complete phase, but that
   downtime also includes the last ram_iterate phase.  I am working on
   fixing that one.

Good one.
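
Roughly what I have in mind, as a sketch (the names here are invented for
illustration, they are not the actual qemu code):

/* Sketch only: keep the time of the final RAM pass separate from the
 * total "guest stopped" downtime, so both can be reported.  now_ms()
 * and migration_complete_sketch() are made-up names. */
#include <stdint.h>
#include <time.h>

static int64_t now_ms(void)
{
    struct timespec ts;

    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec * 1000LL + ts.tv_nsec / 1000000LL;
}

static int64_t total_downtime_ms;   /* guest stopped -> migration done  */
static int64_t last_ram_pass_ms;    /* just the final ram_iterate phase */

static void migration_complete_sketch(void)
{
    int64_t stopped = now_ms();      /* guest CPUs stopped here */
    int64_t ram_start = now_ms();

    /* ... final ram_save pass and device state would go here ... */

    last_ram_pass_ms = now_ms() - ram_start;
    total_downtime_ms = now_ms() - stopped;
}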


[...]

What I want to know:

- Is there any stat that you want?  Now that we are here, adding a new
   one should be easy.



A few suggestions:

a) Total amount of time spent syncing up the dirty bitmap logs over the
total duration of the migration.

I can add that one, it is not difficult.  Notice that in the future I expect
to do the syncs in smaller chunks (but that is pie in the sky).
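
Something along these lines, just to show the bookkeeping
(sync_dirty_bitmap() is a placeholder, not the real function, and now_ms()
is as in the earlier sketch):

/* Sketch only: accumulate the wall-clock time spent syncing the dirty
 * bitmap across the whole migration. */
static int64_t dirty_sync_total_ms;
static int64_t dirty_sync_count;

static void timed_bitmap_sync(void)
{
    int64_t start = now_ms();

    sync_dirty_bitmap();                 /* placeholder for the real call */

    dirty_sync_total_ms += now_ms() - start;
    dirty_sync_count++;
}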

b) Actual [average?] bandwidth that was used, as compared to the
allocated bandwidth ...  (I want to know how folks are observing
near line rate on a 10Gig ... when I am not ...).

Printing the average bandwidth is easy.  The "hardware" one is difficult
to get from inside a single application.
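
For the average figure it is just total bytes over wall-clock time;
roughly this (the field names are invented, not the real ones):

/* Sketch only: average throughput over the whole migration, in Mbit/s,
 * from the total bytes put on the wire and the elapsed wall-clock time. */
static double average_bandwidth_mbps(uint64_t bytes_transferred,
                                     int64_t elapsed_ms)
{
    if (elapsed_ms <= 0) {
        return 0.0;
    }
    /* bytes -> bits (*8), per millisecond -> per second (*1000),
     * bits -> megabits (/1e6) */
    return (double)bytes_transferred * 8.0 * 1000.0 / 1e6 / (double)elapsed_ms;
}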




I think it would be useful to know the approximate amount of [host] cpu
time that got used up by the migration-related thread(s) and any
related host-side services (like servicing the I/O interrupts while
driving traffic through the network).  I assume there are alternate
methods to derive all these (and we don't need to overload the
migration stats?).

This one is not easy to do from inside qemu.  It is much easier to get
from the outside.  As far as I know, it is not easy to monitor cpu usage
from inside the very cpu that we want to measure.
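
From the outside the kernel already keeps per-thread accounting, e.g.
utime/stime in /proc/<pid>/task/<tid>/stat.  A rough sketch (no error
handling, and it assumes the thread name has no spaces):

/* Sketch only: cumulative user+system CPU time of one host thread,
 * read from /proc.  Fields 14 and 15 of .../stat are utime and stime
 * in clock ticks.  The %*s for the thread name breaks if the name
 * contains spaces; good enough for a sketch. */
#include <stdio.h>
#include <unistd.h>

static double thread_cpu_seconds(pid_t pid, pid_t tid)
{
    char path[64];
    unsigned long utime = 0, stime = 0;
    FILE *f;

    snprintf(path, sizeof(path), "/proc/%d/task/%d/stat", (int)pid, (int)tid);
    f = fopen(path, "r");
    if (!f) {
        return -1.0;
    }
    /* skip the first 13 fields, then read utime and stime */
    fscanf(f, "%*d %*s %*c %*d %*d %*d %*d %*d %*u %*u %*u %*u %*u %lu %lu",
           &utime, &stime);
    fclose(f);

    return (double)(utime + stime) / sysconf(_SC_CLK_TCK);
}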

Thanks for the comments, Juan.