On Mon, Feb 21, 2011 at 09:51:22AM +0100, Paolo Bonzini wrote:
> This series redoes the way time spent waiting for I/O is accounted to
> the vm_clock.
>
> The current code is advancing qemu_icount before waiting for I/O.
> Instead, after the patch qemu_icount is left aside (it is a pure
> instruction counter) and qemu_icount_bias is changed according to
> the actual amount of time spent in the wait. This is more
> accurate, and actually works in the iothread case as well.
>
> (I started this as an experiment while trying to understand what was
> going on. But it fixes the bug and does not break the no-iothread
> case, so hey...).
>
> Patch 1 is a cleanup to Edgar's commit 225d02c (Avoid deadlock whith
> iothread and icount, 2011-01-23).
Thanks, this one was a good cleanup, I've applied it.

> Patch 2 fixes another misunderstanding in the role of qemu_next_deadline.
>
> Patches 3 and 4 implement the actual new accounting algorithm.

Sorry, I don't know the code well enough to give any sensible feedback
on patches 2 - 4. I did test them with some of my guests and things seem
to be OK with them, but quite a bit slower. I saw around a 10 - 20%
slowdown with a CRIS guest and -icount 10. The slowdown might be related
to the issue with super-slow icount together with iothread (addressed by
Marcelo's iothread timeout patch).

> With these patches, iothread "-icount N" doesn't work when the actual
> execution speed cannot keep up with the requested speed; the execution
> in that case is not deterministic. It works when the requested speed
> is slow enough.

Sorry, would you mind explaining this a bit? For example, suppose I have
a machine and guest software that does no I/O: it runs the CPU and only
uses devices driven by virtual time (e.g. timers and peripherals that
compute stuff). Can I expect the guest (with a fixed icount speed,
"-icount N") to run deterministically regardless of host speed?

Cheers