On Mon, Feb 07, 2011 at 02:03:50PM -0200, Marcelo Tosatti wrote:
> On Mon, Feb 07, 2011 at 08:12:55AM -0200, Marcelo Tosatti wrote:
> > > > One more thing I didn't mention on the email thread or on IRC is
> > > > that, last time I checked, qemu with io-thread was performing
> > > > significantly slower than non-io-thread builds. That was with
> > > > TCG emulation (not kvm). Somewhere between 5-10% slower, IIRC.
> >
> > Can you recall what the test was?
> >
> > > > Also, although -icount with iothread no longer deadlocks, icount
> > > > still sometimes performs incredibly slowly with the io-thread
> > > > (compared to non-io-thread qemu), in particular when not using
> > > > -icount auto but a fixed ticks-per-insn value. Sometimes it's so
> > > > slow I thought it had actually deadlocked, but no, it was
> > > > crawling :) I haven't had time to look at it any closer, but I
> > > > hope to soon.
>
> Edgar, please give the attached patch a try with a fixed icount value.
> The calculation for the next event makes no sense for the iothread
> timeout, only for vcpu context.
Thanks Marcelo, this patch fixes the problems I was seeing here.

Cheers

> diff --git a/cpus.c b/cpus.c
> index 9c50a34..2280db1 100644
> --- a/cpus.c
> +++ b/cpus.c
> @@ -748,7 +748,7 @@ static void qemu_tcg_wait_io_event(void)
>      CPUState *env;
>
>      while (!any_cpu_has_work())
> -        qemu_cond_timedwait(tcg_halt_cond, &qemu_global_mutex, 1000);
> +        qemu_cond_timedwait(tcg_halt_cond, &qemu_global_mutex, qemu_calculate_timeout());
>
>      qemu_mutex_unlock(&qemu_global_mutex);
>
> diff --git a/vl.c b/vl.c
> index 837be97..dbd81a1 100644
> --- a/vl.c
> +++ b/vl.c
> @@ -1323,7 +1323,7 @@ void main_loop_wait(int nonblocking)
>      if (nonblocking)
>          timeout = 0;
>      else {
> -        timeout = qemu_calculate_timeout();
> +        timeout = 1000;
>          qemu_bh_update_timeout(&timeout);
>      }
>
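For anyone hitting the same symptom, my reading of why the patch helps: the
icount-derived timeout only makes sense in the thread that advances emulated
time, so the TCG wait should use it while the iothread can fall back to a
fixed cap. Below is a rough standalone sketch of that asymmetry in plain C;
the helper names (next_virtual_deadline_ms and friends) are made up for
illustration and are not the actual qemu API.

/* Sketch only: hypothetical helpers, not the real qemu code. The point
 * is the asymmetry the patch restores: the TCG/vcpu thread must wake up
 * by the next virtual-timer deadline so emulated (icount) time keeps
 * advancing, while the iothread only services host I/O and can sleep
 * on a fixed cap. */

#include <stdint.h>
#include <stdio.h>

/* Hypothetical: milliseconds until the next pending virtual timer. */
static int64_t next_virtual_deadline_ms(void)
{
    return 10; /* stand-in; a real version would scan the timer queue */
}

/* vcpu/TCG side: sleeping past the deadline stalls emulated time,
 * which is what made fixed-icount runs crawl. */
static int64_t vcpu_wait_timeout_ms(void)
{
    int64_t deadline = next_virtual_deadline_ms();
    return deadline < 1000 ? deadline : 1000;
}

/* iothread side: no emulated clock to drive here, so a fixed upper
 * bound on the select() timeout is enough. */
static int64_t iothread_wait_timeout_ms(void)
{
    return 1000;
}

int main(void)
{
    printf("vcpu timeout: %lld ms, iothread timeout: %lld ms\n",
           (long long)vcpu_wait_timeout_ms(),
           (long long)iothread_wait_timeout_ms());
    return 0;
}

With the pre-patch code the roles were effectively swapped: the TCG wait
used a fixed 1000 ms while the iothread used the icount calculation, so a
fixed ticks-per-insn run could sleep far past the next deadline and appear
to hang while actually crawling.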