Marcelo Tosatti wrote:
> On Mon, Jun 21, 2010 at 10:58:32PM +0200, Jan Kiszka wrote:
>> Jan Kiszka wrote:
>>> Marcelo Tosatti wrote:
>>>> Clear exit_request when iothread grabs the global lock.
>>>>
>>>> Signed-off-by: Marcelo Tosatti <mtosa...@redhat.com>
>>>>
>>>> diff --git a/cpu-exec.c b/cpu-exec.c
>>>> index 026980a..74cb973 100644
>>>> --- a/cpu-exec.c
>>>> +++ b/cpu-exec.c
>>>> @@ -236,10 +236,8 @@ int cpu_exec(CPUState *env1)
>>>>      asm("");
>>>>      env = env1;
>>>>
>>>> -    if (exit_request) {
>>>> +    if (exit_request)
>>>>          env->exit_request = 1;
>>>> -        exit_request = 0;
>>>> -    }
>>> Coding style...
>>>
>>>>
>>>>  #if defined(TARGET_I386)
>>>>      if (!kvm_enabled()) {
>>>> diff --git a/cpus.c b/cpus.c
>>>> index fcd0f09..ef1ab22 100644
>>>> --- a/cpus.c
>>>> +++ b/cpus.c
>>>> @@ -598,6 +598,7 @@ void qemu_mutex_lock_iothread(void)
>>>>          }
>>>>          qemu_mutex_unlock(&qemu_fair_mutex);
>>>>      }
>>>> +    exit_request = 0;
>>>>  }
>>>>
>>>>  void qemu_mutex_unlock_iothread(void)
>>>>
>>>>
>>> I looked into this a bit as well, and that's what I also have in my
>>> queue.
>>>
>>> But things are still widely broken: pause_all_vcpus and run_on_cpu, as
>>> there is no guarantee that all VCPUs regularly call into
>>> qemu_wait_io_event. Breakpoints don't work either, and not only in SMP
>>> mode.
>
> This fixes pause for me:
>
Partially. It caused regressions in SMP scheduling without the early loop
exit from my patch. I will break up my changes later today and post them
as a series.

> diff --git a/cpu-exec.c b/cpu-exec.c
> index c776605..0149da5 100644
> --- a/cpu-exec.c
> +++ b/cpu-exec.c
> @@ -238,7 +238,6 @@ int cpu_exec(CPUState *env1)
>
>      if (exit_request) {
>          env->exit_request = 1;
> -        exit_request = 0;
>      }
>
>  #if defined(TARGET_I386)
> diff --git a/cpus.c b/cpus.c
> index 826886c..14f7cfc 100644
> --- a/cpus.c
> +++ b/cpus.c
> @@ -403,6 +403,8 @@ static void qemu_wait_io_event_common(CPUState *env)
>
>  static void qemu_wait_io_event(CPUState *env)
>  {
> +    CPUState *e;
> +
>      while (!tcg_has_work())
>          qemu_cond_timedwait(env->halt_cond, &qemu_global_mutex, 1000);
>
> @@ -417,7 +419,9 @@ static void qemu_wait_io_event(CPUState *env)
>      qemu_mutex_unlock(&qemu_fair_mutex);
>
>      qemu_mutex_lock(&qemu_global_mutex);
> -    qemu_wait_io_event_common(env);
> +
> +    for (e = first_cpu; e != NULL; e = e->next_cpu)
> +        qemu_wait_io_event_common(e);
>  }
>
>  static void qemu_kvm_eat_signal(CPUState *env, int timeout)
> @@ -614,6 +618,7 @@ void qemu_mutex_lock_iothread(void)
>          }
>          qemu_mutex_unlock(&qemu_fair_mutex);
>      }
> +    exit_request = 0;
>  }
>
>  void qemu_mutex_unlock_iothread(void)
>
>
> Perhaps there is a similar problem with debugging (round robin
> in tcg_cpu_exec fails when there is a timer pending, and the
> iothread is not processing pending timers).
>

Frankly, I still can't explain the round-robin logic. What complicates the
situation is that it currently has to work in both threading modes. I
really think we need a proper time-slicing model, maybe with an early
yield if the guest executes some pause instruction, i.e. spins on a lock.

Jan