On Tue, Dec 01, 2009 at 10:51:30AM -0200, Glauber Costa wrote:
> This function is similar to qemu-kvm's on_vcpu mechanism. Totally synchronous,
> and guarantees that a given function will be executed at the specified vcpu.
> 
> This patch also converts usage within the breakpoints system
> 
> Signed-off-by: Glauber Costa <glom...@redhat.com>
> ---

> @@ -3436,8 +3441,7 @@ static int tcg_has_work(void);
>  
>  static pthread_key_t current_env;
>  
> -CPUState *qemu_get_current_env(void);
> -CPUState *qemu_get_current_env(void)
> +static CPUState *qemu_get_current_env(void)
>  {
>      return pthread_getspecific(current_env);
>  }
> @@ -3474,8 +3478,10 @@ static int qemu_init_main_loop(void)
>  
>  static void qemu_wait_io_event(CPUState *env)
>  {
> -    while (!tcg_has_work())
> +    while (!tcg_has_work()) {

tcg_has_work() checks all cpus, while for KVM the wait loop should check
only the current vcpu.
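Something along these lines (a sketch only; cpu_has_work() is a
hypothetical per-vcpu predicate, not code from this tree):

static int cpu_has_work(CPUState *env)
{
    /* Look only at this vcpu: halted state here, plus interrupt/exit
     * requests in a real implementation. */
    return !env->halted;
}

static void qemu_kvm_wait_io_event(CPUState *env)
{
    while (!cpu_has_work(env)) {
        qemu_flush_work(env);
        qemu_cond_timedwait(env->halt_cond, &qemu_global_mutex, 1000);
    }
}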

> +        qemu_flush_work(env);
>          qemu_cond_timedwait(env->halt_cond, &qemu_global_mutex, 1000);
> +    }

KVM vcpu threads should block SIGUSR1, set the in-kernel signal mask
with the KVM_SET_SIGNAL_MASK ioctl, and eat the signal in
qemu_wait_io_event (qemu_flush_work should run after eating the
signal), similarly to qemu-kvm's kvm_main_loop_wait.
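Roughly (a sketch modeled on qemu-kvm, not tree code: the helper names
are made up, SIGUSR1 stands in for SIG_IPI, and error handling is
elided):

#include <errno.h>
#include <pthread.h>
#include <signal.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

static void dummy_sigusr1(int sig)
{
    /* Never actually delivered; the signal stays blocked in userspace
     * and is consumed with sigtimedwait() below. */
}

/* Run once per vcpu thread: block SIGUSR1, then tell the kernel to
 * unblock it only while KVM_RUN executes, so a pending IPI forces an
 * -EINTR exit from the guest instead of being delivered to a handler. */
static int kvm_setup_vcpu_sigmask(int vcpu_fd)
{
    struct sigaction sa = { .sa_handler = dummy_sigusr1 };
    struct kvm_signal_mask *sigmask;
    sigset_t set;
    int r;

    sigaction(SIGUSR1, &sa, NULL);

    sigemptyset(&set);
    sigaddset(&set, SIGUSR1);
    pthread_sigmask(SIG_BLOCK, &set, NULL);

    /* In-kernel mask for KVM_RUN: the current mask minus SIGUSR1. */
    pthread_sigmask(SIG_BLOCK, NULL, &set);
    sigdelset(&set, SIGUSR1);

    sigmask = malloc(sizeof(*sigmask) + sizeof(set));
    if (!sigmask)
        return -ENOMEM;
    sigmask->len = 8;  /* size of the kernel's sigset_t */
    memcpy(sigmask->sigset, &set, sizeof(set));
    r = ioctl(vcpu_fd, KVM_SET_SIGNAL_MASK, sigmask);
    free(sigmask);
    return r;
}

/* Called from qemu_wait_io_event(): eat any pending SIGUSR1
 * synchronously, then flush queued work, in that order. */
static void qemu_kvm_eat_signals(CPUState *env)
{
    struct timespec ts = { 0, 0 };
    siginfo_t siginfo;
    sigset_t waitset;

    sigemptyset(&waitset);
    sigaddset(&waitset, SIGUSR1);
    while (sigtimedwait(&waitset, &siginfo, &ts) > 0)
        ; /* drain all pending IPIs */

    qemu_flush_work(env); /* after eating the signal, never before */
}

With that in place the IPI can't be lost: either the signal arrives
before KVM_RUN and stays pending until eaten here, or it interrupts
KVM_RUN directly.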

Otherwise a vcpu thread can lose a signal: say it handles SIGUSR1 while
not holding qemu_global_mutex, just before kernel entry, and then
enters the guest with the wakeup already consumed.

I think this is the source of the problems patch 8 attempts to fix.
