On Thu, Jul 02, 2020 at 02:42:45PM +0200, Paolo Bonzini wrote:
> On 02/07/20 12:57, Roman Bolshakov wrote:
> > There's still a small chance of kick loss, on user-to-kernel border
> > between atomic_mb_set's just before the entry to hv_vcpu_run and just
> > after it.
>
> Good point, but we can fix it.
>
> > -static void dummy_signal(int sig)
> > +static void hvf_handle_ipi(int sig)
> >  {
> > +    CPUState *cpu = pthread_getspecific(hvf_cpu);
>
> You can use current_cpu here. If it's NULL, just return (it's a
> per-thread variable).
>
> > +    X86CPU *x86_cpu = X86_CPU(cpu);
> > +    CPUX86State *env = &x86_cpu->env;
> > +
> > +    if (!atomic_xchg(&env->hvf_in_guest, false)) {
>
> Here, thinking more about it, we need not write hvf_in_guest, so:
>
>     /* Write cpu->exit_request before reading env->hvf_in_guest. */
>     smp_mb();
>     if (!atomic_read(&env->hvf_in_guest)) {
>         ...
>     }
>
> > +        wvmcs(cpu->hvf_fd, VMCS_PIN_BASED_CTLS,
> > +              rvmcs(cpu->hvf_fd, VMCS_PIN_BASED_CTLS)
> > +              | VMCS_PIN_BASED_CTLS_VMX_PREEMPT_TIMER);
> > +    }
> >  }
> >
> >  int hvf_init_vcpu(CPUState *cpu)
> > @@ -631,7 +650,9 @@ int hvf_vcpu_exec(CPUState *cpu)
> >          return EXCP_HLT;
> >      }
> >
> > +    atomic_mb_set(&env->hvf_in_guest, true);
> >      hv_return_t r = hv_vcpu_run(cpu->hvf_fd);
> > +    atomic_mb_set(&env->hvf_in_guest, false);
>
> And here you can do instead:
>
>     atomic_set(&env->hvf_in_guest, true);
>     /* Read cpu->exit_request after writing env->hvf_in_guest. */
>     smp_mb();
>     if (atomic_read(&cpu->exit_request)) {
>         qemu_mutex_lock_iothread();
>         atomic_set(&env->hvf_in_guest, false);
>         return EXCP_INTERRUPT;
>     }
>     hv_return_t r = hv_vcpu_run(cpu->hvf_fd);
>     atomic_store_release(&env->hvf_in_guest, false);
>
> This matching "write A/smp_mb()/read B" and "write B/smp_mb()/read A" is
> a very common idiom for lock-free signaling between threads.

Hi Paolo,

Thanks for the feedback and the guidelines. I think I've got the idea:
exit_request is the way to record the fact of a kick request even if it
was sent outside of hv_vcpu_run().
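
To make sure I've got the pairing right, here's how I'd spell it out as
a standalone sketch. C11 atomics stand in for QEMU's
atomic_set()/atomic_read()/smp_mb(), and the names in_guest,
exit_request, kick_vcpu() and vcpu_run_once() are made up for
illustration, not taken from the patch:

    /*
     * Sketch of the paired "write A/smp_mb()/read B" and
     * "write B/smp_mb()/read A" idiom for lock-free kick delivery.
     */
    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>

    static atomic_bool in_guest;      /* stands in for env->hvf_in_guest */
    static atomic_bool exit_request;  /* stands in for cpu->exit_request */

    /* Kicker side: write B, full barrier, read A. */
    static void kick_vcpu(void)
    {
        atomic_store_explicit(&exit_request, true, memory_order_relaxed);
        /* Write exit_request before reading in_guest. */
        atomic_thread_fence(memory_order_seq_cst);
        if (atomic_load_explicit(&in_guest, memory_order_relaxed)) {
            /* vCPU may be in the guest: force a VM exit, e.g. by
             * arming the VMX preemption timer. */
            printf("kick: forcing a VM exit\n");
        }
        /* Otherwise the vCPU thread will see exit_request before entry. */
    }

    /* vCPU side: write A, full barrier, read B. */
    static int vcpu_run_once(void)
    {
        atomic_store_explicit(&in_guest, true, memory_order_relaxed);
        /* Read exit_request after writing in_guest. */
        atomic_thread_fence(memory_order_seq_cst);
        if (atomic_load_explicit(&exit_request, memory_order_relaxed)) {
            atomic_store_explicit(&in_guest, false, memory_order_relaxed);
            return -1;  /* would return EXCP_INTERRUPT in QEMU */
        }
        /* hv_vcpu_run(...) would go here. */
        atomic_store_explicit(&in_guest, false, memory_order_release);
        return 0;
    }

    int main(void)
    {
        kick_vcpu();        /* kick arrives before guest entry... */
        /* ...and is not lost: the vCPU bails out before entering. */
        printf("vcpu_run_once() = %d\n", vcpu_run_once());
        return 0;
    }

Since both sides put a full barrier between their store and their load,
at least one of them must observe the other's write: either the kicker
sees in_guest == true and forces a VM exit, or the vCPU thread sees
exit_request == true and bails out before entering the guest, so the
kick cannot be lost.

Best regards,
Roman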