>>> On 16.08.17 at 07:14, <chao....@intel.com> wrote:
> @@ -100,6 +101,24 @@ void vmx_pi_per_cpu_init(unsigned int cpu)
>      spin_lock_init(&per_cpu(vmx_pi_blocking, cpu).lock);
>  }
> 
> +static void vmx_pi_add_vcpu(struct pi_blocking_vcpu *pbv,
> +                            struct vmx_pi_blocking_vcpu *vpbv)
> +{
> +    ASSERT(spin_is_locked(&vpbv->lock));

You realize this is only a very weak check for a non-recursive lock?
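As a minimal illustration (hypothetical code, not from the patch; it
assumes a ticket-style spinlock that does not record the owning CPU
for non-recursive use):

static DEFINE_SPINLOCK(lock);    /* non-recursive */
static unsigned int counter;     /* meant to be protected by 'lock' */

static void buggy_caller(void)
{
    /*
     * This caller never takes 'lock'.  Yet if any *other* CPU holds
     * it at this moment, spin_is_locked() returns true, the ASSERT()
     * does not fire, and the increment below silently races with the
     * legitimate lock holder.
     */
    ASSERT(spin_is_locked(&lock));
    counter++;
}

The assertion therefore only catches the case where nobody holds the
lock at all.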
> +    add_sized(&vpbv->counter, 1);
> +    ASSERT(read_atomic(&vpbv->counter));

Why add_sized() and read_atomic() when you hold the lock?

Jan
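P.S.: For illustration, a minimal sketch (hypothetical helper, not
the patch's actual code) of the plain-access variant the question
implies:

static void vmx_pi_add_vcpu_plain(struct vmx_pi_blocking_vcpu *vpbv)
{
    ASSERT(spin_is_locked(&vpbv->lock));
    /*
     * Every access to 'counter' happens with 'lock' held, so the
     * lock already provides all the atomicity and ordering needed;
     * plain accesses suffice.
     */
    vpbv->counter++;
    ASSERT(vpbv->counter);
}

add_sized() and read_atomic() would only buy something if some code
path accessed the field without holding the lock.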