> -----Original Message-----
> From: Wu, Feng
> Sent: Monday, May 23, 2016 5:18 PM
> To: Jan Beulich <jbeul...@suse.com>
> Cc: andrew.coop...@citrix.com; dario.faggi...@citrix.com;
> george.dun...@eu.citrix.com; Tian, Kevin <kevin.t...@intel.com>; xen-
> de...@lists.xen.org; konrad.w...@oracle.com; k...@xen.org; Wu, Feng
> <feng...@intel.com>
> Subject: RE: [PATCH 3/3] VMX: Remove the vcpu from the per-cpu blocking list
> after domain termination
> 
> 
> 
> > -----Original Message-----
> > From: Jan Beulich [mailto:jbeul...@suse.com]
> > Sent: Monday, May 23, 2016 5:08 PM
> > To: Wu, Feng <feng...@intel.com>
> > Cc: andrew.coop...@citrix.com; dario.faggi...@citrix.com;
> > george.dun...@eu.citrix.com; Tian, Kevin <kevin.t...@intel.com>; xen-
> > de...@lists.xen.org; konrad.w...@oracle.com; k...@xen.org
> > Subject: RE: [PATCH 3/3] VMX: Remove the vcpu from the per-cpu blocking list
> > after domain termination
> >
> > >>> On 23.05.16 at 07:48, <feng...@intel.com> wrote:
> > >> From: Tian, Kevin
> > >> Sent: Monday, May 23, 2016 1:19 PM
> > >> > From: Wu, Feng
> > >> > Sent: Friday, May 20, 2016 4:54 PM
> > >> > --- a/xen/arch/x86/hvm/vmx/vmx.c
> > >> > +++ b/xen/arch/x86/hvm/vmx/vmx.c
> > >> > @@ -248,6 +248,36 @@ void vmx_pi_hooks_deassign(struct domain *d)
> > >> >      d->arch.hvm_domain.vmx.pi_switch_to = NULL;
> > >> >  }
> > >> >
> > >> > +static void vmx_pi_blocking_list_cleanup(struct domain *d)
> > >>
> > >> Is it more natural to move such cleanup under vcpu destroy?
> > >
> > > Yes, indeed it is more natural to add this cleanup when the vCPU is
> > > destroyed; however, the reason I add it here is that I still have some
> > > concerns about the timing.
> > > When the VM is destroyed, here is the calling path:
> > >
> > > - vmx_pi_hooks_deassign() -->
> > > ......
> > > - vmx_vcpu_destroy() -->
> > > ......
> > > - vmx_domain_destroy()
> > > ......
> > >
> > > As I replied in the previous mail, when we remove the vCPUs from the
> > > blocking list, there might be some _in-flight_ calls to the hooks, so
> > > I put the cleanup code in vmx_domain_destroy(), which is a bit farther
> > > from vmx_pi_hooks_deassign() and hence safer. If you have any other
> > > good ideas, I am all ears! :)
> >
> > Well, either there is a possible race (then moving the addition
> > later just reduces the chances, but doesn't eliminate it), or there
> > isn't (in which case Kevin's suggestion should probably be followed).
> 
> Yes, I agree: adding the cleanup code at the domain destroy point rather
> than at the vcpu destroy point just reduces the risk, but does not
> eliminate it. So far I don't have a perfect solution for this possible
> race condition.

After thinking more about this, I believe this race condition can be resolved
in the following way:
1. Define a per-vCPU flag, say, 'v->arch.hvm_vmx.pi_blocking_cleaned_up'
2. In vmx_pi_blocking_list_cleanup(), when we find the vCPU on a per-CPU
blocking list, set the flag to 1 after removing it
3. In vmx_vcpu_block(), add the following check:

     spin_lock_irqsave(pi_blocking_list_lock, flags);
+    if ( unlikely(v->arch.hvm_vmx.pi_blocking_cleaned_up == 1) )
+    {
+        /*
+         * The vCPU is about to be destroyed and has already been
+         * removed from the per-CPU blocking list if it was blocking;
+         * don't add it back to the list.
+         */
+        spin_unlock_irqrestore(pi_blocking_list_lock, flags);
+        return;
+    }
+
     old_lock = cmpxchg(&v->arch.hvm_vmx.pi_blocking.lock, NULL,
                        pi_blocking_list_lock);
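
For reference, below is a minimal sketch of what step 2 could look like.
The per-CPU 'vmx_pi_blocking' lock/list layout follows the existing code,
but the exact shape of the function and the new 'pi_blocking_cleaned_up'
field are assumptions on my side, not final code. The key point is that
the flag is written under the same per-CPU lock that vmx_vcpu_block()
takes, so the check added in step 3 cannot race with the removal:

static void vmx_pi_blocking_list_cleanup(struct domain *d)
{
    unsigned int cpu;

    for_each_online_cpu ( cpu )
    {
        struct vcpu *v, *tmp;
        unsigned long flags;
        spinlock_t *lock = &per_cpu(vmx_pi_blocking, cpu).lock;
        struct list_head *blocked_vcpus =
            &per_cpu(vmx_pi_blocking, cpu).list;

        spin_lock_irqsave(lock, flags);

        /* Remove every vCPU of the dying domain from this CPU's list. */
        list_for_each_entry_safe(v, tmp, blocked_vcpus,
                                 arch.hvm_vmx.pi_blocking.list)
        {
            if ( v->domain == d )
            {
                list_del(&v->arch.hvm_vmx.pi_blocking.list);
                ASSERT(v->arch.hvm_vmx.pi_blocking.lock == lock);
                v->arch.hvm_vmx.pi_blocking.lock = NULL;

                /* Tell vmx_vcpu_block() never to re-add this vCPU. */
                v->arch.hvm_vmx.pi_blocking_cleaned_up = 1;
            }
        }

        spin_unlock_irqrestore(lock, flags);
    }
}

Since both the write here and the read in vmx_vcpu_block() happen under
the same per-CPU lock, no extra barriers should be needed.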

Then we can follow Kevin's suggestion and move the cleanup call to
vmx_vcpu_destroy().

Any ideas?

Thanks,
Feng


> 
> Thanks,
> Feng
> 
> >
> > Jan

