>>> On 23.05.16 at 14:24, <feng...@intel.com> wrote:

> 
>> -----Original Message-----
>> From: Jan Beulich [mailto:jbeul...@suse.com]
>> Sent: Monday, May 23, 2016 7:11 PM
>> To: Wu, Feng <feng...@intel.com>
>> Cc: andrew.coop...@citrix.com; dario.faggi...@citrix.com;
>> george.dun...@eu.citrix.com; Tian, Kevin <kevin.t...@intel.com>; xen-
>> de...@lists.xen.org; konrad.w...@oracle.com; k...@xen.org 
>> Subject: RE: [PATCH 3/3] VMX: Remove the vcpu from the per-cpu blocking list
>> after domain termination
>> 
>> >>> On 23.05.16 at 12:35, <feng...@intel.com> wrote:
>> >> From: Wu, Feng
>> >> Sent: Monday, May 23, 2016 5:18 PM
>> >> > From: Jan Beulich [mailto:jbeul...@suse.com]
>> >> > Sent: Monday, May 23, 2016 5:08 PM
>> >> > To: Wu, Feng <feng...@intel.com>
>> >> > >>> On 23.05.16 at 07:48, <feng...@intel.com> wrote:
>> >> > > Yes, indeed it is more natural to add this call when the vcpu is
>> >> > > destroyed; however, the reason I add it here is that I still have
>> >> > > some concerns about the timing.
>> >> > > When the VM is destroyed, here is the calling path:
>> >> > >
>> >> > > - vmx_pi_hooks_deassign() -->
>> >> > > ......
>> >> > > - vmx_vcpu_destroy() -->
>> >> > > ......
>> >> > > - vmx_domain_destroy()
>> >> > > ......
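>> >> > >
>> >> > > Just to illustrate where the cleanup would sit in this path, a rough
>> >> > > sketch only (the real vmx_domain_destroy() of course does more than
>> >> > > this):
>> >> > >
>> >> > > static void vmx_domain_destroy(struct domain *d)
>> >> > > {
>> >> > >     struct vcpu *v;
>> >> > >
>> >> > >     /* The PI hooks were already torn down in vmx_pi_hooks_deassign(). */
>> >> > >     for_each_vcpu ( d, v )
>> >> > >         vmx_pi_blocking_list_cleanup(v);
>> >> > > }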
>> >> > >
>> >> > > As I replied in the previous mail, when we remove the vcpus from the
>> >> > > blocking list, there might be some _in-flight_ calls to the hooks, so
>> >> > > I put the cleanup code in vmx_domain_destroy(), which is a bit
>> >> > > farther from vmx_pi_hooks_deassign() and hence safer. If you have any
>> >> > > other good ideas, I am all ears! :)
>> >> >
>> >> > Well, either there is a possible race (then moving the addition
>> >> > later just reduces the chances, but doesn't eliminate it), or there
>> >> > isn't (in which case Kevin's suggestion should probably be followed).
>> >>
>> >> Yes, I agree: adding the cleanup code at the domain destroy point rather
>> >> than the vcpu destroy point just reduces the risk, but doesn't eliminate
>> >> it. So far I don't have a perfect solution to this possible race
>> >> condition.
>> >
>> > After more thinking about this, I think this race condition can be
>> > resolved in the following way:
>> > 1. Define a per-vCPU flag, say, 'v->arch.hvm_vmx.pi_blocking_cleaned_up'
>> > 2. In vmx_pi_blocking_list_cleanup(), when we find the vCPU on a
>> > blocking list, set the flag to 1 after removing it
>> > 3. In vmx_vcpu_block(), add the following check:
>> >
>> >      spin_lock_irqsave(pi_blocking_list_lock, flags);
>> > +    if ( unlikely(v->arch.hvm_vmx.pi_blocking_cleaned_up == 1) )
>> > +    {
>> > +        /*
>> > +         * The vcpu is about to be destroyed and, if it was blocking,
>> > +         * has already been removed from the per-CPU blocking list;
>> > +         * don't add it back.
>> > +         */
>> > +        spin_unlock_irqrestore(pi_blocking_list_lock, flags);
>> > +        return;
>> > +    }
>> > +
>> >      old_lock = cmpxchg(&v->arch.hvm_vmx.pi_blocking.lock, NULL,
>> >                         pi_blocking_list_lock);
>> >
>> > Then we can follow Kevin's suggestion and move the addition
>> > to vmx_vcpu_destroy().
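>> >
>> > For step 2, a rough sketch of what I have in mind (untested; the
>> > recheck under the lock mirrors how vmx_vcpu_block() uses the
>> > per-vCPU lock pointer):
>> >
>> > static void vmx_pi_blocking_list_cleanup(struct vcpu *v)
>> > {
>> >     unsigned long flags;
>> >     spinlock_t *lock = v->arch.hvm_vmx.pi_blocking.lock;
>> >
>> >     /* A NULL lock pointer means the vCPU is not on any blocking list. */
>> >     if ( lock == NULL )
>> >         return;
>> >
>> >     spin_lock_irqsave(lock, flags);
>> >     /* Recheck: the vCPU may have been woken before we got the lock. */
>> >     if ( v->arch.hvm_vmx.pi_blocking.lock == lock )
>> >     {
>> >         list_del(&v->arch.hvm_vmx.pi_blocking.list);
>> >         v->arch.hvm_vmx.pi_blocking.lock = NULL;
>> >         /* Set inside the locked region, so vmx_vcpu_block() can't race. */
>> >         v->arch.hvm_vmx.pi_blocking_cleaned_up = 1;
>> >     }
>> >     spin_unlock_irqrestore(lock, flags);
>> > }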
>> 
>> Before adding yet another PI-related field, I'd really like to see other
>> alternatives explored. In particular - can determination be based on
>> some other state (considering the subject, e.g. per-domain one)?
> 
> I think the point is that we need to set some flag inside the
> spin_lock_irqsave()/spin_unlock_irqrestore() section in
> vmx_pi_blocking_list_cleanup() and check it after acquiring the lock
> in vmx_vcpu_block(), so the race condition can be eliminated, right?
> If that is the case, I am not sure how we could use other state.

Since you only need this during domain shutdown, I'm not sure you do. For
example, can't you simply use d->is_dying or d->is_shutting_down?
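
I.e. in vmx_vcpu_block() something like this (untested, reusing the
check from your proposal above):

    spin_lock_irqsave(pi_blocking_list_lock, flags);
    if ( unlikely(v->domain->is_dying) )
    {
        /* The domain is being torn down - don't put the vCPU on a list. */
        spin_unlock_irqrestore(pi_blocking_list_lock, flags);
        return;
    }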

Jan

