On 12/06/2019 11:14, Jan Beulich wrote:
>>>> On 12.06.19 at 11:56, <dfaggi...@suse.com> wrote:
>> On Wed, 2019-06-12 at 10:32 +0100, Jan Beulich wrote:
>>>>>> On 12.06.19 at 10:19, <jgr...@suse.com> wrote:
>>>> On 12.06.19 10:05, Andrew Cooper wrote:
>>>>> On 28/05/2019 11:32, Juergen Gross wrote:
>>>>>> +    per_cpu(scheduler, cpu) = new_ops;
>>>>>> +    sd->sched_priv = ppriv;
>>>>>> +
>>>>>> +    /*
>>>>>> +     * (Re?)route the lock to the per pCPU lock as /last/ thing. In
>>>>>> +     * fact, if it is free (and it can be) we want anyone that
>>>>>> +     * manages to take it to find all the initializations we've
>>>>>> +     * done above in place.
>>>>>> +     */
>>>>>> +    smp_mb();
>>>>> A full memory barrier is a massive overhead for what should be
>>>>> smp_wmb().  The matching barrier is actually hidden in the implicit
>>>>> semantics of managing to lock sd->schedule_lock (which is trial and
>>>>> error anyway), but the only thing that matters here is that all other
>>>>> written data is in place first.
>>>>>
>>>> Not that it would really matter for performance (switching cpus
>>>> between cpupools is a _very_ rare operation), but I'm fine with
>>>> transforming the barrier into smp_wmb().
>>> This again is a change easy enough to make while committing. I'm
>>> recording the above in case I end up being the committer.
>>>
>> I'm fine (i.e., my Rev-by: remains valid) with this being turned into an
>> smp_wmb(), and with it being done upon commit (thanks Jan for the offer
>> to do that!).
>>
>> If we do it, however, I think the comment needs to be adjusted too, and
>> the commit message needs to briefly mention this new change we're
>> doing.
>>
>> Maybe, for the comment, add something like:
>>
>> "The smp_wmb() corresponds to the barrier implicit in successfully
>> taking the lock."
> I'm not entirely happy with this one: Taking a lock is an implicit full
> barrier, so cannot alone be used to reason why a write barrier is
> sufficient here.

It is a consequence of our extra magic scheduler locks, which protect the
pointer used to locate them, and the ensuing double-step in ???
(seriously - I can't figure out the function names created by the
sched_lock() monstrosity) which takes the lock, re-reads the lock pointer,
and drops it on a mismatch.
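
Something like this, i.e. the usual double-step (a hand-written sketch of
what I believe the macro expands to - the real generated names and the
exact per-cpu accessors may differ):

    static spinlock_t *pcpu_schedule_lock(unsigned int cpu)
    {
        for ( ; ; )
        {
            /* Sample the pointer to the lock currently covering this pCPU. */
            spinlock_t *lock = per_cpu(schedule_data, cpu).schedule_lock;

            spin_lock(lock);

            /*
             * If the pointer is unchanged, we took the right lock.  If it
             * got rerouted while we were spinning, drop it and retry.
             */
            if ( likely(lock == per_cpu(schedule_data, cpu).schedule_lock) )
                return lock;

            spin_unlock(lock);
        }
    }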

I've gone with:

+    /*
+     * The data above is protected under new_lock, which may be unlocked.
+     * Another CPU can take new_lock as soon as sd->schedule_lock is
+     * visible, and must observe all prior initialisation.
+     */
+    smp_wmb();
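
Spelled out, the pairing I have in mind (writer side is the hunk above
plus the store it orders; reader side is the double-step from before -
a sketch only, not the literal code):

    /* Writer (CPU being moved between pools): */
    per_cpu(scheduler, cpu) = new_ops;
    sd->sched_priv = ppriv;
    smp_wmb();                     /* order the writes above ... */
    sd->schedule_lock = new_lock;  /* ... before publishing the new lock */

    /* Reader (anyone else): */
    lock = sd->schedule_lock;      /* may or may not see new_lock yet */
    spin_lock(lock);               /* barrier implicit in taking the lock */
    if ( lock != sd->schedule_lock )
        ... /* rerouted under our feet - unlock and retry */
    /* else: guaranteed to observe new_ops / ppriv from above */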

~Andrew
