On 04/01, Paul E. McKenney wrote:
>
> If Will agrees, like the following?
Looks good to me, thanks ;)

> documentation: memory-barriers: Fix smp_mb__before_spinlock() semantics
>
> Our current documentation claims that, when followed by an ACQUIRE,
> smp_mb__before_spinlock() orders prior loads against subsequent loads
> and stores, which isn't the intent.  This commit therefore fixes the
> documentation to state that this sequence orders only prior stores
> against subsequent loads and stores.
>
> In addition, the original intent of smp_mb__before_spinlock() was to
> only order prior stores against subsequent loads; however, people have
> started using it as if it ordered prior stores against subsequent loads
> and stores.  This commit therefore also updates smp_mb__before_spinlock()'s
> header comment to reflect this new reality.
>
> Cc: Oleg Nesterov <o...@redhat.com>
> Cc: "Paul E. McKenney" <paul...@linux.vnet.ibm.com>
> Cc: Peter Zijlstra <pet...@infradead.org>
> Signed-off-by: Will Deacon <will.dea...@arm.com>
> Signed-off-by: Paul E. McKenney <paul...@linux.vnet.ibm.com>
>
> diff --git a/Documentation/memory-barriers.txt b/Documentation/memory-barriers.txt
> index 6974f1c2b4e1..52c320e3f107 100644
> --- a/Documentation/memory-barriers.txt
> +++ b/Documentation/memory-barriers.txt
> @@ -1784,10 +1784,9 @@ for each construct.  These operations all imply certain barriers:
>
>       Memory operations issued before the ACQUIRE may be completed after
>       the ACQUIRE operation has completed.  An smp_mb__before_spinlock(),
> -     combined with a following ACQUIRE, orders prior loads against
> -     subsequent loads and stores and also orders prior stores against
> -     subsequent stores.  Note that this is weaker than smp_mb()!  The
> -     smp_mb__before_spinlock() primitive is free on many architectures.
> +     combined with a following ACQUIRE, orders prior stores against
> +     subsequent loads and stores.  Note that this is weaker than smp_mb()!
> +     The smp_mb__before_spinlock() primitive is free on many architectures.
>
>  (2) RELEASE operation implication:
>
> diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h
> index 3e18379dfa6f..0063b24b4f36 100644
> --- a/include/linux/spinlock.h
> +++ b/include/linux/spinlock.h
> @@ -120,7 +120,7 @@ do {								\
>  /*
>   * Despite its name it doesn't necessarily has to be a full barrier.
>   * It should only guarantee that a STORE before the critical section
> - * can not be reordered with a LOAD inside this section.
> + * can not be reordered with LOADs and STOREs inside this section.
>   * spin_lock() is the one-way barrier, this LOAD can not escape out
>   * of the region. So the default implementation simply ensures that
>   * a STORE can not move into the critical section, smp_wmb() should
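Just to make the documented guarantee concrete, here is a minimal sketch
of the intended pattern, loosely modelled on the try_to_wake_up() usage
(the cond/sleeper_state variables and wakeup_lock are made up for
illustration):

	#include <linux/spinlock.h>
	#include <linux/compiler.h>

	int cond;			/* condition the sleeper waits on */
	int sleeper_state;		/* written by the sleeping task */
	DEFINE_SPINLOCK(wakeup_lock);

	void waker(void)
	{
		WRITE_ONCE(cond, 1);		/* prior STORE */

		smp_mb__before_spinlock();
		spin_lock(&wakeup_lock);	/* ACQUIRE */

		/*
		 * The STORE to cond above cannot be reordered with this
		 * LOAD (or with any STORE) inside the critical section.
		 * A prior LOAD, by contrast, is not ordered by this
		 * sequence, which is why it is weaker than smp_mb().
		 */
		if (READ_ONCE(sleeper_state) == 1) {
			/* ... wake the sleeper ... */
		}

		spin_unlock(&wakeup_lock);
	}

The real user pairs this with the barrier implied by set_current_state()
on the sleeper's side.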