On Mon, Jan 28, 2019 at 02:09:37PM +0200, Elena Reshetova wrote:
> This adds an smp_acquire__after_ctrl_dep() barrier on successful
> decrease of refcounter value from 1 to 0 for refcount_dec(sub)_and_test
> variants and therefore gives stronger memory ordering guarantees than
> prior versions of these functions.
> 
> Co-Developed-by: Peter Zijlstra (Intel) <pet...@infradead.org>
> Signed-off-by: Elena Reshetova <elena.reshet...@intel.com>
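
[Editor's illustration, not part of the patch: struct foo, foo_put() and the
payload field below are made-up names. This is the usual last-put-then-free
pattern that the new ACQUIRE ordering on the successful 1->0 transition is
meant to cover.]

        /* Sketch only; assumes <linux/refcount.h>, <linux/slab.h>, <linux/printk.h>. */
        struct foo {
                refcount_t ref;
                int payload;
        };

        static void foo_put(struct foo *f)
        {
                if (refcount_dec_and_test(&f->ref)) {
                        /*
                         * ACQUIRE on the successful 1->0 transition orders this
                         * load (and the kfree()) after the decrement, so the
                         * teardown cannot observe the object before the
                         * release-ordered puts of other CPUs have completed.
                         */
                        pr_info("final payload: %d\n", f->payload);
                        kfree(f);
                }
        }

[A bare control dependency would only order later stores after the decrement;
the added barrier also orders the teardown's loads.]
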
+ Alan, Dmitry; they might also deserve a Suggested-by: ;-)

[...]

> +An ACQUIRE memory ordering guarantees that all post loads and
> +stores (all po-later instructions) on the same CPU are
> +completed after the acquire operation. It also guarantees that all
> +po-later stores on the same CPU and all propagated stores from other CPUs
> +must propagate to all other CPUs after the acquire operation
> +(A-cumulative property).

Mmh, this property (A-cumulativity) isn't really associated with ACQUIREs
in the LKMM; I'd suggest simply removing the last sentence.

[...]

> diff --git a/arch/x86/include/asm/refcount.h b/arch/x86/include/asm/refcount.h
> index dbaed55..ab8f584 100644
> --- a/arch/x86/include/asm/refcount.h
> +++ b/arch/x86/include/asm/refcount.h
> @@ -67,16 +67,29 @@ static __always_inline void refcount_dec(refcount_t *r)
> static __always_inline __must_check
> bool refcount_sub_and_test(unsigned int i, refcount_t *r)
> {
> - return GEN_BINARY_SUFFIXED_RMWcc(LOCK_PREFIX "subl",
> + bool ret = GEN_BINARY_SUFFIXED_RMWcc(LOCK_PREFIX "subl",
> REFCOUNT_CHECK_LT_ZERO,
> r->refs.counter, e, "er", i, "cx");
> +
> + if (ret) {
> + smp_acquire__after_ctrl_dep();
> + return true;
> + }
> +
> + return false;

There appears to be some white-space damage (here and in other places);
checkpatch.pl should point these and other style problems out.

  Andrea


> }
>
> static __always_inline __must_check bool refcount_dec_and_test(refcount_t *r)
> {
> - return GEN_UNARY_SUFFIXED_RMWcc(LOCK_PREFIX "decl",
> - REFCOUNT_CHECK_LT_ZERO,
> - r->refs.counter, e, "cx");
> + bool ret = GEN_UNARY_SUFFIXED_RMWcc(LOCK_PREFIX "decl",
> + REFCOUNT_CHECK_LT_ZERO,
> + r->refs.counter, e, "cx");
> + if (ret) {
> + smp_acquire__after_ctrl_dep();
> + return true;
> + }
> +
> + return false;
> }
>
> static __always_inline __must_check
> diff --git a/lib/refcount.c b/lib/refcount.c
> index ebcf8cd..732feac 100644
> --- a/lib/refcount.c
> +++ b/lib/refcount.c
> @@ -33,6 +33,9 @@
> * Note that the allocator is responsible for ordering things between free()
> * and alloc().
> *
> + * The decrements dec_and_test() and sub_and_test() also provide acquire
> + * ordering on success.
> + *
> */
>
> #include <linux/mutex.h>
> @@ -164,8 +167,7 @@ EXPORT_SYMBOL(refcount_inc_checked);
> * at UINT_MAX.
> *
> * Provides release memory ordering, such that prior loads and stores are done
> - * before, and provides a control dependency such that free() must come after.
> - * See the comment on top.
> + * before, and provides an acquire ordering on success such that free() must come after.
> *
> * Use of this function is not recommended for the normal reference counting
> * use case in which references are taken and released one at a time. In these
> @@ -190,7 +192,12 @@ bool refcount_sub_and_test_checked(unsigned int i, refcount_t *r)
>
> } while (!atomic_try_cmpxchg_release(&r->refs, &val, new));
>
> - return !new;
> + if (!new) {
> + smp_acquire__after_ctrl_dep();
> + return true;
> + }
> + return false;
> +
> }
> EXPORT_SYMBOL(refcount_sub_and_test_checked);
>
> @@ -202,8 +209,7 @@ EXPORT_SYMBOL(refcount_sub_and_test_checked);
> * decrement when saturated at UINT_MAX.
> *
> * Provides release memory ordering, such that prior loads and stores are done
> - * before, and provides a control dependency such that free() must come after.
> - * See the comment on top.
> + * before, and provides an acquire ordering on success such that free() must come after.
> *
> * Return: true if the resulting refcount is 0, false otherwise
> */
> -- 
> 2.7.4
> 
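
[Editor's note on the barrier being added, for readers less familiar with it.
smp_acquire__after_ctrl_dep() is a real kernel helper (built on smp_rmb());
the wrapper below, generic_dec_and_test_acquire(), is a made-up name used only
to sketch the idiom. The pattern is the same one the patch applies after
GEN_*_SUFFIXED_RMWcc() on x86 and after atomic_try_cmpxchg_release() in
lib/refcount.c:

        static inline bool generic_dec_and_test_acquire(atomic_t *v)
        {
                if (atomic_dec_return_release(v) == 0) {
                        /*
                         * The branch alone gives a control dependency: later
                         * stores cannot be hoisted before the decrement, but
                         * later loads still can.  smp_acquire__after_ctrl_dep()
                         * also orders the later loads, yielding ACQUIRE
                         * semantics on the success path.
                         */
                        smp_acquire__after_ctrl_dep();
                        return true;
                }
                return false;
        }
]
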