On Thu, Nov 17, 2016 at 11:29:59AM +0100, Peter Zijlstra wrote:
> On Thu, Nov 17, 2016 at 05:48:51PM +0800, Boqun Feng wrote:
> > > > But as I said, we actually only need the pairing of orderings:
> > > >
> > > > 	1) load part of cmpxchg -> free()
> > > > 	2) object accesses -> store part of cmpxchg
> > > >
> > > > Ordering #1 can be achieved via control dependency as you pointed out
> > > > that free()s very much include stores. And ordering #2 can be achieved
> > > > with RELEASE.
> > > >
> > > > So the code is right, I just thought the comment may be misleading. The
> > > > reason we use cmpxchg_release() is just for achieving ordering #2, and
> > > > not to order "prior loads and stores" with "a subsequent free".
> > > >
> > > > Am I missing some subtle orderings here?
> > >
> > > I would want to further qualify 1), it must be no earlier than the load
> > > of the last / successful ll/sc round.
> >
> > Great, that's more accurate!
> >
> > > At that point we're guaranteed a reference count of 1 that _will_ drop
> > > to 0, and thus nobody else (should) reference that memory anymore.
> > >
> > > If we agree on this, I'll update the comment :-) Will, do you too agree?
> >
> > Agreed ;-)
> >
> > Control dependencies and RELEASE are totally enough for the internal
> > correctness of refcount_t along with its interaction with free().
> > People better not rely on order guarantees other than this ;-)
>
> Hurm.. let me ruin my own argument.
>
> Since the free() stores could leak upwards until that ll, and object
> stores can be delayed until the sc, we still have a problem. Just not
> with the thread that free()s or any other thread that knew about the
> object.
>
> The problem comes from any other thread doing an allocation: since it's
> possible to observe the memory as freed while there are stores pending
> to it, those delayed stores can trample on our freshly allocated
> and initialized object.
>
> The stores must really not be before the SC, so I fear we must either
> add an smp_wmb() after the release, or punt and use the fully ordered
> cmpxchg().
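For concreteness, here is a minimal sketch of the dec-and-test pattern being
discussed. This is not the actual refcount_t code (saturation/underflow
checks are omitted, and struct obj / obj_free() are made-up names); it only
shows where orderings #1 and #2 come from and where the extra smp_wmb() (or
a fully ordered cmpxchg()) would sit:

	#include <linux/atomic.h>
	#include <linux/slab.h>

	/* Hypothetical object; only the embedded count matters here. */
	struct obj {
		atomic_t refcount;
		/* ... payload ... */
	};

	static void obj_free(struct obj *obj)
	{
		/* Freelist / allocator stores happen in here. */
		kfree(obj);
	}

	/* Drop a reference; free the object when the count hits zero. */
	static bool obj_put(struct obj *obj)
	{
		int old, val = atomic_read(&obj->refcount);

		for (;;) {
			/*
			 * Ordering #2: RELEASE on the store (SC) part orders
			 * this thread's earlier accesses to *obj before the
			 * decremented value can be observed.
			 */
			old = atomic_cmpxchg_release(&obj->refcount, val, val - 1);
			if (old == val)
				break;
			val = old;
		}

		if (old == 1) {
			/*
			 * Ordering #1: branching on the value returned by the
			 * load (LL) part gives the control dependency that
			 * keeps the stores in obj_free() after that load.
			 * The worry above is that those free() stores may
			 * still become visible before this thread's earlier
			 * stores to *obj; the smp_wmb() here (or a fully
			 * ordered cmpxchg in the loop) closes that window.
			 */
			smp_wmb();
			obj_free(obj);
			return true;
		}
		return false;
	}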
And let me note here that RCU users can use a fully relaxed put, because
call_rcu() guarantees a grace period between the call_rcu() and the free(),
which in turn provides a full memory barrier that orders things.

We could actually expose that to driver writers by doing something like:

	kref_put_rcu(struct kref *kref, struct rcu_head *head,
		     rcu_callback_t func)
	{
		if (refcount_dec_and_test_relaxed(&kref->refcount))
			call_rcu(head, func);
	}

Do we want to go there?
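Purely as a sketch of that idea: there is no refcount_dec_and_test_relaxed()
today, so below a relaxed atomic_fetch_dec_relaxed() on a plain atomic_t
stands in for it, and struct my_obj / my_obj_put() are made-up names. The
point is that the grace period call_rcu() guarantees before the callback
runs supplies the ordering that the relaxed decrement gives up:

	#include <linux/atomic.h>
	#include <linux/kernel.h>
	#include <linux/rcupdate.h>
	#include <linux/slab.h>

	struct my_obj {
		atomic_t refs;
		struct rcu_head rcu;
		/* ... payload ... */
	};

	static void my_obj_rcu_free(struct rcu_head *head)
	{
		kfree(container_of(head, struct my_obj, rcu));
	}

	static void my_obj_put(struct my_obj *obj)
	{
		/*
		 * Relaxed is enough here by the argument above: the grace
		 * period between call_rcu() and my_obj_rcu_free() running
		 * orders this CPU's accesses to *obj before the free.
		 */
		if (atomic_fetch_dec_relaxed(&obj->refs) == 1)
			call_rcu(&obj->rcu, my_obj_rcu_free);
	}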