On Mon, 21 Sep 2015, Davidlohr Bueso wrote:
> As such, weakly ordered archs can benefit from more relaxed use
> of barriers when locking/unlocking.
>
> Signed-off-by: Davidlohr Bueso <dbu...@suse.de>
> ---
>  kernel/locking/rtmutex.c | 30 +++++++++++++++++++++---------
>  1 file changed, 21 insertions(+), 9 deletions(-)
>
> diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
> index 7781d80..226a629 100644
> --- a/kernel/locking/rtmutex.c
> +++ b/kernel/locking/rtmutex.c
> @@ -74,14 +74,23 @@ static void fixup_rt_mutex_waiters(struct rt_mutex *lock)
>  	 * set up.
>  	 */
>  #ifndef CONFIG_DEBUG_RT_MUTEXES
> -# define rt_mutex_cmpxchg(l,c,n)	(cmpxchg(&l->owner, c, n) == c)
> +# define rt_mutex_cmpxchg_relaxed(l,c,n) (cmpxchg_relaxed(&l->owner, c, n) == c)
> +# define rt_mutex_cmpxchg_acquire(l,c,n) (cmpxchg_acquire(&l->owner, c, n) == c)
> +# define rt_mutex_cmpxchg_release(l,c,n) (cmpxchg_release(&l->owner, c, n) == c)
> +
> +/*
> + * Callers must hold the ->wait_lock -- which is the whole purpose as we force
> + * all future threads that attempt to [Rmw] the lock to the slowpath. As such
> + * relaxed semantics suffice.
> + */
>  static inline void mark_rt_mutex_waiters(struct rt_mutex *lock)
>  {
>  	unsigned long owner, *p = (unsigned long *) &lock->owner;
>
>  	do {
>  		owner = *p;
> -	} while (cmpxchg(p, owner, owner | RT_MUTEX_HAS_WAITERS) != owner);
> +	} while (cmpxchg_relaxed(p, owner,
> +				 owner | RT_MUTEX_HAS_WAITERS) != owner);
>  }
>
>  /*
> @@ -121,11 +130,14 @@ static inline bool unlock_rt_mutex_safe(struct rt_mutex *lock)
>  	 * lock(wait_lock);
>  	 * acquire(lock);
>  	 */
> -	return rt_mutex_cmpxchg(lock, owner, NULL);
> +	return rt_mutex_cmpxchg_acquire(lock, owner, NULL);
Why is this acquire?

Thanks,

	tglx