On Mon, May 09, 2016 at 12:16:37PM -0700, Jason Low wrote:
> When acquiring the rwsem write lock in the slowpath, we first try
> to set count to RWSEM_WAITING_BIAS. When that is successful,
> we then atomically add the RWSEM_WAITING_BIAS in cases where
> there are other tasks on the wait list. This causes write lock
> operations to often issue multiple atomic operations.
>
> We can instead make the list_is_singular() check first, and then
> set the count accordingly, so that we issue at most 1 atomic
> operation when acquiring the write lock and reduce unnecessary
> cacheline contention.
>
> Signed-off-by: Jason Low <jason.l...@hp.com>
> ---
>  kernel/locking/rwsem-xadd.c | 20 +++++++++++++-------
>  1 file changed, 13 insertions(+), 7 deletions(-)
>
> diff --git a/kernel/locking/rwsem-xadd.c b/kernel/locking/rwsem-xadd.c
> index df4dcb8..23c33e6 100644
> --- a/kernel/locking/rwsem-xadd.c
> +++ b/kernel/locking/rwsem-xadd.c
> @@ -258,14 +258,20 @@ EXPORT_SYMBOL(rwsem_down_read_failed);
>  static inline bool rwsem_try_write_lock(long count, struct rw_semaphore *sem)
>  {
>  	/*
> -	 * Try acquiring the write lock. Check count first in order
> -	 * to reduce unnecessary expensive cmpxchg() operations.
> +	 * Avoid trying to acquire write lock if count isn't RWSEM_WAITING_BIAS.
>  	 */
> -	if (count == RWSEM_WAITING_BIAS &&
> -	    cmpxchg_acquire(&sem->count, RWSEM_WAITING_BIAS,
> -		    RWSEM_ACTIVE_WRITE_BIAS) == RWSEM_WAITING_BIAS) {
> -		if (!list_is_singular(&sem->wait_list))
> -			rwsem_atomic_update(RWSEM_WAITING_BIAS, sem);
> +	if (count != RWSEM_WAITING_BIAS)
> +		return false;
> +
> +	/*
> +	 * Acquire the lock by trying to set it to ACTIVE_WRITE_BIAS. If there
> +	 * are other tasks on the wait list, we need to add on WAITING_BIAS.
> +	 */
> +	count = list_is_singular(&sem->wait_list) ?
> +			RWSEM_ACTIVE_WRITE_BIAS :
> +			RWSEM_ACTIVE_WRITE_BIAS + RWSEM_WAITING_BIAS;
> +
> +	if (cmpxchg_acquire(&sem->count, RWSEM_WAITING_BIAS, count) == RWSEM_WAITING_BIAS) {
>  		rwsem_set_owner(sem);
>  		return true;
>  	}
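For reference, the list_is_singular() check above is only stable because
all of this runs with sem->wait_lock held; roughly, from the write-lock
slowpath (a heavily trimmed sketch of rwsem_down_write_failed(), not the
verbatim upstream code):

	raw_spin_lock_irq(&sem->wait_lock);
	list_add_tail(&waiter.list, &sem->wait_list);

	set_current_state(TASK_UNINTERRUPTIBLE);
	while (true) {
		/* wait_lock held: nobody can modify wait_list here */
		if (rwsem_try_write_lock(count, sem))
			break;
		raw_spin_unlock_irq(&sem->wait_lock);

		/* Block until there are no active lockers. */
		do {
			schedule();
			set_current_state(TASK_UNINTERRUPTIBLE);
		} while ((count = READ_ONCE(sem->count)) & RWSEM_ACTIVE_MASK);

		raw_spin_lock_irq(&sem->wait_lock);
	}
	__set_current_state(TASK_RUNNING);
	list_del(&waiter.list);
	raw_spin_unlock_irq(&sem->wait_lock);
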
Right; so that whole thing works because we're holding sem->wait_lock.
Should we clarify that someplace?

Also; should we not make rw_semaphore::count an atomic_long_t and kill
rwsem_atomic_{update,add}() ?
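For concreteness, a rough and entirely untested sketch of that
conversion, assuming the generic atomic_long_*() helpers are good
enough to replace the per-arch ops:

	struct rw_semaphore {
		atomic_long_t count;		/* was: long count */
		struct list_head wait_list;
		raw_spinlock_t wait_lock;
		/* owner, osq etc. unchanged */
	};

	/* rwsem_atomic_update(delta, sem) becomes: */
	atomic_long_add_return(delta, &sem->count);

	/* rwsem_atomic_add(delta, sem) becomes: */
	atomic_long_add(delta, &sem->count);

	/* plain reads become atomic_long_read(&sem->count), and the
	 * cmpxchg_acquire() in rwsem_try_write_lock() becomes: */
	if (atomic_long_cmpxchg_acquire(&sem->count, RWSEM_WAITING_BIAS,
					count) == RWSEM_WAITING_BIAS) {
		rwsem_set_owner(sem);
		return true;
	}

That way the generic slowpath stops open-coding atomics on a bare long
and we lose two per-arch helpers.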