Commit-ID:  c0fcb6c2d332041256dc55d8a1ec3c0a2d0befb8
Gitweb:     http://git.kernel.org/tip/c0fcb6c2d332041256dc55d8a1ec3c0a2d0befb8
Author:     Jason Low <jason.l...@hpe.com>
AuthorDate: Mon, 16 May 2016 17:38:00 -0700
Committer:  Ingo Molnar <mi...@kernel.org>
CommitDate: Fri, 3 Jun 2016 09:47:13 +0200

locking/rwsem: Optimize write lock by reducing operations in slowpath

When acquiring the rwsem write lock in the slowpath, we first try
to cmpxchg() the count from RWSEM_WAITING_BIAS to
RWSEM_ACTIVE_WRITE_BIAS. When that succeeds and there are other
tasks on the wait list, we then atomically add RWSEM_WAITING_BIAS
back. This causes write lock operations to often issue multiple
atomic operations.

We can instead perform the list_is_singular() check first, and then
set the count accordingly, so that we issue at most one atomic
operation when acquiring the write lock and reduce unnecessary
cacheline contention.

Signed-off-by: Jason Low <jason.l...@hpe.com>
Signed-off-by: Peter Zijlstra (Intel) <pet...@infradead.org>
Acked-by: Waiman Long <waiman.l...@hpe.com>
Acked-by: Davidlohr Bueso <d...@stgolabs.net>
Cc: Andrew Morton <a...@linux-foundation.org>
Cc: Arnd Bergmann <a...@arndb.de>
Cc: Christoph Lameter <c...@linux.com>
Cc: Fenghua Yu <fenghua...@intel.com>
Cc: Heiko Carstens <heiko.carst...@de.ibm.com>
Cc: Ivan Kokshaysky <i...@jurassic.park.msu.ru>
Cc: Jason Low <jason.l...@hp.com>
Cc: Linus Torvalds <torva...@linux-foundation.org>
Cc: Martin Schwidefsky <schwidef...@de.ibm.com>
Cc: Matt Turner <matts...@gmail.com>
Cc: Paul E. McKenney <paul...@linux.vnet.ibm.com>
Cc: Peter Hurley <pe...@hurleysoftware.com>
Cc: Peter Zijlstra <pet...@infradead.org>
Cc: Richard Henderson <r...@twiddle.net>
Cc: Terry Rudd <terry.r...@hpe.com>
Cc: Thomas Gleixner <t...@linutronix.de>
Cc: Tim Chen <tim.c.c...@linux.intel.com>
Cc: Tony Luck <tony.l...@intel.com>
Link: http://lkml.kernel.org/r/1463445486-16078-2-git-send-email-jason.l...@hpe.com
Signed-off-by: Ingo Molnar <mi...@kernel.org>
---
 kernel/locking/rwsem-xadd.c | 25 ++++++++++++++++++-------
 1 file changed, 18 insertions(+), 7 deletions(-)

diff --git a/kernel/locking/rwsem-xadd.c b/kernel/locking/rwsem-xadd.c
index fcbf75a..b957da7 100644
--- a/kernel/locking/rwsem-xadd.c
+++ b/kernel/locking/rwsem-xadd.c
@@ -261,17 +261,28 @@ struct rw_semaphore __sched *rwsem_down_read_failed(struct rw_semaphore *sem)
 }
 EXPORT_SYMBOL(rwsem_down_read_failed);
 
+/*
+ * This function must be called with the sem->wait_lock held to prevent
+ * race conditions between checking the rwsem wait list and setting the
+ * sem->count accordingly.
+ */
 static inline bool rwsem_try_write_lock(long count, struct rw_semaphore *sem)
 {
        /*
-        * Try acquiring the write lock. Check count first in order
-        * to reduce unnecessary expensive cmpxchg() operations.
+        * Avoid trying to acquire write lock if count isn't RWSEM_WAITING_BIAS.
         */
-       if (count == RWSEM_WAITING_BIAS &&
-           cmpxchg_acquire(&sem->count, RWSEM_WAITING_BIAS,
-                   RWSEM_ACTIVE_WRITE_BIAS) == RWSEM_WAITING_BIAS) {
-               if (!list_is_singular(&sem->wait_list))
-                       rwsem_atomic_update(RWSEM_WAITING_BIAS, sem);
+       if (count != RWSEM_WAITING_BIAS)
+               return false;
+
+       /*
+        * Acquire the lock by trying to set it to ACTIVE_WRITE_BIAS. If there
+        * are other tasks on the wait list, we need to add on WAITING_BIAS.
+        */
+       count = list_is_singular(&sem->wait_list) ?
+                       RWSEM_ACTIVE_WRITE_BIAS :
+                       RWSEM_ACTIVE_WRITE_BIAS + RWSEM_WAITING_BIAS;
+
+       if (cmpxchg_acquire(&sem->count, RWSEM_WAITING_BIAS, count) == RWSEM_WAITING_BIAS) {
                rwsem_set_owner(sem);
                return true;
        }
