Commit-ID:  a338ecb07a338c9a8b0ca0010e862ebe598b1551
Gitweb:     https://git.kernel.org/tip/a338ecb07a338c9a8b0ca0010e862ebe598b1551
Author:     Waiman Long <long...@redhat.com>
AuthorDate: Thu, 4 Apr 2019 13:43:13 -0400
Committer:  Ingo Molnar <mi...@kernel.org>
CommitDate: Wed, 10 Apr 2019 10:56:01 +0200

locking/rwsem: Micro-optimize rwsem_try_write_lock_unqueued()

The atomic_long_cmpxchg_acquire() in rwsem_try_write_lock_unqueued() is
replaced by atomic_long_try_cmpxchg_acquire() to simplify the code and
generate slightly better assembly code.

There is no functional change.

Signed-off-by: Waiman Long <long...@redhat.com>
Acked-by: Peter Zijlstra <a.p.zijls...@chello.nl>
Acked-by: Will Deacon <will.dea...@arm.com>
Acked-by: Davidlohr Bueso <dbu...@suse.de>
Cc: Andrew Morton <a...@linux-foundation.org>
Cc: Arnd Bergmann <a...@arndb.de>
Cc: Borislav Petkov <b...@alien8.de>
Cc: Davidlohr Bueso <d...@stgolabs.net>
Cc: Linus Torvalds <torva...@linux-foundation.org>
Cc: Paul E. McKenney <paul...@linux.vnet.ibm.com>
Cc: Peter Zijlstra <pet...@infradead.org>
Cc: Thomas Gleixner <t...@linutronix.de>
Cc: Tim Chen <tim.c.c...@linux.intel.com>
Link: http://lkml.kernel.org/r/20190404174320.22416-5-long...@redhat.com
Signed-off-by: Ingo Molnar <mi...@kernel.org>
---
 kernel/locking/rwsem-xadd.c | 15 +++++----------
 1 file changed, 5 insertions(+), 10 deletions(-)

diff --git a/kernel/locking/rwsem-xadd.c b/kernel/locking/rwsem-xadd.c
index c213869e1aa7..f6198e1a58f6 100644
--- a/kernel/locking/rwsem-xadd.c
+++ b/kernel/locking/rwsem-xadd.c
@@ -259,21 +259,16 @@ static inline bool rwsem_try_write_lock(long count, struct rw_semaphore *sem)
  */
 static inline bool rwsem_try_write_lock_unqueued(struct rw_semaphore *sem)
 {
-       long old, count = atomic_long_read(&sem->count);
+       long count = atomic_long_read(&sem->count);
 
-       while (true) {
-               if (!(count == 0 || count == RWSEM_WAITING_BIAS))
-                       return false;
-
-               old = atomic_long_cmpxchg_acquire(&sem->count, count,
-                                     count + RWSEM_ACTIVE_WRITE_BIAS);
-               if (old == count) {
+       while (!count || count == RWSEM_WAITING_BIAS) {
+               if (atomic_long_try_cmpxchg_acquire(&sem->count, &count,
+                                       count + RWSEM_ACTIVE_WRITE_BIAS)) {
                        rwsem_set_owner(sem);
                        return true;
                }
-
-               count = old;
        }
+       return false;
 }
 
 static inline bool owner_on_cpu(struct task_struct *owner)
