The osq_lock() and osq_unlock() functions may not provide the necessary
acquire and release barriers in some cases. This patch makes sure
that the proper barriers are provided when osq_lock() is successful
and when osq_unlock() is called.

The change on the unlock side is mainly for documentation purposes;
it is not strictly needed for correctness.

Suggested-by: Peter Zijlstra (Intel) <pet...@infradead.org>
Signed-off-by: Waiman Long <waiman.l...@hpe.com>
---
 kernel/locking/osq_lock.c |    7 ++++++-
 1 files changed, 6 insertions(+), 1 deletions(-)

diff --git a/kernel/locking/osq_lock.c b/kernel/locking/osq_lock.c
index 05a3785..d957b90 100644
--- a/kernel/locking/osq_lock.c
+++ b/kernel/locking/osq_lock.c
@@ -124,6 +124,11 @@ bool osq_lock(struct optimistic_spin_queue *lock)
 
                cpu_relax_lowlatency();
        }
+       /*
+        * Add an acquire memory barrier for pairing with the release barrier
+        * in unlock.
+        */
+       smp_acquire__after_ctrl_dep();
        return true;
 
 unqueue:
@@ -198,7 +203,7 @@ void osq_unlock(struct optimistic_spin_queue *lock)
         * Second most likely case.
         */
        node = this_cpu_ptr(&osq_node);
-       next = xchg(&node->next, NULL);
+       next = xchg_release(&node->next, NULL);
        if (next) {
                WRITE_ONCE(next->locked, 1);
                return;
-- 
1.7.1
