swake_up() and swake_up_all() test the swait queue for waiters outside
the lock, but they are missing the barrier that would order the prior
store that sets the wakeup condition before the load that tests the
swait queue. This can lead to a lost wakeup if those two accesses are
reordered. Fix this as prescribed by the waitqueue_active() comments.
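
Concretely (with "cond" standing in for whatever condition the waiter
checks; the name is purely illustrative), the problem is the usual
store-buffering pattern:

    CPU 0 (waker)                   CPU 1 (waiter)

    cond = true;                    prepare_to_swait(&q, &wait, state);
    swake_up(&q):                   if (!cond)
      if (!swait_active(&q))            schedule();
        return;

Without a full barrier between the store to cond and the load in
swait_active(), CPU 0 can observe an empty queue while CPU 1 still
observes cond == false, and the wakeup is lost. On the waiter side
the barrier comes from set_current_state(), as the waitqueue_active()
comments describe for normal waitqueues.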

Signed-off-by: Nicholas Piggin <npig...@gmail.com>
--
I noticed this when chasing down that rcu hang bug (which turned out
not to be anything of the sort). I might be missing something and this
is safe somehow, but if so it should have a comment where it diverges
from normal waitqueues.

It also looks like there are a few callers testing swait_active()
before calling swake_up() without a barrier, which looks wrong as
well, so I may well be missing something, but I'm not sure what.
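
In case it helps, below is a rough userspace sketch of the pairing I
have in mind, using C11 atomics instead of the kernel primitives. It
is only an illustration: the names are made up, and
atomic_thread_fence(memory_order_seq_cst) stands in for smp_mb().

/* Build with: cc -pthread sketch.c */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_int cond;            /* the wakeup condition               */
static atomic_int waiters;         /* stands in for swait_active()       */
static atomic_int woken;           /* waker saw a waiter, issues wakeup  */
static atomic_int waiter_saw_cond; /* waiter saw the condition, no sleep */

/* Waker side, mirroring "set condition; swake_up()". */
static void *waker(void *arg)
{
	(void)arg;
	atomic_store_explicit(&cond, 1, memory_order_relaxed);

	/* The smp_mb() the patch adds: order the store to cond before
	 * the load of waiters. */
	atomic_thread_fence(memory_order_seq_cst);

	if (atomic_load_explicit(&waiters, memory_order_relaxed))
		atomic_store_explicit(&woken, 1, memory_order_relaxed);
	return NULL;
}

/* Waiter side, mirroring prepare_to_swait() + condition check; in the
 * kernel the full barrier here comes from set_current_state(). */
static void *waiter(void *arg)
{
	(void)arg;
	atomic_store_explicit(&waiters, 1, memory_order_relaxed);

	atomic_thread_fence(memory_order_seq_cst);

	if (atomic_load_explicit(&cond, memory_order_relaxed))
		atomic_store_explicit(&waiter_saw_cond, 1,
				      memory_order_relaxed);
	/* else: the kernel waiter would schedule() here and rely on
	 * the waker having seen waiters != 0. */
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, waker, NULL);
	pthread_create(&b, NULL, waiter, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);

	/* With both full fences it is impossible for both values below
	 * to be 0: either the waker saw the waiter or the waiter saw
	 * the condition, so the wakeup cannot be lost. */
	printf("woken=%d waiter_saw_cond=%d\n",
	       atomic_load(&woken), atomic_load(&waiter_saw_cond));
	return 0;
}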

Thanks,
Nick
---
 kernel/sched/swait.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/kernel/sched/swait.c b/kernel/sched/swait.c
index 3d5610dcce11..9056278001d9 100644
--- a/kernel/sched/swait.c
+++ b/kernel/sched/swait.c
@@ -33,6 +33,11 @@ void swake_up(struct swait_queue_head *q)
 {
        unsigned long flags;
 
+       /*
+        * See waitqueue_active() comments for checking waiters outside
+        * the lock. Same principle applies here.
+        */
+       smp_mb();
        if (!swait_active(q))
                return;
 
@@ -51,6 +56,11 @@ void swake_up_all(struct swait_queue_head *q)
        struct swait_queue *curr;
        LIST_HEAD(tmp);
 
+       /*
+        * See waitqueue_active() comments for checking waiters outside
+        * the lock. Same principle applies here.
+        */
+       smp_mb();
        if (!swait_active(q))
                return;
 
-- 
2.13.3
