On 08/04/2016 07:09 AM, Peter Zijlstra wrote:
On Wed, Aug 03, 2016 at 02:51:23PM -0700, Bart Van Assche wrote:
So I started testing the patch below that should fix the same hang but
without triggering any wait list corruption.

diff --git a/kernel/sched/wait.c b/kernel/sched/wait.c
index f15d6b6..4e3f651 100644
--- a/kernel/sched/wait.c
+++ b/kernel/sched/wait.c
@@ -282,7 +282,7 @@ void abort_exclusive_wait(wait_queue_head_t *q, wait_queue_t *wait,
        spin_lock_irqsave(&q->lock, flags);
        if (!list_empty(&wait->task_list))
                list_del_init(&wait->task_list);
-       else if (waitqueue_active(q))
+       if (waitqueue_active(q))
                __wake_up_locked_key(q, mode, key);
        spin_unlock_irqrestore(&q->lock, flags);
 }
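
(For reference, with that change applied the whole function would read roughly as below; everything outside the hunk is reconstructed from memory of the current wait.c, so treat it as approximate.)

void abort_exclusive_wait(wait_queue_head_t *q, wait_queue_t *wait,
                          unsigned int mode, void *key)
{
        unsigned long flags;

        __set_current_state(TASK_RUNNING);
        spin_lock_irqsave(&q->lock, flags);
        /* If nobody woke us up, just take ourselves off the wait list. */
        if (!list_empty(&wait->task_list))
                list_del_init(&wait->task_list);
        /* Unconditionally pass a wakeup on to the next waiter, if any. */
        if (waitqueue_active(q))
                __wake_up_locked_key(q, mode, key);
        spin_unlock_irqrestore(&q->lock, flags);
}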

So the problem with this patch is that it will violate the nr_exclusive
semantics in that it can result in too many wakeups -- which is a much
less severe (typically harmless) issue.

We now always wake up the next waiter, even if there wasn't an actual
wakeup we raced against. And if we then also get a wakeup, we can end up
with 2 woken tasks (instead of the nr_exclusive=1).

Now, since wait loops must all deal with spurious wakeups, this ends up
as harmless overhead.
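
For reference, the usual wait-loop idiom re-checks the condition after every
wakeup, so an extra wakeup only costs one more pass through the loop. A
minimal sketch of that pattern (the generic prepare_to_wait() idiom, not the
exact __wait_event() expansion; q and condition stand for the caller's wait
queue head and wakeup condition):

        DEFINE_WAIT(wait);

        for (;;) {
                prepare_to_wait_exclusive(&q, &wait, TASK_INTERRUPTIBLE);
                if (condition)          /* re-checked every iteration */
                        break;
                if (signal_pending(current))
                        break;
                schedule();             /* a spurious wakeup just loops again */
        }
        finish_wait(&q, &wait);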

How about adding a fifth argument to abort_exclusive_wait() that indicates whether or not the "if (waitqueue_active(q)) __wake_up_locked_key(q, mode, key)" code should be executed? __wait_event() could pass "condition" as the fifth argument when calling abort_exclusive_wait().
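
Something along these lines, as a rough sketch of the idea only (wake_next is
just a placeholder name, and the caller shown in the comment is quoted from
memory, so both are assumptions rather than a tested patch):

void abort_exclusive_wait(wait_queue_head_t *q, wait_queue_t *wait,
                          unsigned int mode, void *key, bool wake_next)
{
        unsigned long flags;

        __set_current_state(TASK_RUNNING);
        spin_lock_irqsave(&q->lock, flags);
        if (!list_empty(&wait->task_list))
                list_del_init(&wait->task_list);
        /* Only pass a wakeup on when the caller saw the wait condition hold. */
        if (wake_next && waitqueue_active(q))
                __wake_up_locked_key(q, mode, key);
        spin_unlock_irqrestore(&q->lock, flags);
}

/* and the __wait_event() machinery would then call it along the lines of:
 *
 *      abort_exclusive_wait(&wq, &__wait, state, NULL, !!(condition));
 */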

But I'd still like to understand where we lose the wakeup.

My assumption is that __wake_up_common() and signal delivery happen concurrently, that __wake_up_common() wakes up the task sleeping in bit_wait_io(), and that the signal is delivered after that wakeup but before bit_wait_io() tests the signal-pending state.
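
In other words, the interleaving I have in mind is roughly the following
(only a hypothesis about the ordering, not something I have traced yet):

/*
 * waker                                 waiter (exclusive bit-wait)
 * -----                                 ---------------------------
 *                                       prepare_to_wait_exclusive()
 * __wake_up_common()
 *   wakes the waiter and, via its
 *   wake function, removes it from
 *   the wait list (one exclusive
 *   wakeup consumed)
 *                                       <-- signal delivered here
 *                                       bit_wait_io() sees the pending
 *                                         signal and returns -EINTR
 *                                       the abort path then runs with the
 *                                         task already off the wait list,
 *                                         so the consumed wakeup has to be
 *                                         passed on or it is lost
 */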

Bart.
