On 11/29/2018 01:08 PM, Peter Zijlstra wrote:
> On Thu, Nov 29, 2018 at 06:27:00PM +0100, Peter Zijlstra wrote:
>> wake_up_q() should, per the barriers in wake_up_process(), ensure that if
>> wake_q_add() fails, there will be a wakeup of that task after that
>> point.
>>
>> So if we put wake_up_q() at the location where wake_up_process() should
>> be, it should all work.
>>
>> The bug in question is that it can happen at any time after
>> wake_q_add(), not necessarily at wake_up_q().
>
> Hmm, I think we're missing a barrier in wake_q_add(); when cmpxchg()
> fails we still need an smp_mb().
>
> Something like so.
>
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 3d87a28da378..69def558edf6 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -400,6 +400,13 @@ void wake_q_add(struct wake_q_head *head, struct task_struct *task)
>  {
>  	struct wake_q_node *node = &task->wake_q;
>  
> +	/*
> +	 * Ensure, that when the below cmpxchg() fails, the corresponding
> +	 * wake_up_q() will observe our prior state.
> +	 *
> +	 * Pairs with the smp_mb() from wake_up_q()'s wake_up_process().
> +	 */
> +	smp_mb();
>  	/*
>  	 * Atomically grab the task, if ->wake_q is !nil already it means
>  	 * its already queued (either by us or someone else) and will get the
> @@ -408,7 +415,7 @@ void wake_q_add(struct wake_q_head *head, struct task_struct *task)
>  	 * This cmpxchg() executes a full barrier, which pairs with the full
>  	 * barrier executed by the wakeup in wake_up_q().
>  	 */
> -	if (cmpxchg(&node->next, NULL, WAKE_Q_TAIL))
> +	if (cmpxchg_relaxed(&node->next, NULL, WAKE_Q_TAIL))
>  		return;
>  
>  	get_task_struct(task);
That can be costly on x86, which will now execute two locked instructions.
Should we introduce a special cmpxchg variant (e.g. cmpxchg_mb) that
guarantees a full memory barrier whether the operation succeeds or fails?

Cheers,
Longman
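
For illustration only, here is a minimal sketch of what such a helper
could look like as a generic fallback, built from the existing
cmpxchg_relaxed() and smp_mb() primitives. The cmpxchg_mb() name is just
the suggestion above, not an existing kernel API, and an architecture
such as x86, where even a failed LOCK CMPXCHG is fully ordered, would
presumably override it with a plain cmpxchg():

/*
 * Hypothetical helper (not in mainline): a cmpxchg that implies a
 * full memory barrier on both success and failure.  Generic fallback
 * built from the relaxed variant plus unconditional full barriers;
 * architectures whose failed cmpxchg is already fully ordered
 * (e.g. x86) could override this with plain cmpxchg() and keep a
 * single locked instruction.
 */
#define cmpxchg_mb(ptr, old, new)				\
({								\
	__typeof__(*(ptr)) __ret;				\
	smp_mb();						\
	__ret = cmpxchg_relaxed((ptr), (old), (new));		\
	smp_mb();						\
	__ret;							\
})

With a helper along these lines, the smp_mb() added before
cmpxchg_relaxed() in the patch above could be folded into the primitive,
so x86 would not have to pay for two serializing operations.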