From: Nicolai Hähnle <nicolai.haeh...@amd.com>

There's a possible race where the waiter in front of us leaves the wait
list due to a signal, and the current owner subsequently hands the lock
off to us even though we never observed ourselves at the front of the
list.
Set the task state before checking our position in the list, so that
the race is handled by falling through the next schedule().

Found by inspection.

Cc: Peter Zijlstra <peterz at infradead.org>
Cc: Ingo Molnar <mingo at redhat.com>
Cc: dri-devel at lists.freedesktop.org
Signed-off-by: Nicolai Hähnle <nicolai.haehnle at amd.com>
---
 kernel/locking/mutex.c | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index 9b34961..c02c566 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -697,17 +697,18 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
 		spin_unlock_mutex(&lock->wait_lock, flags);
 		schedule_preempt_disabled();
 
-		if (!first && __mutex_waiter_is_first(lock, &waiter)) {
-			first = true;
-			__mutex_set_flag(lock, MUTEX_FLAG_HANDOFF);
-		}
-
 		set_task_state(task, state);
 		/*
 		 * Here we order against unlock; we must either see it change
 		 * state back to RUNNING and fall through the next schedule(),
 		 * or we must see its unlock and acquire.
 		 */
+
+		if (!first && __mutex_waiter_is_first(lock, &waiter)) {
+			first = true;
+			__mutex_set_flag(lock, MUTEX_FLAG_HANDOFF);
+		}
+
 		if ((first && mutex_optimistic_spin(lock, ww_ctx, use_ww_ctx, true)) ||
 		    __mutex_trylock(lock, first))
 			break;
-- 
2.7.4
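
Not part of the patch, just for illustration: below is a minimal,
self-contained userspace sketch of the "set state, then check, then
sleep" ordering that the commit message relies on. All names in it
(waiter_state, handoff, park(), owner()) are invented for the example;
the real code uses set_task_state(), __mutex_waiter_is_first() /
MUTEX_FLAG_HANDOFF and schedule_preempt_disabled(). Build with
"cc -std=c11 -pthread sketch.c".

/*
 * Illustrative sketch only, not kernel code.  The waiter publishes its
 * sleeping state *before* checking the handoff condition, so a
 * concurrent handoff is either seen by the check or resets the state
 * and the sleep falls through.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <sched.h>

enum { RUNNING, SLEEPING };

static atomic_int waiter_state = RUNNING;	/* stand-in for task->state */
static atomic_int handoff = 0;			/* stand-in for the lock handoff */

/* Stand-in for schedule(): blocks only while we are still SLEEPING. */
static void park(void)
{
	while (atomic_load(&waiter_state) == SLEEPING)
		sched_yield();
}

static void *waiter(void *arg)
{
	(void)arg;
	for (;;) {
		/* 1. Publish that we are about to sleep (set_task_state()). */
		atomic_store(&waiter_state, SLEEPING);

		/*
		 * 2. Only now check the condition.  If the owner completes
		 *    the handoff after this check, it also resets
		 *    waiter_state to RUNNING, so park() falls straight
		 *    through and we re-check on the next loop iteration.
		 */
		if (atomic_load(&handoff))
			break;

		/* 3. Sleep (schedule()). */
		park();
	}
	atomic_store(&waiter_state, RUNNING);
	puts("waiter: got the lock via handoff");
	return NULL;
}

static void *owner(void *arg)
{
	(void)arg;
	/* Hand the lock off, then wake the waiter (wake_up_process()). */
	atomic_store(&handoff, 1);
	atomic_store(&waiter_state, RUNNING);
	return NULL;
}

int main(void)
{
	pthread_t w, o;

	pthread_create(&w, NULL, waiter, NULL);
	pthread_create(&o, NULL, owner, NULL);
	pthread_join(w, NULL);
	pthread_join(o, NULL);
	return 0;
}

If the first two steps in waiter() were swapped, the owner could
complete the handoff between the check and the state store, and park()
would then sleep with nobody left to wake it, which is the lost-wakeup
window the commit message describes.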