There is no agreed-upon definition of spin_unlock_wait()'s semantics, and it appears that all callers could do just as well with a lock/unlock pair. This commit therefore replaces the spin_unlock_wait() call in do_task_dead() with spin_lock() followed immediately by spin_unlock(). This should be safe from a performance perspective because the lock is this task's ->pi_lock, and this code is executed only as the task is exiting.
Signed-off-by: Paul E. McKenney <paul...@linux.vnet.ibm.com>
Cc: Ingo Molnar <mi...@redhat.com>
Cc: Peter Zijlstra <pet...@infradead.org>
Cc: Will Deacon <will.dea...@arm.com>
Cc: Alan Stern <st...@rowland.harvard.edu>
Cc: Andrea Parri <parri.and...@gmail.com>
Cc: Linus Torvalds <torva...@linux-foundation.org>
---
 kernel/sched/core.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index e91138fcde86..6dea3d9728c8 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3461,7 +3461,8 @@ void __noreturn do_task_dead(void)
 	 * is held by try_to_wake_up()
 	 */
 	smp_mb();
-	raw_spin_unlock_wait(&current->pi_lock);
+	raw_spin_lock(&current->pi_lock);
+	raw_spin_unlock(&current->pi_lock);
 
 	/* Causes final put_task_struct in finish_task_switch(): */
 	__set_current_state(TASK_DEAD);
-- 
2.5.2
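
Not part of the patch, just for illustration: a minimal userspace sketch of the
idiom being relied on here, using POSIX spinlocks as a stand-in for the kernel's
raw_spinlock API. The writer() thread, shared_data, and the pthread calls are all
illustrative assumptions rather than kernel code; the point is only that acquiring
and immediately releasing a lock guarantees that any critical section that began
before the acquire has completed, which is at least as strong as what callers
expected from spin_unlock_wait().

#include <pthread.h>
#include <stdio.h>

static pthread_spinlock_t lock;
static int shared_data;

/*
 * Another thread's critical section, playing the role of
 * try_to_wake_up() holding ->pi_lock while it updates task state.
 */
static void *writer(void *arg)
{
	pthread_spin_lock(&lock);
	shared_data = 42;
	pthread_spin_unlock(&lock);
	return NULL;
}

int main(void)
{
	pthread_t t;

	pthread_spin_init(&lock, PTHREAD_PROCESS_PRIVATE);
	pthread_create(&t, NULL, writer, NULL);

	/*
	 * The lock/unlock pair stands in for the spin_unlock_wait()
	 * replacement: once we have acquired and released the lock,
	 * any critical section that started before our acquire has
	 * finished, and its stores are ordered before whatever we
	 * do afterwards.
	 */
	pthread_spin_lock(&lock);
	pthread_spin_unlock(&lock);

	pthread_join(t, NULL);
	printf("shared_data = %d\n", shared_data);
	pthread_spin_destroy(&lock);
	return 0;
}

Build with "gcc -pthread sketch.c" (hypothetical file name). In the kernel case
the same reasoning applies to ->pi_lock, which is why the lock/unlock pair is an
acceptable, and better defined, substitute for raw_spin_unlock_wait().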