On Mon, 2016-02-15 at 18:55 -0500, Waiman Long wrote:
> On 02/12/2016 03:40 PM, Peter Zijlstra wrote:
> > On Fri, Feb 12, 2016 at 12:32:12PM -0500, Waiman Long wrote:
> >> @@ -358,8 +373,8 @@ static bool mutex_optimistic_spin(struct mutex *lock,
> >>  		}
> >>
> >>  		mutex_set_owner(lock);
> >> -		osq_unlock(&lock->osq);
> >> -		return true;
> >> +		acquired = true;
> >> +		break;
> >>  	}
> >>
> >>  	/*
> >> @@ -380,7 +395,10 @@ static bool mutex_optimistic_spin(struct mutex *lock,
> >>  		cpu_relax_lowlatency();
> >>  	}
> >>
> >> -	osq_unlock(&lock->osq);
> >> +	if (!waiter)
> >> +		osq_unlock(&lock->osq);
> >> +	if (acquired || waiter)
> >> +		return acquired;
> >>  done:
> >>  	/*
> >>  	 * If we fell out of the spin path because of need_resched(),
> > Is there a reason to not also preempt in the wait-loop? Surely the same
> > reason is still valid there too?
>
> The waiter does check for need_sched(). So it will break out of the loop
> and return false in this case. This causes the waiter to loop back and
> goes to sleep if the lock can't be acquired. That is why I don't think
> we need to do another schedule_preempt_disabled() here.
The purpose of the additional reschedule point is to avoid delaying preemption, and that concern still applies when the spinner is a waiter. The only difference in the waiter case is that the delay is shorter, since the waiter does not need to be added to the wait_list again. Preemption can still be delayed there, though, so I think the additional preemption point should be taken in the waiter case as well.
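
Something along these lines is what I have in mind (untested sketch on top of the hunk quoted above, only to show where the extra preemption point would sit; it reuses the acquired/waiter/lock identifiers from your patch and the existing need_resched()/schedule_preempt_disabled() helpers):

	if (!waiter)
		osq_unlock(&lock->osq);

	if (acquired || waiter) {
		/*
		 * A waiter that dropped out of the spin loop because of
		 * need_resched() should reschedule here, before looping
		 * back to re-check the lock, mirroring the preemption
		 * point the non-waiter path hits after the done: label.
		 */
		if (!acquired && need_resched())
			schedule_preempt_disabled();
		return acquired;
	}

That keeps the waiter's early return, but makes sure it gives up the CPU before going back around, just like the non-waiter path does.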