On Thu, 2016-07-21 at 15:29 -0700, Jason Low wrote:
> On Wed, 2016-07-20 at 14:37 -0400, Waiman Long wrote:
> > On 07/20/2016 12:39 AM, Jason Low wrote:
> > > On Tue, 2016-07-19 at 16:04 -0700, Jason Low wrote:
> > > > Hi Imre,
> > > > 
> > > > Here is a patch which prevents a thread from spending too much
> > > > "time" waiting for a mutex in the !CONFIG_MUTEX_SPIN_ON_OWNER case.
> > > > 
> > > > Would you like to try this out and see if this addresses the mutex
> > > > starvation issue you are seeing in your workload when optimistic
> > > > spinning is disabled?
> > > 
> > > Although it looks like it didn't take care of the 'lock stealing'
> > > case in the slowpath. Here is the updated fixed version:
> > > 
> > > ---
> > > Signed-off-by: Jason Low <jason.l...@hpe.com>
> > > ---
> > >  include/linux/mutex.h  |  2 ++
> > >  kernel/locking/mutex.c | 65 ++++++++++++++++++++++++++++++++++++++++++++------
> > >  2 files changed, 60 insertions(+), 7 deletions(-)
> > > 
> > > diff --git a/include/linux/mutex.h b/include/linux/mutex.h
> > > index 2cb7531..c1ca68d 100644
> > > --- a/include/linux/mutex.h
> > > +++ b/include/linux/mutex.h
> > > @@ -57,6 +57,8 @@ struct mutex {
> > >  #endif
> > >  #ifdef CONFIG_MUTEX_SPIN_ON_OWNER
> > >  	struct optimistic_spin_queue osq; /* Spinner MCS lock */
> > > +#else
> > > +	bool yield_to_waiter; /* Prevent starvation when spinning disabled */
> > >  #endif
> > >  #ifdef CONFIG_DEBUG_MUTEXES
> > >  	void *magic;
> > 
> > You don't need that on non-SMP system. So maybe you should put it
> > under #ifdef CONFIG_SMP block.
> 
> Right, maybe something like:
> 
> #ifdef CONFIG_MUTEX_SPIN_ON_OWNER
> ...
> ...
> #elif !defined(CONFIG_SMP) /* If optimistic spinning disabled */
> 	bool yield_to_waiter;
> #endif
> 
> > > @@ -556,7 +595,8 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
> > >  		 * other waiters. We only attempt the xchg if the count is
> > >  		 * non-negative in order to avoid unnecessary xchg operations:
> > >  		 */
> > > -		if (atomic_read(&lock->count) >= 0 &&
> > > +		if ((!need_yield_to_waiter(lock) || loop > 1) &&
> > > +		    atomic_read(&lock->count) >= 0 &&
> > >  		    (atomic_xchg_acquire(&lock->count, -1) == 1))
> > 
> > I think you need to reset the yield_to_waiter variable here when
> > loop > 1 instead of at the end of the loop.
> 
> So I think in the current state, only the top waiter would be able to
> both set and clear the yield_to_waiter variable anyway. However, I agree
> that this detail is not obvious and it would be better to reset the
> variable here when loop > 1 to make it more readable.
AFAICS an interruptible waiter behind the top waiter receiving a signal
and grabbing the lock could also reset yield_to_waiter incorrectly in
that way, increasing the top waiter's delay arbitrarily.

--Imre

> > >  			break;
> > > 
> > > @@ -581,6 +621,7 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
> > >  		spin_unlock_mutex(&lock->wait_lock, flags);
> > >  		schedule_preempt_disabled();
> > >  		spin_lock_mutex(&lock->wait_lock, flags);
> > > +		do_yield_to_waiter(lock, loop);
> > >  	}
> > >  	__set_task_state(task, TASK_RUNNING);
> > > 
> > > @@ -590,6 +631,10 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
> > >  		atomic_set(&lock->count, 0);
> > >  	debug_mutex_free_waiter(&waiter);
> > > 
> > > +#ifndef CONFIG_MUTEX_SPIN_ON_OWNER
> > > +	lock->yield_to_waiter = false;
> > > +#endif
> > > +
> > 
> > Maybe you should do the reset in an inline function instead.
> 
> Yes, this should be abstracted into a function like we do with
> do_yield_to_waiter().
> 
> 
> Jason
> 
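For illustration, here is a rough sketch of how the helpers discussed
above might be structured in the !CONFIG_MUTEX_SPIN_ON_OWNER case. The
names need_yield_to_waiter() and do_yield_to_waiter() and the
lock->yield_to_waiter flag come from the patch quoted above; the
clear_yield_to_waiter() helper, the threshold constant and the exact
logic are assumptions for the sake of discussion, not the actual patch:

/* Sketch only, in the style of kernel/locking/mutex.c -- not the patch. */
#if defined(CONFIG_SMP) && !defined(CONFIG_MUTEX_SPIN_ON_OWNER)

/* Hypothetical threshold: how many sleeps before we demand the lock. */
#define MUTEX_YIELD_TO_WAITER_LOOPS	16

static inline bool need_yield_to_waiter(struct mutex *lock)
{
	return lock->yield_to_waiter;
}

/* Inline reset helper, as suggested by Waiman (name is hypothetical). */
static inline void clear_yield_to_waiter(struct mutex *lock)
{
	lock->yield_to_waiter = false;
}

/*
 * Called in the slowpath after each wakeup, with lock->wait_lock held.
 * A waiter that has slept "too many" times sets the flag so that lock
 * stealers back off until a waiter gets the lock.
 */
static inline void do_yield_to_waiter(struct mutex *lock, int loop)
{
	if (loop < MUTEX_YIELD_TO_WAITER_LOOPS)
		return;

	lock->yield_to_waiter = true;
}

#else

static inline bool need_yield_to_waiter(struct mutex *lock) { return false; }
static inline void clear_yield_to_waiter(struct mutex *lock) { }
static inline void do_yield_to_waiter(struct mutex *lock, int loop) { }

#endif

With something along these lines, the stealing path in
__mutex_lock_common() could call clear_yield_to_waiter() when loop > 1
instead of clearing the flag at the end of the loop; the
interruptible-waiter case above would still need handling there, for
example by letting only the waiter that set the flag clear it.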