On Wed, 2014-06-11 at 11:37 -0700, Jason Low wrote:
> The mutex_trylock() function calls into __mutex_trylock_fastpath() when
> trying to obtain the mutex. On 32 bit x86, in the !__HAVE_ARCH_CMPXCHG
> case, __mutex_trylock_fastpath() calls directly into
> __mutex_trylock_slowpath() regardless of whether or not the mutex is
> locked.
> 
> In __mutex_trylock_slowpath(), we then acquire the wait_lock spinlock,
> xchg() lock->count with -1, set lock->count back to 0 if there are no
> waiters, and return true if the previous lock count was 1.
> 
> However, if the mutex is already locked, then there isn't much point
> in attempting all of the above expensive operations. In this patch, we only
> attempt the above trylock operations if the mutex is unlocked.
> 
> Signed-off-by: Jason Low <jason.l...@hp.com>

This is significantly cleaner than the v1 patch.

Reviewed-by: Davidlohr Bueso <davidl...@hp.com>
