There is no agreed-upon definition of spin_unlock_wait()'s semantics, and it appears that all callers could do just as well with a lock/unlock pair. This commit therefore removes xtensa's arch-specific arch_spin_unlock_wait() implementation.
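For reference, a call site such as

	spin_unlock_wait(&p->lock);

could instead be written as the lock/unlock pair below (a minimal sketch;
"p->lock" here stands for whichever spinlock the caller was waiting on):

	/* Wait for any current holder to release the lock. */
	spin_lock(&p->lock);
	spin_unlock(&p->lock);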
Signed-off-by: Paul E. McKenney <paul...@linux.vnet.ibm.com>
Cc: Chris Zankel <ch...@zankel.net>
Cc: Max Filippov <jcmvb...@gmail.com>
Cc: <linux-xte...@linux-xtensa.org>
---
 arch/xtensa/include/asm/spinlock.h | 5 -----
 1 file changed, 5 deletions(-)

diff --git a/arch/xtensa/include/asm/spinlock.h b/arch/xtensa/include/asm/spinlock.h
index a36221cf6363..3bb49681ee24 100644
--- a/arch/xtensa/include/asm/spinlock.h
+++ b/arch/xtensa/include/asm/spinlock.h
@@ -33,11 +33,6 @@
 
 #define arch_spin_is_locked(x) ((x)->slock != 0)
 
-static inline void arch_spin_unlock_wait(arch_spinlock_t *lock)
-{
-	smp_cond_load_acquire(&lock->slock, !VAL);
-}
-
 #define arch_spin_lock_flags(lock, flags) arch_spin_lock(lock)
 
 static inline void arch_spin_lock(arch_spinlock_t *lock)
-- 
2.5.2