On Thu, May 19, 2016 at 10:39:26PM -0700, Davidlohr Bueso wrote:
> However, this is semantically different to what was previously done
> with ticket locks in that spin_unlock_wait() will always observe all
> waiters by adding itself to the tail.
static inline void arch_spin_unlock_wait(arch_spinlock_t *lock)
{
	__ticket_t head = READ_ONCE(lock->tickets.head);

	for (;;) {
		struct __raw_tickets tmp = READ_ONCE(lock->tickets);
		/*
		 * We need to check "unlocked" in a loop, tmp.head == head
		 * can be false positive because of overflow.
		 */
		if (__tickets_equal(tmp.head, tmp.tail) ||
		    !__tickets_equal(tmp.head, head))
			break;

		cpu_relax();
	}
}

I'm not seeing that (although I think I agreed yesterday on IRC).

Note how we observe the head and then loop until either the lock is
unlocked (head == tail) or simply head isn't what it used to be. And
head is the lock-holder end of the queue; see arch_spin_unlock()
incrementing it.

So the ticket lock too should only wait for the current lock holder to
go away, not any longer.