On Tue, Apr 13, 2021 at 10:03:01AM +0200, Peter Zijlstra wrote:

> For ticket locks you really only need atomic_fetch_add() and
> smp_store_release() and an architectural guarantee that the
> atomic_fetch_add() has fwd progress under contention and that a sub-word
> store (through smp_store_release()) will fail the SC.
>
> Then you can do something like:
>
> void lock(atomic_t *lock)
> {
> 	u32 val = atomic_fetch_add(1<<16, lock); /* SC, gives us RCsc */
> 	u16 ticket = val >> 16;
>
> 	for (;;) {
> 		if (ticket == (u16)val)
> 			break;
> 		cpu_relax();
> 		val = atomic_read_acquire(lock);
> 	}
A possibly better alternative might be:

	if (ticket == (u16)val)
		return;

	atomic_cond_read_acquire(lock, ticket == (u16)VAL);

Since that allows architectures to use WFE-like constructs.

> }
>
> void unlock(atomic_t *lock)
> {
> 	u16 *ptr = (u16 *)lock + (!!__BIG_ENDIAN__);
> 	u32 val = atomic_read(lock);
>
> 	smp_store_release(ptr, (u16)val + 1);
> }
>
> That's _almost_ as simple as a test-and-set :-) It isn't quite optimal
> on x86 for not being allowed to use a memop on unlock, since it's being
> forced into a load-store because of all the volatile accesses, but whatever.
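For reference, here is a minimal, completely untested userspace sketch of the same ticket lock. The names (ticket_lock_t, ticket_lock(), ticket_unlock()) and the use of the GCC/Clang __atomic builtins in place of the kernel's atomic_t API are my own substitutions, purely so the thing compiles outside the kernel; it is an illustration of the layout above, not the kernel implementation:

	#include <stdint.h>

	typedef struct {
		uint32_t val;	/* next ticket in the high half, owner in the low half */
	} ticket_lock_t;

	static inline void ticket_lock(ticket_lock_t *lock)
	{
		/* Take a ticket; fully ordered RMW, like atomic_fetch_add(). */
		uint32_t val = __atomic_fetch_add(&lock->val, 1u << 16, __ATOMIC_SEQ_CST);
		uint16_t ticket = val >> 16;

		/* Uncontended fast path: our ticket already matches the owner. */
		if (ticket == (uint16_t)val)
			return;

		/* Spin with acquire loads until the owner field reaches our ticket. */
		while (ticket != (uint16_t)__atomic_load_n(&lock->val, __ATOMIC_ACQUIRE))
			;	/* cpu_relax() / WFE-style waiting would go here */
	}

	static inline void ticket_unlock(ticket_lock_t *lock)
	{
		/*
		 * Release-store only the owner half-word; on big-endian the
		 * owner (low) half lives at the higher address, matching the
		 * (!!__BIG_ENDIAN__) offset above.
		 */
	#if defined(__BYTE_ORDER__) && (__BYTE_ORDER__ == __ORDER_BIG_ENDIAN__)
		uint16_t *owner = (uint16_t *)&lock->val + 1;
	#else
		uint16_t *owner = (uint16_t *)&lock->val;
	#endif
		uint32_t val = __atomic_load_n(&lock->val, __ATOMIC_RELAXED);

		__atomic_store_n(owner, (uint16_t)val + 1, __ATOMIC_RELEASE);
	}

Whether that sub-word release store is actually safe, i.e. whether it causes a competing LL/SC on the full word to fail, is exactly the architectural guarantee the first paragraph asks for.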