On Mon, 2019-03-11 at 13:52 +0800, Joyce Kong wrote:
> The spinlock implementation is unfair: some threads may take locks
> aggressively while leaving the other threads starving for a long time.
>
> This patch introduces ticketlock, which gives each waiting thread a
> ticket so they can take the lock one by one. First come, first
> served.
> This avoids starvation for too long and is more predictable.
>
> Suggested-by: Jerin Jacob <jer...@marvell.com>
> Signed-off-by: Joyce kong <joyce.k...@arm.com>
> Reviewed-by: Gavin Hu <gavin...@arm.com>
> Reviewed-by: Ola Liljedahl <ola.liljed...@arm.com>
> Reviewed-by: Honnappa Nagarahalli <honnappa.nagaraha...@arm.com>
> ---
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 097cfb4..12a091f 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -210,6 +210,10 @@ M: Cristian Dumitrescu <cristian.dumitre...@intel.com>
> F: lib/librte_eal/common/include/rte_bitmap.h
> F: app/test/test_bitmap.c
>
> +Ticketlock
> +M: Joyce Kong <joyce.k...@arm.com>
> +F: lib/librte_eal/common/include/generic/rte_ticketlock.h

Please add F: app/test/test_ticketlock.c in the next patch.
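As context for readers, here is a minimal usage sketch of the proposed API.
rte_ticketlock_init() and the <rte_ticketlock.h> wrapper header are assumed
from the rest of the series and are not in the hunks quoted below; the exact
names may differ.

/*
 * Minimal usage sketch (assumption: rte_ticketlock_init() and the
 * <rte_ticketlock.h> wrapper header come from the rest of the series).
 * Waiters are granted the lock in ticket order, so no lcore starves.
 */
#include <stdint.h>

#include <rte_common.h>
#include <rte_launch.h>
#include <rte_ticketlock.h>

static rte_ticketlock_t tl;
static uint64_t counter;

static int
worker(void *arg __rte_unused)
{
	unsigned int i;

	for (i = 0; i < 10000; i++) {
		rte_ticketlock_lock(&tl);
		counter++;	/* critical section, one lcore at a time */
		rte_ticketlock_unlock(&tl);
	}
	return 0;
}

static void
run_workers(void)
{
	/* Called on the master lcore after rte_eal_init(). */
	rte_ticketlock_init(&tl);
	rte_eal_mp_remote_launch(worker, NULL, SKIP_MASTER);
	rte_eal_mp_wait_lcore();
	/* counter == 10000 * (number of worker lcores) */
}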
>
> +#include <rte_lcore.h>
> +#include <rte_common.h>
> +#include <rte_pause.h>

Please sort the headers in alphabetical order.

> +
> +/**
> + * The rte_ticketlock_t type.
> + */
> +typedef struct {
> +	uint16_t current;
> +	uint16_t next;
> +} rte_ticketlock_t;
>
> +
> +/**
> + * Take the ticketlock.
> + *
> + * @param tl
> + *   A pointer to the ticketlock.
> + */
> +static inline __rte_experimental void
> +rte_ticketlock_lock(rte_ticketlock_t *tl)
> +{
> +	unsigned int me = __atomic_fetch_add(&tl->next, 1, __ATOMIC_RELAXED);

If current and next are uint16_t, why is "me" declared as unsigned int?

> +	while (__atomic_load_n(&tl->current, __ATOMIC_ACQUIRE) != me)
> +		rte_pause();
> +}
> +
> +/**
> + * Release the ticketlock.
> + *
> + * @param tl
> + *   A pointer to the ticketlock.
> + */
> +static inline __rte_experimental void
> +rte_ticketlock_unlock(rte_ticketlock_t *tl)
> +{
> +	unsigned int i = __atomic_load_n(&tl->current, __ATOMIC_RELAXED);
> +	i++;

You can save this line by folding the increment into the store:
__atomic_store_n(&tl->current, i + 1, __ATOMIC_RELEASE);

The code looks good. Please address the comments above and the compilation
issue with clang 7.1 reported earlier.
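To make the two suggestions above concrete, the pair could read roughly as
follows, with uint16_t tickets and the increment folded into the release
store (a sketch only; the final wording is up to the author):

static inline __rte_experimental void
rte_ticketlock_lock(rte_ticketlock_t *tl)
{
	/* The ticket counters are 16-bit, so take the ticket as uint16_t too. */
	uint16_t me = __atomic_fetch_add(&tl->next, 1, __ATOMIC_RELAXED);

	/* Spin until our ticket is the one currently being served. */
	while (__atomic_load_n(&tl->current, __ATOMIC_ACQUIRE) != me)
		rte_pause();
}

static inline __rte_experimental void
rte_ticketlock_unlock(rte_ticketlock_t *tl)
{
	uint16_t i = __atomic_load_n(&tl->current, __ATOMIC_RELAXED);

	/* Serve the next ticket and release the lock in a single store. */
	__atomic_store_n(&tl->current, i + 1, __ATOMIC_RELEASE);
}

Wrapping from 65535 back to 0 is harmless as long as both counters stay the
same width, which is another reason to keep "me" and "i" as uint16_t.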