On Mon, Apr 01, 2019 at 11:06:53AM +0200, Peter Zijlstra wrote:
> On Fri, Mar 29, 2019 at 11:20:04AM -0400, Alex Kogan wrote:
> > diff --git a/kernel/locking/mcs_spinlock.h b/kernel/locking/mcs_spinlock.h
> > index bc6d3244e1af..71ee4b64c5d4 100644
> > --- a/kernel/locking/mcs_spinlock.h
> > +++ b/kernel/locking/mcs_spinlock.h
> > @@ -17,8 +17,18 @@
> >  
> >  struct mcs_spinlock {
> >     struct mcs_spinlock *next;
> > +#ifndef CONFIG_NUMA_AWARE_SPINLOCKS
> >     int locked; /* 1 if lock acquired */
> >     int count;  /* nesting count, see qspinlock.c */
> > +#else /* CONFIG_NUMA_AWARE_SPINLOCKS */
> > +   uintptr_t locked; /* 1 if lock acquired, 0 if not, other values */
> > +                     /* represent a pointer to the secondary queue head */
> > +   u32 node_and_count;     /* node id on which this thread is running */
> > +                           /* with two lower bits reserved for nesting */
> > +                           /* count, see qspinlock.c */
> > +   u32 encoded_tail; /* encoding of this node as the main queue tail */
> > +   struct mcs_spinlock *tail;    /* points to the secondary queue tail */
> > +#endif /* CONFIG_NUMA_AWARE_SPINLOCKS */
> >  };
> 
> Please, have another look at the paravirt code, in particular at struct
> pv_node and its usage. This is horrible.

Thing is, this turns into a right mess when you also have PV spinlocks
enabled.

One thing we could maybe do is change locked and count to u8; then your
overlay structure could be something like:

struct mcs_spinlock {
        struct mcs_spinlock *next;
        u8 locked;
        u8 count;
};

struct cna_node {
        /* struct mcs_spinlock overlay */
        struct mcs_spinlock *next;
        u8 locked;
        u8 count;

        /* our CNA bits, consuming the slack and PV space */
        u16 node;
        u32 encoded_tail;
        struct mcs_spinlock *head;
        struct mcs_spinlock *tail;
};

Then you also don't need the first two patches.
