On Thu, Feb 07, 2013 at 07:48:33PM -0800, Michel Lespinasse wrote:
> On Thu, Feb 7, 2013 at 4:40 PM, Paul E. McKenney
> <paul...@linux.vnet.ibm.com> wrote:
> > On Thu, Feb 07, 2013 at 04:03:54PM -0800, Eric Dumazet wrote:
> >> It adds yet another memory write to store the node pointer in the
> >> lock...
> >>
> >> I suspect it's going to increase false sharing.
> >
> > On the other hand, compared to straight MCS, it reduces the need to
> > pass the node address around.  Furthermore, the node pointer is likely
> > to be in the same cache line as the lock word itself, and finally
> > some architectures can do a double-pointer store.
> >
> > Of course, it might well be slower, but it seems like it is worth
> > giving it a try.
> 
> Right. Another nice point about this approach is that there needs to
> be only one node per spinning CPU, so the node pointers (both tail and
> next) might be replaced with CPU identifiers, which would bring the
> spinlock size down to the same as with the ticket spinlock (which in
> turn makes it that much more likely that we'll have atomic stores of
> that size).
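
[The CPU-identifier encoding described above could be sketched roughly as
below. All names (mcs_node, encode_tail, MAX_CPUS) are illustrative, not
the kernel's actual types; the point is only that a statically allocated
per-CPU node array lets a 32-bit index stand in for a pointer, with 0
reserved to mean "uncontended":]

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define MAX_CPUS 4096
#define NO_TAIL  0u           /* 0 means "lock uncontended" */

struct mcs_node {
    uint32_t next_cpu;        /* successor's encoded CPU id, or 0 */
    uint32_t locked;
};

/* one statically allocated node per CPU */
static struct mcs_node mcs_nodes[MAX_CPUS];

/* CPU id c is stored as c + 1 so that 0 can keep meaning "empty" */
static inline uint32_t encode_tail(int cpu)
{
    return (uint32_t)cpu + 1;
}

static inline struct mcs_node *decode_tail(uint32_t tail)
{
    return tail == NO_TAIL ? NULL : &mcs_nodes[tail - 1];
}
```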

Good point!  I must admit that this is one advantage of having the
various _irq spinlock acquisition primitives disable irqs before
spinning.  ;-)

                                                        Thanx, Paul

