On Wed, 2012-12-26 at 22:07 -0800, Michel Lespinasse wrote:
> If we go with per-spinlock tunings, I feel we'll most likely want to
> add an associative cache in order to avoid the 1/16 chance (~6%) of
> getting 595Mbit/s instead of 982Mbit/s when there is a hash collision.
>
> I would still prefer if we could make up something that didn't require
> per-spinlock tunings, but it's not clear if that'll work. At least we
> now know of a simple enough workload to figure it out :)
Even with per-spinlock tuning, we can find workloads where hold time
depends on the context. For example, a complex qdisc hierarchy typically
spends different amounts of time on enqueue and dequeue operations.

So the hash sounds good to me, because the hash key could mix both the
lock address and the caller IP (__builtin_return_address(1) in
ticket_spin_lock_wait()).