On Thu, 2013-01-03 at 08:10 -0800, Eric Dumazet wrote:

> > But then would the problem even exist? If the lock is on its own cache
> > line, it shouldn't cause a performance issue if other CPUs are spinning
> > on it. Would it?
>
> Not sure I understand the question.
I'll explain my question better. I thought the whole point of Rik's
patches was to solve a performance problem caused by contention on a
lock that shares a cache line with the data it protects.

In the ideal case, locks won't be contended, and are taken and released
quickly (being from the RT world, I know this isn't true :-( ). In that
case, it's also advantageous to keep the lock on the same cache line as
the data that's being updated. That way, the process of grabbing the
lock also pulls in the data that you will soon be using.

But the problem occurs when you have a bunch of other CPUs trying to
take this lock in a tight spin. Every time the owner of the lock touches
the data, the other CPUs doing a LOCK read on the spinlock cause bus
contention on the owner CPU, because the data shares the cache line and
has to be kept coherent: the owner keeps dirtying the very cache line
that is under a tight loop of LOCK reads from the other CPUs. By adding
the delays, the CPU holding the lock doesn't stall at every update of
the data protected by the lock.

Thus, if monitor/mwait is ideal only for locks on their own cache
lines, then it is pointless for the locks that are causing the issue we
are trying to fix.
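To be concrete, the kind of backoff loop I have in mind looks roughly
like this. It's a minimal userspace sketch, not Rik's actual patch; the
names (backoff_lock, spin_lock_backoff, MAX_DELAY) are made up for
illustration and the "pause" spin-wait hint is x86-only:

/*
 * Sketch only (NOT Rik's actual patch): a test-and-set spinlock
 * where waiters spin on a plain read with a growing delay between
 * checks, instead of hammering the line with locked accesses.
 */
#include <stdatomic.h>

#define MAX_DELAY	1024

struct backoff_lock {
	atomic_int locked;	/* 0 = free, 1 = held */
	int data;		/* deliberately on the same cache line */
};

static void spin_lock_backoff(struct backoff_lock *l)
{
	int delay = 1;

	while (atomic_exchange_explicit(&l->locked, 1,
					memory_order_acquire)) {
		do {
			/* x86-only spin-wait hint */
			for (int i = 0; i < delay; i++)
				__asm__ volatile("pause");
			if (delay < MAX_DELAY)
				delay <<= 1;
		} while (atomic_load_explicit(&l->locked,
					      memory_order_relaxed));
	}
}

static void spin_unlock_backoff(struct backoff_lock *l)
{
	atomic_store_explicit(&l->locked, 0, memory_order_release);
}

While a waiter sits in the inner loop it issues no locked accesses at
all, so the owner can update the data on the shared line without
stalling.

-- Steve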