On Thu, 2013-01-03 at 10:32 -0500, Steven Rostedt wrote:
> On Thu, 2013-01-03 at 05:35 -0800, Eric Dumazet wrote:
> > On Thu, 2013-01-03 at 08:24 -0500, Steven Rostedt wrote:
> > > On Thu, 2013-01-03 at 09:05 +0000, Jan Beulich wrote:
> > > >
> > > > How much bus traffic do monitor/mwait cause behind the scenes?
> > > >
> > > > I would suppose that this just snoops the bus for writes, but the
> > > > amount of bus traffic involved in this isn't explicitly documented.
> > > >
> > > > One downside of course is that unless a spin lock is made to occupy
> > > > exactly a cache line, false wakeups are possible.
> > >
> > > And that would probably be very likely, as the whole purpose of Rik's
> > > patches was to lower cache stalls due to other CPUs pounding on spin
> > > locks that share the cache line of what is being protected (and
> > > modified).
> >
> > A monitor/mwait would be an option only if using MCS (or K42 variant)
> > locks, where each cpu would wait on a private and dedicated cache line.
>
> But then would the problem even exist? If the lock is on its own cache
> line, it shouldn't cause a performance issue if other CPUs are spinning
> on it. Would it?
Not sure I understand the question. The lock itself would not consume a
whole cache line; only the per-cpu nodes chained on it would be
cache-line aligned.

http://www.cs.rochester.edu/research/synchronization/pseudocode/ss.html#mcs

Instead of spinning in:

	repeat while I->next = nil

this part could use monitor/mwait. But:

1) We don't have such a lock implementation.

2) Trying to save power while waiting on a spinlock would be a clear
sign something is wrong in the implementation. A spinlock should not
protect a long critical section.
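For reference, below is a minimal user-space sketch of the MCS scheme
described above, written with C11 atomics. The names (mcs_node,
mcs_lock_acquire, mcs_lock_release) are invented for illustration; as
point 1 notes, the kernel has no such lock, and an actual mwait-based
wait would need privileged/kernel support, so the spin loops here just
busy-wait where monitor/mwait could conceptually sit.

	/*
	 * Illustrative MCS lock sketch, not a kernel-ready implementation.
	 * Each waiter spins on its own cache-line-aligned node, so the
	 * coherence traffic stays on a private line.
	 */
	#include <stdatomic.h>
	#include <stdbool.h>
	#include <stddef.h>

	struct mcs_node {
		_Atomic(struct mcs_node *) next;
		atomic_bool locked;
	} __attribute__((aligned(64)));		/* one waiter per cache line */

	struct mcs_lock {
		_Atomic(struct mcs_node *) tail;
	};

	static void mcs_lock_acquire(struct mcs_lock *lock, struct mcs_node *node)
	{
		struct mcs_node *prev;

		atomic_store(&node->next, NULL);
		atomic_store(&node->locked, true);

		/* Swap ourselves in as the new tail of the waiter queue. */
		prev = atomic_exchange(&lock->tail, node);
		if (!prev)
			return;			/* lock was free */

		/* Publish our node, then spin on our *private* flag; this
		 * is the wait that could in principle use monitor/mwait. */
		atomic_store(&prev->next, node);
		while (atomic_load(&node->locked))
			;			/* cpu_relax() / mwait here */
	}

	static void mcs_lock_release(struct mcs_lock *lock, struct mcs_node *node)
	{
		struct mcs_node *next = atomic_load(&node->next);

		if (!next) {
			/* No visible successor: try to swing tail back to NULL. */
			struct mcs_node *expected = node;
			if (atomic_compare_exchange_strong(&lock->tail,
							   &expected, NULL))
				return;
			/* A successor is mid-enqueue; this is the
			 * "repeat while I->next = nil" spin quoted above. */
			while (!(next = atomic_load(&node->next)))
				;
		}
		atomic_store(&next->locked, false);	/* hand the lock off */
	}

Usage would be a shared "struct mcs_lock lock = { NULL };" plus one
"struct mcs_node" per contending CPU/thread, passed to both acquire and
release. The point of the structure is visible in mcs_lock_acquire():
the only line a waiter reads in its loop is its own node->locked, never
the shared lock word.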