On Mon, Apr 16, 2007 at 04:10:59PM -0700, Michael K. Edwards wrote:
>> This observation of Peter's is the best thing to come out of this
>> whole foofaraw. Looking at what's happening in CPU-land, I think it's
>> going to be necessary, within a couple of years, to replace the whole
>> idea of "CPU scheduling" with "run queue scheduling" across a complex,
>> possibly dynamic mix of CPU-ish resources. Ergo, there's not much
>> point in churning the mainline scheduler through a design that isn't
>> significantly more flexible than any of those now under discussion.
On Tue, Apr 17, 2007 at 05:55:28AM +0200, Nick Piggin wrote:
> Why? If you do that, then your load balancer just becomes less flexible
> because it is harder to have tasks run on one or the other.

On Tue, Apr 17, 2007 at 05:55:28AM +0200, Nick Piggin wrote:
> You can have single-runqueue-per-domain behaviour (or close to) just by
> relaxing all restrictions on idle load balancing within that domain. It
> is harder to go the other way and place any per-cpu affinity or
> restrictions with multiple cpus on a single runqueue.

The big sticking point here is order-sensitivity. One can point to the
stringent sched_yield() ordering semantics, but those are not terribly
important in and of themselves. The more significant case is RT
applications, which are order-sensitive: per-cpu runqueues rather
significantly disturb the ordering such applications rely on (a sketch of
what that ordering means follows below).

In terms of a plugging framework, the per-cpu arrangement precludes, or
makes extremely awkward, scheduling policies that don't have per-cpu
runqueues, for instance the 2.4.x policy. There is also the alternative
SMP scalability strategy of a lockless scheduler with a single global
queue, which is more performance-oriented (also sketched below).

-- 
wli
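
A sketch of what that ordering means, for illustration only (user-space C
with invented names like struct task and rt_queue, not kernel interfaces):
with a single global per-priority FIFO, equal-priority RT tasks run in
their global enqueue order, and sched_yield() has the well-defined meaning
of "go to the tail of your own priority's list". With one such queue per
CPU, that order is only defined per CPU.

#include <stddef.h>

#define NPRIO 100	/* 0 = highest priority */

struct task {
	struct task *next;
	int prio;
};

struct rt_queue {
	struct task *head[NPRIO];
	struct task *tail[NPRIO];
};

/* FIFO insert at the tail of the task's priority list. */
static void enqueue_tail(struct rt_queue *q, struct task *t)
{
	t->next = NULL;
	if (q->tail[t->prio])
		q->tail[t->prio]->next = t;
	else
		q->head[t->prio] = t;
	q->tail[t->prio] = t;
}

/* Dispatch: highest priority first, strict FIFO within a priority. */
static struct task *pick_next(struct rt_queue *q)
{
	for (int p = 0; p < NPRIO; p++) {
		struct task *t = q->head[p];
		if (t) {
			q->head[p] = t->next;
			if (!q->head[p])
				q->tail[p] = NULL;
			return t;
		}
	}
	return NULL;
}

/* sched_yield(): the caller goes behind every equal-priority task
 * already queued -- a globally meaningful statement only when there
 * is one queue for everybody. */
static void yield_task(struct rt_queue *q, struct task *t)
{
	enqueue_tail(q, t);
}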
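
As for the lockless single-queue strategy, a minimal sketch assuming C11
atomics (again invented names, not any existing scheduler's code):
producers push woken tasks onto one shared list with a compare-and-swap
loop, and the dispatcher drains the whole list with a single atomic
exchange, so no lock is taken on either side. The push is LIFO, so a real
scheduler would still have to impose priority/FIFO order when draining.

#include <stdatomic.h>
#include <stddef.h>

struct rq_node {
	struct rq_node *next;
	int tid;			/* stand-in for a task */
};

static _Atomic(struct rq_node *) global_rq = NULL;

/* Multi-producer, lock-free push onto the single global list. */
static void rq_push(struct rq_node *n)
{
	struct rq_node *head = atomic_load_explicit(&global_rq,
						    memory_order_relaxed);
	do {
		n->next = head;
	} while (!atomic_compare_exchange_weak_explicit(&global_rq, &head, n,
							memory_order_release,
							memory_order_relaxed));
}

/* The dispatcher grabs everything queued so far in one shot. */
static struct rq_node *rq_drain(void)
{
	return atomic_exchange_explicit(&global_rq, NULL,
					memory_order_acquire);
}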