On Wed, 2013-02-13 at 14:05 -0500, Steven Rostedt wrote:
> That is, the CPU is about to go idle, thus a load balance is done, and
> perhaps a task is pulled to the current queue. To do this, rq locks and
> such need to be grabbed across CPUs.
Right, grabbing the rq locks and all isn't my main worry, we do that in either case; my worry is the two extra switches we do for no good reason at all. Now it's not as if we'll actually run the idle thread, that would be very expensive indeed, so it's just the two context_switch() calls, but still, I somehow remember us spending quite a lot of effort to keep idle_balance() where it is now, if only I could remember the benchmark we had for it :/

Can't you do the opposite and fold post_schedule() into idle_balance()?

/me goes stare at the code to help remember what the -rt requirements were again..

Ah, so that's push_rt_task(), which wants to move extra rt tasks to other cpus. Doing that from where we have idle_balance() won't actually work, I think, since we might need to move current, which we cannot do at that point -- I'm thinking of a higher prio task (than current) waking to this cpu and then cascading current to another cpu; can that happen?

If we never need to migrate current, because we avoid the cascade by ensuring we wake the higher prio task to the appropriate cpu, we might just get away with it.
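[For orientation, a trimmed and approximate sketch of how these pieces were ordered in __schedule() around this time (kernel/sched/core.c, roughly v3.8); most details are elided and this is from memory, not a verbatim copy:]

	/*
	 * Trimmed sketch of __schedule(): idle_balance() runs under the rq
	 * lock before a new task is picked, while post_schedule() runs only
	 * after the context switch has happened.
	 */
	static void __sched __schedule(void)
	{
		struct task_struct *prev, *next;
		struct rq *rq = cpu_rq(smp_processor_id());

		raw_spin_lock_irq(&rq->lock);
		prev = rq->curr;

		pre_schedule(rq, prev);		/* e.g. pull_rt_task() for -rt */

		if (unlikely(!rq->nr_running))
			idle_balance(smp_processor_id(), rq);	/* about to go idle: try to pull work */

		put_prev_task(rq, prev);
		next = pick_next_task(rq);

		if (likely(prev != next))
			context_switch(rq, prev, next);	/* unlocks the rq */
		else
			raw_spin_unlock_irq(&rq->lock);

		post_schedule(rq);		/* e.g. push_rt_tasks(): shed surplus rt tasks */
	}

[The ordering is the crux of the objection above: at idle_balance() time prev is still rq->curr and cannot be migrated, whereas by post_schedule() it has been switched out and is just another queued task, so the rt push path is free to move it.]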