On Thu, 2014-04-24 at 12:00 +0200, Peter Zijlstra wrote:
> On Wed, Apr 23, 2014 at 03:31:57PM -0700, Tim Chen wrote:
> > The current code will call pick_next_task_fair a second time
> > in the slow path if we did not pull any task in our first try.
> > This is really unnecessary, as we already know no task can be
> > pulled, and it doubles the delay before the cpu can enter idle.
> >
> > We instrumented some network workloads and saw that
> > pick_next_task_fair is frequently called twice before a cpu
> > enters idle. The call to pick_next_task_fair can add non-trivial
> > latency, as it calls load_balance, which runs find_busiest_group
> > on a hierarchy of sched domains spanning the cpus of a large
> > system. On some 4-socket systems, we saw almost 0.25 msec spent
> > per call of pick_next_task_fair before a cpu could be idled.
> >
> > This patch skips pick_next_task_fair in the slow path if it
> > has already been invoked.
>
> How about something like so?
Yes, this version is more concise.

> It's a little more contained.
>
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -2636,8 +2636,14 @@ pick_next_task(struct rq *rq, struct tas
>  	if (likely(prev->sched_class == class &&
>  		   rq->nr_running == rq->cfs.h_nr_running)) {
>  		p = fair_sched_class.pick_next_task(rq, prev);
> -		if (likely(p && p != RETRY_TASK))
> -			return p;
> +		if (unlikely(p == RETRY_TASK))
> +			goto again;
> +
> +		/* assumes fair_sched_class->next == idle_sched_class */
> +		if (unlikely(!p))
> +			p = pick_next_task_idle(rq, prev);

Should be:

	p = idle_sched_class.pick_next_task(rq, prev);

> +
> +		return p;
>  	}
>
>  again:

I'll respin the patch with these changes.

Thanks.

Tim
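For context on why the fair pick could run twice: when the fast path fails, pick_next_task() falls through to a loop that walks every scheduling class in priority order, and the fair class sits above idle, so its pick_next_task (and with it idle_balance/load_balance) would be invoked a second time. A rough sketch of that slow-path loop as it stood in kernel/sched/core.c around this time (details may differ slightly from the exact tree):

again:
	/*
	 * Walk the scheduling classes from highest to lowest
	 * priority; the first one that returns a task wins.
	 */
	for_each_class(class) {
		p = class->pick_next_task(rq, prev);
		if (p) {
			/* the fair class may ask us to start over */
			if (unlikely(p == RETRY_TASK))
				goto again;
			return p;
		}
	}

	BUG(); /* the idle class should always have a runnable task */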
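And with Tim's correction folded into Peter's hunk, the resulting fast path would read roughly as follows; this is a sketch of the combined result, not necessarily the exact code that was eventually committed:

	/*
	 * Optimization: if all runnable tasks on this rq are in
	 * the fair class, ask it directly and skip the class loop.
	 */
	if (likely(prev->sched_class == class &&
		   rq->nr_running == rq->cfs.h_nr_running)) {
		p = fair_sched_class.pick_next_task(rq, prev);
		if (unlikely(p == RETRY_TASK))
			goto again;

		/*
		 * Nothing runnable in CFS and nothing pulled: hand
		 * the rq straight to the idle class rather than
		 * re-walking all classes (assumes
		 * fair_sched_class->next == idle_sched_class).
		 */
		if (unlikely(!p))
			p = idle_sched_class.pick_next_task(rq, prev);

		return p;
	}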