On 02/07/2014 07:10 AM, Daniel Lezcano wrote:
> The scheduler main function 'schedule()' checks if there are no more tasks
> on the runqueue. Then it checks in idle_balance() whether a task should be
> pulled into the current runqueue, assuming it will go idle otherwise.
>
> But idle_balance() releases the rq->lock in order to look up the sched
> domains and takes the lock again right after. That opens a window where
> another cpu may put a task in our runqueue, so we won't go idle, but
> we have filled in idle_stamp, thinking we will.
>
> This patch closes the window by checking, after taking the lock again,
> whether the runqueue has been modified without a task having been pulled,
> so we won't go idle right after in the __schedule() function.
>
> Cc: alex....@linaro.org
> Cc: pet...@infradead.org
> Cc: mi...@kernel.org
> Signed-off-by: Daniel Lezcano <daniel.lezc...@linaro.org>
> Signed-off-by: Peter Zijlstra <pet...@infradead.org>
> ---
>  kernel/sched/fair.c | 7 +++++++
>  1 file changed, 7 insertions(+)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 428bc9d..5ebc681 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -6589,6 +6589,13 @@ void idle_balance(struct rq *this_rq)
>
>  	raw_spin_lock(&this_rq->lock);
>
> +	/*
> +	 * While browsing the domains, we released the rq lock.
> +	 * A task could have been enqueued in the meantime.
> +	 */
Mind moving the following lines up to here?

	if (curr_cost > this_rq->max_idle_balance_cost)
		this_rq->max_idle_balance_cost = curr_cost;

> +	if (this_rq->nr_running && !pulled_task)
> +		return;
> +
>  	if (pulled_task || time_after(jiffies, this_rq->next_balance)) {
>  		/*
>  		 * We are going idle. next_balance may be set based on
> --

Thanks
Alex