On Tue, 26 Jan 2016, Mike Galbraith wrote:

> On Tue, 2016-01-26 at 10:26 -0600, Christoph Lameter wrote:
> > On Tue, 26 Jan 2016, Mike Galbraith wrote:
> >
> > > > Why would the deferring cause this overhead?
> > >
> > > Because we schedule to idle cores aggressively, thus we may pop in and
> > > out of idle at high frequency.
> >
> > What's the point of going idle if you have things to do soon?
>
> When a task schedules off, how do you know it'll be back at all, much
> less soon?

OK, so you are running an artificial benchmark that always wakes the
system back up right after it decides to go idle?
