On Wed, 2016-05-11 at 03:16 +0800, Yuyang Du wrote:

> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -3027,6 +3027,9 @@ void remove_entity_load_avg(struct sched
> >  
> >  static inline unsigned long cfs_rq_runnable_load_avg(struct cfs_rq *cfs_rq)
> >  {
> > +        if (sched_feat(LB_TIP_AVG_HIGH) && cfs_rq->load.weight > cfs_rq->runnable_load_avg*2)
> > +                return cfs_rq->runnable_load_avg + min_t(unsigned long, NICE_0_LOAD,
> > +                                                         cfs_rq->load.weight/2);
> >          return cfs_rq->runnable_load_avg;
> >  }
>   
> cfs_rq->runnable_load_avg is certainly no greater than (and in this case much
> less than, maybe half of) load.weight, whereas load_avg is not necessarily a
> rock in the gearbox that only impedes speeding up; it impedes slowing down too.

BTW, the reason the hack helped is that the long (30ms) sleep/run cycle of
the benchmark's default settings causes a large-amplitude sawtooth in the
load numbers (roughly a ~300 - ~700 range), dinging up load delta
resolvability.
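
To make that concrete, below is a toy userspace sketch (illustrative only,
not the kernel's fixed-point PELT code) that assumes a simplified
per-millisecond recurrence -- avg = avg*y + weight*(1-y) while running,
avg = avg*y while sleeping, with y^32 == 1/2 -- for a single NICE_0 task on
the benchmark's 30ms run / 30ms sleep cycle:

/*
 * Toy sketch (userspace, illustrative only): approximate a PELT-style
 * geometric load average of one NICE_0 task that runs 30ms / sleeps 30ms.
 * Simplified per-ms recurrence, assumed for illustration:
 *   running:  avg = avg * y + weight * (1 - y)
 *   sleeping: avg = avg * y
 * with y chosen so that y^32 == 1/2.
 */
#include <stdio.h>

int main(void)
{
        const double weight = 1024.0;   /* ~NICE_0_LOAD */
        const double y = 0.97857;       /* ~2^(-1/32) */
        double avg = 0.0, lo = weight, hi = 0.0;
        int ms;

        for (ms = 0; ms < 100 * 60; ms++) {     /* 100 sleep/run cycles */
                int running = (ms % 60) < 30;

                avg = avg * y + (running ? weight * (1.0 - y) : 0.0);
                if (ms >= 99 * 60) {            /* sample the last cycle */
                        if (avg < lo) lo = avg;
                        if (avg > hi) hi = avg;
                }
        }
        printf("steady-state sawtooth: ~%.0f .. ~%.0f (weight %.0f)\n",
               lo, hi, weight);
        return 0;
}

With those assumptions it settles into a sawtooth of roughly ~350..~670
around a mean near half the weight, i.e. the same ballpark as the numbers
above.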

        -Mike
