Hi Dietmar,

On Fri, Jul 24, 2015 at 05:41:35PM +0100, Dietmar Eggemann wrote:
> Hi Yuyang,
> 
> On 15/07/15 01:04, Yuyang Du wrote:
> 
> [...]
> 
> > @@ -4674,7 +4487,7 @@ static long effective_load(struct task_group *tg, int cpu, long wl, long wg)
> >                 /*
> >                  * w = rw_i + @wl
> >                  */
> > -               w = se->my_q->load.weight + wl;
> > +               w = se->my_q->avg.load_avg + wl;
> > 
> >                 /*
> >                  * wl = S * s'_i; see (2)
> 
> There is a comment 'Per the above, wl is the new *se->load.weight*
> value'. This should be replaced by *se->avg.load_avg*. Also the function
> header explains the functionality of effective_load() based on weight
> and not sched_avg::load_avg.

I think it is already replaced with load_avg by the time effective_load()
is called.

About load.weight vs. load_avg, see below.
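
For reference, here is a condensed, stand-alone sketch (illustration
only, not the actual kernel function) of what one level of the
effective_load() walk computes after this change; the names follow the
comments in the quoted hunks above and below (S = tg->shares,
W = @wg + \Sum rw_j):

	/*
	 * Illustration only: one level of the effective_load()
	 * recursion with avg.load_avg substituted for load.weight.
	 * Not the kernel code.
	 */
	static long effective_load_level(long shares, long tg_weight,
					 long rq_load_avg,
					 long se_load_avg,
					 long wl, long wg)
	{
		long W = wg + tg_weight;	/* W = @wg + \Sum rw_j    */
		long w = rq_load_avg + wl;	/* w = rw_i + @wl         */

		if (W > 0 && w < W)
			wl = (w * shares) / W;	/* wl = S * s'_i; see (2) */
		else
			wl = shares;

		return wl - se_load_avg;	/* wl = S*(s'_i - s_i) (3)*/
	}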

> > @@ -4695,7 +4508,7 @@ static long effective_load(struct task_group *tg, int cpu, long wl, long wg)
> >                 /*
> >                  * wl = dw_i = S * (s'_i - s_i); see (3)
> >                  */
> > -               wl -= se->load.weight;
> > +               wl -= se->avg.load_avg;
> > 
> >                 /*
> >                  * Recursively apply this logic to all parent groups to compute
> > @@ -4769,14 +4582,14 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
> >          */
> >         if (sync) {
> >                 tg = task_group(current);
> > -               weight = current->se.load.weight;
> > +               weight = current->se.avg.load_avg;
> > 
> >                 this_load += effective_load(tg, this_cpu, -weight, -weight);
> >                 load += effective_load(tg, prev_cpu, 0, -weight);
> >         }
> > 
> >         tg = task_group(p);
> > -       weight = p->se.load.weight;
> > +       weight = p->se.avg.load_avg;
> 
> You changed cfs_rq->load.weight to cfs_rq->avg.load_avg and
> se->load.weight to se->avg.load_avg in effective_load() and
> wake_affine() in v2.
> I wasn't able to find an explanation of why you did this. I mean, we still
> have to maintain 'struct load_weight' on cfs_rq's and se's representing tg's.

Yes, I may not have explained it explicitly, but back then the change was
simply motivated by expressing the load consistently with load_avg.

As of now, the reasoning is largely the same; in addition, as I previously
stated, as far as the group SE is concerned, we use load_avg instead of
runnable_load_avg or load.weight.

As Morten also suggested, we need to revisit much of the load balancing
code, including rethinking which metric to use: load.weight,
runnable_load_avg, or load_avg. I think this patch series is just a
starting point.

Thanks,
Yuyang