Hi, another minor comment below. :-)
On 01/06/16 20:39, Dietmar Eggemann wrote:
> The information whether a se/cfs_rq should get its load and
> utilization (se representing a task and root cfs_rq) or only its load
> (se representing a task group and cfs_rq owned by this se) updated can
> be passed into __update_load_avg() to avoid the additional if/else
> condition to set update_util.
>
> @running is changed to @update_util which now carries the information if
> the utilization of the se/cfs_rq should be updated and if the se/cfs_rq
> is running or not.
>
> Signed-off-by: Dietmar Eggemann <dietmar.eggem...@arm.com>
> ---
>  kernel/sched/fair.c | 42 +++++++++++++++++++++---------------------
>  1 file changed, 21 insertions(+), 21 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 3ae8e79fb687..a1c13975cf56 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -2669,6 +2669,10 @@ static u32 __compute_runnable_contrib(u64 n)
>
>  #define cap_scale(v, s) ((v)*(s) >> SCHED_CAPACITY_SHIFT)
>
> +#define upd_util_se(se, rng) ((entity_is_task(se) << 1) | (rng))
> +#define upd_util_cfs_rq(cfs_rq) \
> +	(((&rq_of(cfs_rq)->cfs == cfs_rq) << 1) | !!cfs_rq->curr)
> +
>  /*
>   * We can represent the historical contribution to runnable average as the
>   * coefficients of a geometric series. To do this we sub-divide our runnable
> @@ -2699,13 +2703,12 @@ static u32 __compute_runnable_contrib(u64 n)
>   */
>  static __always_inline int
>  __update_load_avg(u64 now, int cpu, struct sched_avg *sa,
> -		  unsigned long weight, int running, struct cfs_rq *cfs_rq)
> +		  unsigned long weight, int update_util, struct cfs_rq *cfs_rq)
>  {
>  	u64 delta, scaled_delta, periods;
>  	u32 contrib;
>  	unsigned int delta_w, scaled_delta_w, decayed = 0;
>  	unsigned long scale_freq, scale_cpu;
> -	int update_util = 0;
>
>  	delta = now - sa->last_update_time;
>  	/*
> @@ -2726,12 +2729,6 @@ __update_load_avg(u64 now, int cpu, struct sched_avg *sa,
>  		return 0;
>  	sa->last_update_time = now;
>
> -	if (cfs_rq) {
> -		if (&rq_of(cfs_rq)->cfs == cfs_rq)
> -			update_util = 1;
> -	} else if (entity_is_task(container_of(sa, struct sched_entity, avg)))
> -		update_util = 1;
> -
>  	scale_freq = arch_scale_freq_capacity(NULL, cpu);
>  	scale_cpu = arch_scale_cpu_capacity(NULL, cpu);
>
> @@ -2757,7 +2754,7 @@ __update_load_avg(u64 now, int cpu, struct sched_avg *sa,
>  			weight * scaled_delta_w;
>  		}
>  	}
> -	if (update_util && running)
> +	if (update_util == 0x3)

How about a define for these masks?

Best,

- Juri