On Fri, Mar 20, 2015 at 06:40:39PM +0000, Sai Gurrappadi wrote:
> On 02/04/2015 10:31 AM, Morten Rasmussen wrote:
> > +/*
> > + * sched_group_energy(): Returns absolute energy consumption of cpus belonging
> > + * to the sched_group including shared resources shared only by members of the
> > + * group. Iterates over all cpus in the hierarchy below the sched_group starting
> > + * from the bottom working it's way up before going to the next cpu until all
> > + * cpus are covered at all levels. The current implementation is likely to
> > + * gather the same usage statistics multiple times. This can probably be done in
> > + * a faster but more complex way.
> > + */
> > +static unsigned int sched_group_energy(struct sched_group *sg_top)
> > +{
> > +        struct sched_domain *sd;
> > +        int cpu, total_energy = 0;
> > +        struct cpumask visit_cpus;
> > +        struct sched_group *sg;
> > +
> > +        WARN_ON(!sg_top->sge);
> > +
> > +        cpumask_copy(&visit_cpus, sched_group_cpus(sg_top));
> > +
> > +        while (!cpumask_empty(&visit_cpus)) {
> > +                struct sched_group *sg_shared_cap = NULL;
> > +
> > +                cpu = cpumask_first(&visit_cpus);
> > +
> > +                /*
> > +                 * Is the group utilization affected by cpus outside this
> > +                 * sched_group?
> > +                 */
> > +                sd = highest_flag_domain(cpu, SD_SHARE_CAP_STATES);
> > +                if (sd && sd->parent)
> > +                        sg_shared_cap = sd->parent->groups;
> > +
> > +                for_each_domain(cpu, sd) {
> > +                        sg = sd->groups;
> > +
> > +                        /* Has this sched_domain already been visited? */
> > +                        if (sd->child && cpumask_first(sched_group_cpus(sg)) != cpu)
> > +                                break;
> > +
> > +                        do {
> > +                                struct sched_group *sg_cap_util;
> > +                                unsigned group_util;
> > +                                int sg_busy_energy, sg_idle_energy;
> > +                                int cap_idx;
> > +
> > +                                if (sg_shared_cap && sg_shared_cap->group_weight >= sg->group_weight)
> > +                                        sg_cap_util = sg_shared_cap;
> > +                                else
> > +                                        sg_cap_util = sg;
> > +
> > +                                cap_idx = find_new_capacity(sg_cap_util, sg->sge);
> > +                                group_util = group_norm_usage(sg);
> > +                                sg_busy_energy = (group_util * sg->sge->cap_states[cap_idx].power)
> > +                                                        >> SCHED_CAPACITY_SHIFT;
> > +                                sg_idle_energy = ((SCHED_LOAD_SCALE-group_util) * sg->sge->idle_states[0].power)
> > +                                                        >> SCHED_CAPACITY_SHIFT;
> > +
> > +                                total_energy += sg_busy_energy + sg_idle_energy;
> 
> Should normalize group_util with the newly found capacity instead of
> capacity_curr.
You're right. In the next patch, when sched_group_energy() can be used for
energy predictions based on usage deltas, group_util should be normalized to
the new capacity. Thanks for spotting this mistake.

Morten
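
For illustration only, here is a minimal userspace sketch of the normalization
being discussed, with made-up names and numbers (group_norm_usage() here takes
explicit arguments, unlike the patch, and the capacity/power values are
arbitrary): raw usage is scaled against the capacity of the state picked by
find_new_capacity(), i.e. cap_states[cap_idx].cap, rather than against
capacity_curr, so the busy/idle weighting matches the power figure chosen for
the same index.

#include <stdio.h>

#define SCHED_CAPACITY_SHIFT    10
#define SCHED_LOAD_SCALE        (1 << SCHED_CAPACITY_SHIFT)

/*
 * Hypothetical stand-in for one entry of sg->sge->cap_states[]; the real
 * values come from the platform's energy model tables.
 */
struct cap_state {
        unsigned long cap;      /* compute capacity at this capacity state */
        unsigned long power;    /* busy power at this capacity state */
};

/*
 * Normalize raw group usage against the capacity of the newly selected
 * capacity state (what the review comment asks for) instead of the current
 * capacity. Result is in the [0..SCHED_LOAD_SCALE] range.
 */
static unsigned long group_norm_usage(unsigned long usage,
                                      const struct cap_state *cs)
{
        return (usage << SCHED_CAPACITY_SHIFT) / cs->cap;
}

int main(void)
{
        /* Example state: half of max capacity, arbitrary power numbers. */
        struct cap_state cs = { .cap = 512, .power = 400 };
        unsigned long usage = 256;      /* raw usage of the group */
        unsigned long idle_power = 10;  /* idle_states[0].power stand-in */

        unsigned long group_util = group_norm_usage(usage, &cs);
        unsigned long busy = (group_util * cs.power) >> SCHED_CAPACITY_SHIFT;
        unsigned long idle = ((SCHED_LOAD_SCALE - group_util) * idle_power)
                                                >> SCHED_CAPACITY_SHIFT;

        printf("group_util=%lu busy=%lu idle=%lu total=%lu\n",
               group_util, busy, idle, busy + idle);
        return 0;
}

With usage = 256 at cap = 512 this gives group_util = 512, busy = 200 and
idle = 5 in the example above.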