On 01/14/2013 09:59 PM, Morten Rasmussen wrote:
> On Fri, Jan 11, 2013 at 03:30:30AM +0000, Alex Shi wrote:
>> On 01/10/2013 07:40 PM, Morten Rasmussen wrote:
>>>>>  #undef P64
>>>>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>>>>> index ee015b8..7bfbd69 100644
>>>>> --- a/kernel/sched/fair.c
>>>>> +++ b/kernel/sched/fair.c
>>>>> @@ -1495,8 +1495,12 @@ static void update_cfs_rq_blocked_load(struct cfs_rq *cfs_rq, int force_update)
>>>>>
>>>>>  static inline void update_rq_runnable_avg(struct rq *rq, int runnable)
>>>>>  {
>>>>> +	u32 period;
>>>>>  	__update_entity_runnable_avg(rq->clock_task, &rq->avg, runnable);
>>>>>  	__update_tg_runnable_avg(&rq->avg, &rq->cfs);
>>>>> +
>>>>> +	period = rq->avg.runnable_avg_period ? rq->avg.runnable_avg_period : 1;
>>>>> +	rq->util = rq->avg.runnable_avg_sum * 100 / period;
>>> The existing tg->runnable_avg and cfs_rq->tg_runnable_contrib variables
>>> both hold rq->avg.runnable_avg_sum / rq->avg.runnable_avg_period scaled
>>> by NICE_0_LOAD (1024). Why not use one of the existing variables instead
>>> of introducing a new one?
>>
>> We want an rq variable that reflects the utilization of the cpu, not of
>> the tg.
>
> It is the same thing for the root tg. You use exactly the same variables
> for calculating rq->util as are used to calculate both tg->runnable_avg
> and cfs_rq->tg_runnable_contrib in __update_tg_runnable_avg(). The only
> difference is that you scale by 100 while __update_tg_runnable_avg()
> scales by NICE_0_LOAD.
Yes, the root tg->runnable_avg has the same meaning, but a normal tg does
not. More importantly, tg->runnable_avg is hidden behind
CONFIG_FAIR_GROUP_SCHED, while rq->util needs to be available
unconditionally.