On 23.08.2012 16:14, p...@google.com wrote:
> From: Paul Turner <p...@google.com>
>
> Now that the machinery is in place to compute contributed load in a
> bottom-up fashion, replace the shares distribution code within
> update_shares() accordingly.
[snip]

>  static int update_shares_cpu(struct task_group *tg, int cpu)
>  {
> +	struct sched_entity *se;
>  	struct cfs_rq *cfs_rq;
>  	unsigned long flags;
>  	struct rq *rq;
>
> -	if (!tg->se[cpu])
> -		return 0;
> -
>  	rq = cpu_rq(cpu);
> +	se = tg->se[cpu];
>  	cfs_rq = tg->cfs_rq[cpu];
>
>  	raw_spin_lock_irqsave(&rq->lock, flags);
>
>  	update_rq_clock(rq);
> -	update_cfs_load(cfs_rq, 1);
>  	update_cfs_rq_blocked_load(cfs_rq, 1);
>
> -	/*
> -	 * We need to update shares after updating tg->load_weight in
> -	 * order to adjust the weight of groups with long running tasks.
> -	 */
> -	update_cfs_shares(cfs_rq);
> +	if (se) {
> +		update_entity_load_avg(se, 1);
> +		/*
> +		 * We can pivot on the runnable average decaying to zero for
> +		 * list removal since the parent average will always be >=
> +		 * child.
> +		 */
> +		if (se->avg.runnable_avg_sum)
> +			update_cfs_shares(cfs_rq);
> +		else
> +			list_del_leaf_cfs_rq(cfs_rq);

The blocked load, which we decay from this function, is not part of
se->avg.runnable_avg_sum. Is list removal a good idea while there might
still be blocked load? We only get here because we are on that list...
don't we end up with a wrong task group load then?

Regards,
Jan

> +	} else {
> +		update_rq_runnable_avg(rq, rq->nr_running);
> +	}
>
>  	raw_spin_unlock_irqrestore(&rq->lock, flags);

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/