2016-09-12 15:47 GMT+08:00 Vincent Guittot:
> When a task moves from/to a cfs_rq, we set a flag which is then used to
> propagate the change at the parent level (sched_entity and cfs_rq) during
> the next update. If the cfs_rq is throttled, the flag will stay pending
> until the cfs_rq is unthrottled.
>
>
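A minimal sketch of the flag mechanism described above (the field name
propagate_avg and both helper names are assumptions drawn from this
description, not necessarily the exact patch):

	/*
	 * Mark the group cfs_rq so that the next update folds the
	 * attach/detach delta into the parent. If the cfs_rq is
	 * throttled, no update runs, so the flag simply stays pending
	 * until the cfs_rq is unthrottled.
	 */
	static void set_tg_cfs_propagate(struct cfs_rq *cfs_rq)
	{
		cfs_rq->propagate_avg = 1;
	}

	/* Consumed by the next PELT update of the group entity. */
	static int test_and_clear_tg_cfs_propagate(struct sched_entity *se)
	{
		struct cfs_rq *cfs_rq = group_cfs_rq(se);

		if (!cfs_rq->propagate_avg)
			return 0;

		cfs_rq->propagate_avg = 0;
		return 1;
	}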
On Thu, Sep 15, 2016 at 06:36:53PM +0100, Dietmar Eggemann wrote:
> > We did however lose a lot on why and how min(1, runnable_avg) is a
> > sensible thing to do...
>
> Do you refer to the big comment on top of this if condition in the old
> code in __update_group_entity_contrib()? The last two
On 15/09/16 16:14, Peter Zijlstra wrote:
> On Thu, Sep 15, 2016 at 02:11:49PM +0100, Dietmar Eggemann wrote:
>> On 12/09/16 08:47, Vincent Guittot wrote:
>
>>> +/* Take into account change of load of a child task group */
>>> +static inline void
>>> +update_tg_cfs_load(struct cfs_rq *cfs_rq, struct sched_entity *se)
On 15/09/16 15:31, Vincent Guittot wrote:
> On 15 September 2016 at 15:11, Dietmar Eggemann wrote:
[...]
>> Wasn't 'consuming <1' related to 'NICE_0_LOAD' and not
>> scale_load_down(gcfs_rq->tg->shares) before the rewrite of PELT (v4.2,
>> __update_group_entity_contrib())?
>
> Yes, before the rewrite it was related to NICE_0_LOAD.
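As a standalone illustration of that change, a toy userspace demo of the
new scaling (illustrative values, not kernel code): the group entity's
load is the group cfs_rq's load scaled into the tg's shares, so it is
bounded by scale_load_down(tg->shares) rather than by NICE_0_LOAD.

	#include <stdio.h>

	int main(void)
	{
		long gcfs_load = 512;	/* this CPU's group cfs_rq load_avg */
		long tg_load = 1024;	/* tg's load_avg summed over all CPUs */
		long shares = 1024;	/* scale_load_down(tg->shares) */

		/* half of the tg's load runs here -> half of the shares */
		long se_load = gcfs_load * shares / tg_load;

		printf("group entity load_avg = %ld\n", se_load); /* 512 */
		return 0;
	}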
On Thu, Sep 15, 2016 at 02:11:49PM +0100, Dietmar Eggemann wrote:
> On 12/09/16 08:47, Vincent Guittot wrote:
> > +/* Take into account change of load of a child task group */
> > +static inline void
> > +update_tg_cfs_load(struct cfs_rq *cfs_rq, struct sched_entity *se)
> > +{
> > + struct cfs_rq *gcfs_rq = group_cfs_rq(se);
On 15 September 2016 at 16:43, Peter Zijlstra wrote:
> On Mon, Sep 12, 2016 at 09:47:49AM +0200, Vincent Guittot wrote:
>> +static inline void
>> +update_tg_cfs_load(struct cfs_rq *cfs_rq, struct sched_entity *se)
>> +{
>> + struct cfs_rq *gcfs_rq = group_cfs_rq(se);
>> + long delta, load = gcfs_rq->avg.load_avg;
On Mon, Sep 12, 2016 at 09:47:49AM +0200, Vincent Guittot wrote:
> +static inline void
> +update_tg_cfs_load(struct cfs_rq *cfs_rq, struct sched_entity *se)
> +{
> + struct cfs_rq *gcfs_rq = group_cfs_rq(se);
> + long delta, load = gcfs_rq->avg.load_avg;
> +
> + /* If the load of group cfs_rq is null, the load of the
> + * sched_entity will also be null so we can skip the formula
> + */
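Pieced together from the fragments quoted across this thread, the
function continues roughly as below (a sketch: the *_sum updates and the
clamping of the parent's sums are simplified and may differ from the
posted patch):

	if (load) {
		long tg_load;

		/* Get tg's total load; the +1 avoids a division by zero */
		tg_load = atomic_long_read(&gcfs_rq->tg->load_avg) + 1;

		/* Swap the stale contribution for the current load */
		tg_load -= gcfs_rq->tg_load_avg_contrib;
		tg_load += load;

		/* Scale gcfs_rq's load into tg's shares */
		load *= scale_load_down(gcfs_rq->tg->shares);
		load /= tg_load;
	}

	delta = load - se->avg.load_avg;

	/* Set the entity's new load and mirror the delta to the parent */
	se->avg.load_avg = load;
	cfs_rq->avg.load_avg += delta;
	}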
On 15 September 2016 at 15:11, Dietmar Eggemann wrote:
> On 12/09/16 08:47, Vincent Guittot wrote:
>> When a task moves from/to a cfs_rq, we set a flag which is then used to
>> propagate the change at the parent level (sched_entity and cfs_rq) during
>> the next update. If the cfs_rq is throttled, the flag will stay pending
>> until the cfs_rq is unthrottled.
On 12/09/16 08:47, Vincent Guittot wrote:
> When a task moves from/to a cfs_rq, we set a flag which is then used to
> propagate the change at the parent level (sched_entity and cfs_rq) during
> the next update. If the cfs_rq is throttled, the flag will stay pending
> until the cfs_rq is unthrottled.
>
> F
On 15 September 2016 at 14:59, Peter Zijlstra wrote:
> On Mon, Sep 12, 2016 at 09:47:49AM +0200, Vincent Guittot wrote:
>> + /* If the load of group cfs_rq is null, the load of the
>> + * sched_entity will also be null so we can skip the formula
>> + */
>
> https://lkml.kernel.org/r/
On 15 September 2016 at 14:55, Peter Zijlstra wrote:
> On Mon, Sep 12, 2016 at 09:47:49AM +0200, Vincent Guittot wrote:
>> +/* Take into account change of utilization of a child task group */
>> +static inline void
>> +update_tg_cfs_util(struct cfs_rq *cfs_rq, struct sched_entity *se)
>> +{
>> + struct cfs_rq *gcfs_rq = group_cfs_rq(se);
On Mon, Sep 12, 2016 at 09:47:49AM +0200, Vincent Guittot wrote:
> +/* Take into account change of utilization of a child task group */
> +static inline void
> +update_tg_cfs_util(struct cfs_rq *cfs_rq, struct sched_entity *se)
> +{
> + struct cfs_rq *gcfs_rq = group_cfs_rq(se);
> + long delta = gcfs_rq->avg.util_avg - se->avg.util_avg;
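The utilization side plausibly continues as below (a sketch assuming the
same structure as the code that was eventually merged; LOAD_AVG_MAX is
the PELT geometric-series maximum). Note that, unlike load, utilization
propagates 1:1 rather than being scaled by tg->shares, since it tracks
CPU capacity rather than weight:

	/* Nothing to propagate */
	if (!delta)
		return;

	/* Set the group entity's new utilization */
	se->avg.util_avg = gcfs_rq->avg.util_avg;
	se->avg.util_sum = se->avg.util_avg * LOAD_AVG_MAX;

	/* And mirror the delta into the parent cfs_rq */
	cfs_rq->avg.util_avg += delta;
	cfs_rq->avg.util_sum = cfs_rq->avg.util_avg * LOAD_AVG_MAX;
	}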
On Mon, Sep 12, 2016 at 09:47:49AM +0200, Vincent Guittot wrote:
> + /* If the load of group cfs_rq is null, the load of the
> + * sched_entity will also be null so we can skip the formula
> + */
https://lkml.kernel.org/r/ca+55afyqyjerovmssosks7pesszbr4vnp-3quuwhqk4a4_j...@mail.gmail