On Mon, Feb 17, 2014 at 04:11:09PM +0800, Michael wang wrote:
> BTW, I reproduced it with these steps:
> 1. change current to RT
> 2. move it to a cpu cgroup at a different depth
> 3. change it back to FAIR
> 
> Seems like it was caused by RT having no task_move_group() implementation
> to maintain the depth, which leads to a wrong depth after switching back
> to FAIR...
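
A minimal userspace sketch of those three steps, for anyone who wants to
reproduce this locally. The cgroup path below is an assumption (it needs a
cpu controller mounted at /sys/fs/cgroup/cpu with a nested parent/child
group created beforehand) and the whole thing has to run as root:

#include <stdio.h>
#include <sched.h>
#include <unistd.h>

int main(void)
{
	struct sched_param sp = { .sched_priority = 1 };
	FILE *f;

	/* 1. switch ourselves to an RT class */
	if (sched_setscheduler(0, SCHED_FIFO, &sp))
		perror("SCHED_FIFO");

	/* 2. move into a cpu cgroup at a different nesting depth */
	f = fopen("/sys/fs/cgroup/cpu/parent/child/tasks", "w");
	if (f) {
		fprintf(f, "%d\n", (int)getpid());
		fclose(f);
	}

	/* 3. switch back to the fair class */
	sp.sched_priority = 0;
	if (sched_setscheduler(0, SCHED_OTHER, &sp))
		perror("SCHED_OTHER");

	/* on an unpatched kernel, se->depth still reflects the old group here */
	return 0;
}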


> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 235cfa7..4445e56 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -7317,7 +7317,11 @@ static void switched_from_fair(struct rq *rq, struct task_struct *p)
>   */
>  static void switched_to_fair(struct rq *rq, struct task_struct *p)
>  {
> -     if (!p->se.on_rq)
> +     struct sched_entity *se = &p->se;
> +#ifdef CONFIG_FAIR_GROUP_SCHED
> +     se->depth = se->parent ? se->parent->depth + 1 : 0;
> +#endif
> +     if (!se->on_rq)
>               return;
>  
>       /*
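
For context: ->depth exists so that find_matching_se() can bring two entities
onto the same level before the wakeup-preemption comparison; with a stale
depth left over from the RT round-trip, that walk stops on the wrong cfs_rq
or runs past the root through a NULL parent. Roughly (paraphrased from the
mainline helper, not verbatim from the tree this patch is against):

static void
find_matching_se(struct sched_entity **se, struct sched_entity **pse)
{
	int se_depth = (*se)->depth;
	int pse_depth = (*pse)->depth;

	/* bring the deeper entity up until both are at the same level */
	while (se_depth > pse_depth) {
		se_depth--;
		*se = parent_entity(*se);
	}

	while (pse_depth > se_depth) {
		pse_depth--;
		*pse = parent_entity(*pse);
	}

	/* then climb in lock-step until they share a cfs_rq */
	while (!is_same_group(*se, *pse)) {
		*se = parent_entity(*se);
		*pse = parent_entity(*pse);
	}
}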

Yes indeed. My first idea yesterday was to put it in set_task_rq() to be
absolutely sure we catch all cases; but if this is sufficient, it's better.
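
A rough sketch of that alternative, assuming the 3.14-era layout of
set_task_rq() in kernel/sched/sched.h; untested, shown only to illustrate
where the update would sit so that every group change refreshes the depth
rather than only the class switch:

static inline void set_task_rq(struct task_struct *p, unsigned int cpu)
{
#if defined(CONFIG_CGROUP_SCHED)
	struct task_group *tg = task_group(p);
#endif

#ifdef CONFIG_FAIR_GROUP_SCHED
	p->se.cfs_rq = tg->cfs_rq[cpu];
	p->se.parent = tg->se[cpu];
	/* keep the depth valid no matter which class the task is in */
	p->se.depth  = tg->se[cpu] ? tg->se[cpu]->depth + 1 : 0;
#endif

#ifdef CONFIG_RT_GROUP_SCHED
	p->rt.rt_rq  = tg->rt_rq[cpu];
	p->rt.parent = tg->rt_se[cpu];
#endif
}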

Thanks!