On Fri, Jun 17, 2016 at 02:01:40PM +0200, Peter Zijlstra wrote:
> @@ -8219,6 +8254,19 @@ static int cpu_cgroup_can_attach(struct
>               if (task->sched_class != &fair_sched_class)
>                       return -EINVAL;
>  #endif
> +             /*
> +              * Serialize against wake_up_new_task() such
> +              * that if it's running, we're sure to observe
> +              * its full state.
> +              */
> +             raw_spin_unlock_wait(&task->pi_lock);
> +             /*
> +              * Avoid calling sched_move_task() before wake_up_new_task()
> +              * has happened. This would lead to problems with PELT. See
> +              * XXX.
> +              */
> +             if (task->state == TASK_NEW)
> +                     return -EINVAL;
>       }
>       return 0;
>  }

So I think that's backwards; we want:

                if (task->state == TASK_NEW)
                        return -EINVAL;
                raw_spin_unlock_wait(&task->pi_lock);

Because failing the attach is 'safe', but if we do not observe TASK_NEW we
must be absolutely sure to observe the full effect of wake_up_new_task().
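
To make the pairing explicit, here is a rough sketch of both sides
(simplified, not the exact mainline code; the *_sketch names are made up
for illustration):

        /* Writer side, modelled on wake_up_new_task(): the task leaves
         * TASK_NEW and has its scheduler state (PELT etc.) set up while
         * ->pi_lock is held. */
        void wake_up_new_task_sketch(struct task_struct *p)
        {
                unsigned long flags;

                raw_spin_lock_irqsave(&p->pi_lock, flags);
                p->state = TASK_RUNNING;
                /* ... post_init_entity_util_avg(), activate_task(), ... */
                raw_spin_unlock_irqrestore(&p->pi_lock, flags);
        }

        /* Reader side, with the check first: either we observe TASK_NEW
         * and fail the attach (safe), or we observe a later state, and
         * the unlock_wait() then spins until the critical section above
         * has completed, so the full wakeup is visible before
         * sched_move_task() runs. */
        static int can_attach_check_sketch(struct task_struct *task)
        {
                if (task->state == TASK_NEW)
                        return -EINVAL;
                raw_spin_unlock_wait(&task->pi_lock);
                return 0;
        }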

But since it's not critical, I'll change it to more obvious code:

                raw_spin_lock_irq(&task->pi_lock);
                if (task->state == TASK_NEW)
                        ret = -EINVAL;
                raw_spin_unlock_irq(&task->pi_lock);

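For context, that check would slot into the attach path along these lines
(a sketch of the shape only, not necessarily the committed hunk):

        static int cpu_cgroup_can_attach(struct cgroup_taskset *tset)
        {
                struct task_struct *task;
                struct cgroup_subsys_state *css;
                int ret = 0;

                cgroup_taskset_for_each(task, css, tset) {
        #ifdef CONFIG_RT_GROUP_SCHED
                        if (!sched_rt_can_attach(css_tg(css), task))
                                return -EINVAL;
        #else
                        /* We don't support RT-tasks being in separate groups */
                        if (task->sched_class != &fair_sched_class)
                                return -EINVAL;
        #endif
                        /*
                         * Serialize against wake_up_new_task(): holding
                         * ->pi_lock means we either see TASK_NEW (and refuse
                         * the attach), or we see the task only after the
                         * wakeup has fully completed.
                         */
                        raw_spin_lock_irq(&task->pi_lock);
                        if (task->state == TASK_NEW)
                                ret = -EINVAL;
                        raw_spin_unlock_irq(&task->pi_lock);

                        if (ret)
                                break;
                }
                return ret;
        }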