On Wed, Jun 04, 2014 at 04:25:15PM +0800, Lai Jiangshan wrote:
> I think the following code works (inspired by the sched_ttwu_pending()
> call in migration_call()).
> 
> If p->on_rq == 0 && p->state == TASK_WAKING in __migrate_task() after
> this patch, it means cpus_allowed was changed before __migrate_task(),
> along with other scheduler movements happening between
> sched_ttwu_pending() and __migrate_task().

Yes, this is better. Can you send a full patch with the below comment
included as well? Then I can apply it.

Thanks!

> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 268a45e..277f3bc 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -4530,7 +4530,7 @@ int set_cpus_allowed_ptr(struct task_struct *p, const struct cpumask *new_mask)
>               goto out;
>  
>       dest_cpu = cpumask_any_and(cpu_active_mask, new_mask);
> -     if (p->on_rq) {
> +     if (p->on_rq || p->state == TASK_WAKING) {
>               struct migration_arg arg = { p, dest_cpu };
>               /* Need help from migration thread: drop lock and wait. */
>               task_rq_unlock(rq, p, &flags);
> @@ -4656,6 +4656,7 @@ static int migration_cpu_stop(void *data)
>        * be on another cpu but it doesn't matter.
>        */
>       local_irq_disable();

        /*
         * We need to explicitly wake pending tasks before running
         * __migrate_task() such that we will not miss enforcing
         * cpus_allowed during wakeups, see set_cpus_allowed_ptr()'s
         * TASK_WAKING test.
         */

> +     sched_ttwu_pending();
>       __migrate_task(arg->task, raw_smp_processor_id(), arg->dest_cpu);
>       local_irq_enable();
>       return 0;

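For completeness, a minimal user-space sketch of the window this closes,
assuming the race described above; the struct, constants and helper below
are illustrative stand-ins, not the kernel's definitions:

#include <stdbool.h>
#include <stdio.h>

/* Illustrative stand-ins for the kernel's task state values. */
enum { TASK_RUNNING, TASK_WAKING };

struct task {
	bool on_rq;	/* queued on a runqueue?                    */
	int  state;	/* TASK_WAKING while sitting on a wake_list */
};

/*
 * The test set_cpus_allowed_ptr() uses to decide whether to hand the
 * task to the migration thread (migration_cpu_stop()).
 */
static bool kick_migration_thread(const struct task *p, bool patched)
{
	if (patched)
		return p->on_rq || p->state == TASK_WAKING;
	return p->on_rq;		/* pre-patch test */
}

int main(void)
{
	/*
	 * try_to_wake_up() has queued the task on a remote CPU's
	 * wake_list: p->state == TASK_WAKING, p->on_rq == 0.  Now
	 * set_cpus_allowed_ptr() excludes that CPU from the mask.
	 */
	struct task p = { .on_rq = false, .state = TASK_WAKING };

	/* 0: the pending wakeup would land on a now-forbidden CPU */
	printf("unpatched kicks migration thread: %d\n",
	       kick_migration_thread(&p, false));

	/* 1: migration_cpu_stop() runs sched_ttwu_pending(), then migrates */
	printf("patched   kicks migration thread: %d\n",
	       kick_migration_thread(&p, true));

	return 0;
}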