Hi Steve,

On 7 February 2015 at 05:09, Steven Rostedt <rost...@goodmis.org> wrote:
> On Thu, 5 Feb 2015 23:59:33 +0800
>> +
>> +	if (task_running(rq, p) &&
>> +	    cpumask_test_cpu(task_cpu(p), new_mask) &&
>
> Why the check for task_cpu being in new_mask?
If the current CPU of this task is not in the new_mask, it will be
migrated by set_cpus_allowed_ptr() anyway, so we don't need to resched
here.

>
>> +	    cpupri_find(&rq->rd->cpupri, p, NULL)) {
>> +		/*
>> +		 * At this point, current task gets migratable most
>> +		 * likely due to the change of its affinity, let's
>> +		 * figure out if we can migrate it.
>> +		 *
>> +		 * Is there any task with the same priority as that
>> +		 * of current task? If found one, we should resched.
>> +		 * NOTE: The target may be unpushable.
>> +		 */
>> +		if (p->prio == rq->rt.highest_prio.next) {
>> +			/* One target just in pushable_tasks list. */
>> +			requeue_task_rt(rq, p, 0);
>> +			preempt_push = 1;
>> +		} else if (rq->rt.rt_nr_total > 1) {
>> +			struct task_struct *next;
>> +
>> +			requeue_task_rt(rq, p, 0);
>> +			next = peek_next_task_rt(rq);
>> +			if (next != p && next->prio == p->prio)
>> +				preempt_push = 1;
>> +		}
>> +	} else if (!task_running(rq, p))
>> +		direct_push = 1;
>
> We could avoid the second check (!task_running()) by splitting up the
> first if:

OK, I'll adjust it.

>
> 	if (task_running(rq, p)) {
> 		if (cpumask_test_cpu() && cpupri_find()) {
> 		}
> 	} else {
> 		direct push = 1
>
> Also, is the copy of cpus_allowed only done so that cpupri_find is
> called? If so maybe move it in there too:
>
> 	if (task_running(rq, p)) {
> 		if (!cpumask_test_cpu())
> 			goto update;
>
> 		cpumask_copy(&p->cpus_allowed, new_mask);
> 		p->nr_cpus_allowed = new_weight;
>
> 		if (!cpupri_find())
> 			goto update;
>
> 	[...]
>
> This way we avoid the double copy of cpumask unless we truly need to do
> it.

The new_mask is also needed by the direct_push case, so I think it's OK
to keep the copy where it is.

>
>> +	}
>>
>>  	/*
>>  	 * Only update if the process changes its state from whether it
>>  	 * can migrate or not.
>>  	 */
>> -	if ((p->nr_cpus_allowed > 1) == (weight > 1))
>> -		return;
>> -
>> -	rq = task_rq(p);
>> +	if ((old_weight > 1) == (new_weight > 1))
>> +		goto out;
>>
>>  	/*
>>  	 * The process used to be able to migrate OR it can now migrate
>>  	 */
>> -	if (weight <= 1) {
>> +	if (new_weight <= 1) {
>>  		if (!task_current(rq, p))
>>  			dequeue_pushable_task(rq, p);
>>  		BUG_ON(!rq->rt.rt_nr_migratory);
>> @@ -1919,6 +1970,15 @@ static void set_cpus_allowed_rt(struct task_struct *p,
>>  	}
>>
>>  	update_rt_migration(&rq->rt);
>> +
>> +out:
>> +	BUG_ON(direct_push == 1 && preempt_push == 1);
>
> Do we really need this bug on?
>
>> +
>> +	if (direct_push)
>> +		push_rt_tasks(rq);
>> +
>> +	if (preempt_push)
>
> We could make that an "else if" if they really are mutually exclusive.
>

I'll fix those things and resend another version.

Thanks,
Xunlei
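P.S. For reference, a minimal sketch of the split flow discussed above.
This is illustrative only, not the actual v2 patch: the helpers and
fields (cpupri_find(), requeue_task_rt(), peek_next_task_rt(),
rt_nr_total, highest_prio.next) are taken from the quoted hunks, the
single cpumask copy up front reflects the direct_push point above, and
the final resched_curr() for the preempt_push case is an assumption.

static void set_cpus_allowed_rt(struct task_struct *p,
				const struct cpumask *new_mask)
{
	struct rq *rq = task_rq(p);
	int old_weight = p->nr_cpus_allowed;
	int new_weight = cpumask_weight(new_mask);
	bool direct_push = false, preempt_push = false;

	BUG_ON(!rt_task(p));

	if (!task_on_rq_queued(p))
		return;

	/* Copy once; both the direct and the preempt push paths need it. */
	cpumask_copy(&p->cpus_allowed, new_mask);
	p->nr_cpus_allowed = new_weight;

	if (task_running(rq, p)) {
		/*
		 * If the task's current CPU is no longer allowed,
		 * set_cpus_allowed_ptr() will migrate it anyway, so
		 * there is nothing more to do here.
		 */
		if (cpumask_test_cpu(task_cpu(p), new_mask) &&
		    cpupri_find(&rq->rd->cpupri, p, NULL)) {
			if (p->prio == rq->rt.highest_prio.next) {
				/* An equal-prio task sits in pushable_tasks. */
				requeue_task_rt(rq, p, 0);
				preempt_push = true;
			} else if (rq->rt.rt_nr_total > 1) {
				struct task_struct *next;

				requeue_task_rt(rq, p, 0);
				next = peek_next_task_rt(rq);
				if (next != p && next->prio == p->prio)
					preempt_push = true;
			}
		}
	} else {
		/* Not running: it can be pushed away directly. */
		direct_push = true;
	}

	if ((old_weight > 1) != (new_weight > 1)) {
		/* ... rt_nr_migratory/update_rt_migration() as in the quoted hunks ... */
	}

	if (direct_push)
		push_rt_tasks(rq);
	else if (preempt_push)
		resched_curr(rq);	/* assumed follow-up for the preempt case */
}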