Hi,

> On Jul 23, 2020, at 9:57 AM, Li, Aubrey <aubrey...@linux.intel.com> wrote:
> 
> On 2020/7/22 22:32, benbjiang(蒋彪) wrote:
>> Hi,
>> 
>>> On Jul 22, 2020, at 8:13 PM, Li, Aubrey <aubrey...@linux.intel.com> wrote:
>>> 
>>> On 2020/7/22 16:54, benbjiang(蒋彪) wrote:
>>>> Hi, Aubrey,
>>>> 
>>>>> On Jul 1, 2020, at 5:32 AM, Vineeth Remanan Pillai <vpil...@digitalocean.com> wrote:
>>>>> 
>>>>> From: Aubrey Li <aubrey...@intel.com>
>>>>> 
>>>>> - Don't migrate if there is a cookie mismatch
>>>>>   Load balance tries to move task from busiest CPU to the
>>>>>   destination CPU. When core scheduling is enabled, if the
>>>>>   task's cookie does not match with the destination CPU's
>>>>>   core cookie, this task will be skipped by this CPU. This
>>>>>   mitigates the forced idle time on the destination CPU.
>>>>> 
>>>>> - Select cookie matched idle CPU
>>>>>   In the fast path of task wakeup, select the first cookie matched
>>>>>   idle CPU instead of the first idle CPU.
>>>>> 
>>>>> - Find cookie matched idlest CPU
>>>>>   In the slow path of task wakeup, find the idlest CPU whose core
>>>>>   cookie matches with task's cookie
>>>>> 
>>>>> - Don't migrate task if cookie not match
>>>>>   For the NUMA load balance, don't migrate task to the CPU whose
>>>>>   core cookie does not match with task's cookie
>>>>> 
>>>>> Signed-off-by: Aubrey Li <aubrey...@linux.intel.com>
>>>>> Signed-off-by: Tim Chen <tim.c.c...@linux.intel.com>
>>>>> Signed-off-by: Vineeth Remanan Pillai <vpil...@digitalocean.com>
>>>>> ---
>>>>> kernel/sched/fair.c  | 64 ++++++++++++++++++++++++++++++++++++++++----
>>>>> kernel/sched/sched.h | 29 ++++++++++++++++++++
>>>>> 2 files changed, 88 insertions(+), 5 deletions(-)
>>>>> 
>>>>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>>>>> index d16939766361..33dc4bf01817 100644
>>>>> --- a/kernel/sched/fair.c
>>>>> +++ b/kernel/sched/fair.c
>>>>> @@ -2051,6 +2051,15 @@ static void task_numa_find_cpu(struct task_numa_env *env,
>>>>>                 if (!cpumask_test_cpu(cpu, env->p->cpus_ptr))
>>>>>                         continue;
>>>>> 
>>>>> +#ifdef CONFIG_SCHED_CORE
>>>>> +               /*
>>>>> +                * Skip this cpu if source task's cookie does not match
>>>>> +                * with CPU's core cookie.
>>>>> +                */
>>>>> +               if (!sched_core_cookie_match(cpu_rq(cpu), env->p))
>>>>> +                       continue;
>>>>> +#endif
>>>>> +
>>>>>                 env->dst_cpu = cpu;
>>>>>                 if (task_numa_compare(env, taskimp, groupimp, maymove))
>>>>>                         break;
>>>>> @@ -5963,11 +5972,17 @@ find_idlest_group_cpu(struct sched_group *group, struct task_struct *p, int this
>>>>> 
>>>>>         /* Traverse only the allowed CPUs */
>>>>>         for_each_cpu_and(i, sched_group_span(group), p->cpus_ptr) {
>>>>> +               struct rq *rq = cpu_rq(i);
>>>>> +
>>>>> +#ifdef CONFIG_SCHED_CORE
>>>>> +               if (!sched_core_cookie_match(rq, p))
>>>>> +                       continue;
>>>>> +#endif
>>>>> +
>>>>>                 if (sched_idle_cpu(i))
>>>>>                         return i;
>>>>> 
>>>>>                 if (available_idle_cpu(i)) {
>>>>> -                       struct rq *rq = cpu_rq(i);
>>>>>                         struct cpuidle_state *idle = idle_get_state(rq);
>>>>>                         if (idle && idle->exit_latency < min_exit_latency) {
>>>>>                                 /*
>>>>> @@ -6224,8 +6239,18 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
>>>>>         for_each_cpu_wrap(cpu, cpus, target) {
>>>>>                 if (!--nr)
>>>>>                         return -1;
>>>>> -               if (available_idle_cpu(cpu) || sched_idle_cpu(cpu))
>>>>> -                       break;
>>>>> +
>>>>> +               if (available_idle_cpu(cpu) || sched_idle_cpu(cpu)) {
>>>>> +#ifdef CONFIG_SCHED_CORE
>>>>> +                       /*
>>>>> +                        * If Core Scheduling is enabled, select this cpu
>>>>> +                        * only if the process cookie matches core cookie.
>>>>> +                        */
>>>>> +                       if (sched_core_enabled(cpu_rq(cpu)) &&
>>>>> +                           p->core_cookie == cpu_rq(cpu)->core->core_cookie)
>>>> Why not also add similar logic in select_idle_smt to reduce forced-idle? :)
>>> We hit select_idle_smt after we scanned the entire LLC domain for idle cores
>>> and idle cpus and failed, so IMHO, an idle smt is probably a good choice under
>>> this scenario.
>> 
>> AFAIC, selecting an idle sibling with an unmatched cookie will cause unnecessary
>> forced-idle, unfairness and latency, compared to choosing the *target* cpu.
> Choosing the target cpu could increase the runnable task number on the target
> runqueue; this could trigger the busiest->nr_running > 1 logic and make the idle
> sibling try to pull but not succeed (due to cookie mismatch). Putting the task on
> the idle sibling is relatively stable IMHO.

I'm afraid that *unsuccessful* pulls between SMT siblings would not cause
instability, because load balancing always runs periodically, and an
unsuccessful pull simply means nothing happens. On the contrary, unmatched
sibling tasks running concurrently could force each other idle repeatedly,
which is more unstable, and more costly since pick_next_task() has to run for
all siblings. Considering that load balancing is currently not fully aware of
core scheduling and cannot improve the *unmatched sibling* case, the
*find_idlest_** path should try its best to avoid that case, IMHO.
Also, this is just a suggestion and an option. :)

Thx.
Regards,
Jiang

> 
>> Besides, choosing the *target* cpu may be more cache friendly. So IMHO, the
>> *target* cpu may be a better choice if the cookie does not match, instead of
>> the idle sibling.
> I'm not sure if it's more cache friendly, as the target is busy and the coming
> task is a cookie-unmatched task.
> 
>> 
>>> 
>>>> 
>>>>> +#endif
>>>>> +                               break;
>>>>> +               }
>>>>>         }
>>>>> 
>>>>>         time = cpu_clock(this) - time;
>>>>> @@ -7609,8 +7634,9 @@ int can_migrate_task(struct task_struct *p, struct lb_env *env)
>>>>>          * We do not migrate tasks that are:
>>>>>          * 1) throttled_lb_pair, or
>>>>>          * 2) cannot be migrated to this CPU due to cpus_ptr, or
>>>>> -        * 3) running (obviously), or
>>>>> -        * 4) are cache-hot on their current CPU.
>>>>> +        * 3) task's cookie does not match with this CPU's core cookie
>>>>> +        * 4) running (obviously), or
>>>>> +        * 5) are cache-hot on their current CPU.
>>>>>          */
>>>>>         if (throttled_lb_pair(task_group(p), env->src_cpu, env->dst_cpu))
>>>>>                 return 0;
>>>>> @@ -7645,6 +7671,15 @@ int can_migrate_task(struct task_struct *p, struct lb_env *env)
>>>>>                 return 0;
>>>>>         }
>>>>> 
>>>>> +#ifdef CONFIG_SCHED_CORE
>>>>> +       /*
>>>>> +        * Don't migrate task if the task's cookie does not match
>>>>> +        * with the destination CPU's core cookie.
>>>>> +        */
>>>>> +       if (!sched_core_cookie_match(cpu_rq(env->dst_cpu), p))
>>>>> +               return 0;
>>>>> +#endif
>>>>> +
>>>>>         /* Record that we found atleast one task that could run on dst_cpu */
>>>>>         env->flags &= ~LBF_ALL_PINNED;
>>>>> 
>>>>> @@ -8857,6 +8892,25 @@ find_idlest_group(struct sched_domain *sd, struct task_struct *p,
>>>>>                                         p->cpus_ptr))
>>>>>                         continue;
>>>>> 
>>>>> +#ifdef CONFIG_SCHED_CORE
>>>>> +               if (sched_core_enabled(cpu_rq(this_cpu))) {
>>>>> +                       int i = 0;
>>>>> +                       bool cookie_match = false;
>>>>> +
>>>>> +                       for_each_cpu(i, sched_group_span(group)) {
>>>> Should we consider the p->cpus_ptr here? like,
>>>>                        for_each_cpu_and(i, sched_group_span(group),
>>>>                                         p->cpus_ptr) {
>>> 
>>> This is already considered just above #ifdef CONFIG_SCHED_CORE, but not
>>> included in the patch file.
>>> 
>>> Thanks,
>>> -Aubrey
>> 
>> The above consideration is,
>> 8893                /* Skip over this group if it has no CPUs allowed */
>> 8894                if (!cpumask_intersects(sched_group_span(group),
>> 8895                                        p->cpus_ptr))
>> 8896                        continue;
>> 8897
>> It only skips the group when *p is not allowed on any CPU of the group*,
>> which is not enough.
>> If cpumask_subset(p->cpus_ptr, sched_group_span(group)), the following
>> sched_core_cookie_match() may choose a *wrong (not allowed)* cpu to match the
>> cookie. In that case, the matching result could be confusing and lead to a
>> wrong result.
>> On the other hand, considering p->cpus_ptr here could reduce the number of
>> loop iterations and their cost whenever the intersection of p->cpus_ptr and
>> sched_group_span(group) is a proper subset of sched_group_span(group).
> 
> Though find_idlest_group_cpu() will check p->cpus_ptr again, I believe this
> is a good catch and should be fixed in the next iteration.
> 
> Thanks,
> -Aubrey
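
For reference, the whole exchange above hinges on sched_core_cookie_match(),
which is defined in the kernel/sched/sched.h hunk that is not quoted here. The
following is only a rough sketch of what such a helper could look like,
inferred from the call sites in the quoted hunks (a core-wide cookie hanging
off rq->core, and an idle core treated as matching any cookie); the actual
patch may differ in the details:

#ifdef CONFIG_SCHED_CORE
/*
 * Sketch only: return true if @p can be placed on @rq without a cookie
 * conflict, i.e. either the whole core is idle or the task's cookie
 * equals the core-wide cookie.
 */
static inline bool sched_core_cookie_match(struct rq *rq, struct task_struct *p)
{
	bool idle_core = true;
	int cpu;

	/* Ignore the cookie check if core scheduling is not enabled. */
	if (!sched_core_enabled(rq))
		return true;

	/* Check whether every SMT sibling of this runqueue's CPU is idle. */
	for_each_cpu(cpu, cpu_smt_mask(cpu_of(rq))) {
		if (!available_idle_cpu(cpu)) {
			idle_core = false;
			break;
		}
	}

	/* An idle core can accept any cookie; otherwise cookies must match. */
	return idle_core || rq->core->core_cookie == p->core_cookie;
}
#endif

Assuming a helper along those lines, the for_each_cpu_and() change suggested
for find_idlest_group() might slot in as below. The loop body is not quoted
above, so this is only a guess at its shape; the point is that restricting the
scan to p->cpus_ptr prevents a cookie match on a disallowed CPU from selecting
the group:

#ifdef CONFIG_SCHED_CORE
		if (sched_core_enabled(cpu_rq(this_cpu))) {
			bool cookie_match = false;
			int i;

			/* Only scan CPUs the task is actually allowed to run on. */
			for_each_cpu_and(i, sched_group_span(group), p->cpus_ptr) {
				if (sched_core_cookie_match(cpu_rq(i), p)) {
					cookie_match = true;
					break;
				}
			}
			/* Skip this group if no allowed CPU matches the task's cookie. */
			if (!cookie_match)
				continue;
		}
#endif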