mask is built in build_balance_mask() by iterating with for_each_cpu(i, sg_span),
so it is necessarily a subset of sched_group_span(sg). While cpumask_first_and()
does not pick a wrong balance CPU, AND-ing with sched_group_span(sg) again is
redundant; cpumask_first(mask) is enough.
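
For reference, a simplified sketch of build_balance_mask() (the per-CPU
sibling-domain checks that gate whether each bit is actually set are elided
here, so this is an illustration rather than the exact upstream code). The
point is that bits are only ever set for CPUs drawn from sg_span, so mask is
by construction a subset of sched_group_span(sg):

	static void
	build_balance_mask(struct sched_domain *sd, struct sched_group *sg,
			   struct cpumask *mask)
	{
		const struct cpumask *sg_span = sched_group_span(sg);
		int i;

		cpumask_clear(mask);

		for_each_cpu(i, sg_span) {
			/* ... sibling-domain checks elided ... */
			cpumask_set_cpu(i, mask);
		}
		/* mask can only contain CPUs from sched_group_span(sg) */
	}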

Signed-off-by: Barry Song <song.bao....@hisilicon.com>
Reviewed-by: Valentin Schneider <valentin.schnei...@arm.com>
---
 -v2: added Valentin's Reviewed-by tag, thanks!

 kernel/sched/topology.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index f2066d682cd8..d1aec244c027 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -934,7 +934,7 @@ static void init_overlap_sched_group(struct sched_domain *sd,
        int cpu;
 
        build_balance_mask(sd, sg, mask);
-       cpu = cpumask_first_and(sched_group_span(sg), mask);
+       cpu = cpumask_first(mask);
 
        sg->sgc = *per_cpu_ptr(sdd->sgc, cpu);
        if (atomic_inc_return(&sg->sgc->ref) == 1)
-- 
2.25.1
