On Fri, Aug 28, 2020 at 03:51:09PM -0400, Julien Desfossez wrote:
> +     smt_weight = cpumask_weight(smt_mask);

> +             for_each_cpu_wrap_or(i, smt_mask, cpumask_of(cpu), cpu) {
> +                     struct rq *rq_i = cpu_rq(i);
> +                     struct task_struct *p;
> +
> +                     /*
> +                     /*
> +                      * During hotplug online a sibling can be added in
> +                      * the smt_mask while we are here. If so, we would
> +                      * need to restart selection by resetting all over.
> +                      */
> +                     if (unlikely(smt_weight != cpumask_weight(smt_mask)))
> +                             goto retry_select;

cpumask_weight() is fairly expensive, especially for something that should
'never' happen.
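
For reference, cpumask_weight() boils down to a full popcount over the
mask; roughly this (from include/linux/cpumask.h, not from this patch), so
every call walks all the words of the cpumask:

	static inline unsigned int cpumask_weight(const struct cpumask *srcp)
	{
		/* popcount over nr_cpumask_bits, i.e. every word of the mask */
		return bitmap_weight(cpumask_bits(srcp), nr_cpumask_bits);
	}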

What exactly is the race here?

We'll update the cpu_smt_mask() fairly early in secondary bringup, but
where does it become a problem?

The moment the new thread starts scheduling, it'll block on the common
rq->lock, and then it'll cycle task_seq and do a new pick.
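
Rough sketch of the sequence-count scheme I mean; the field names
(core_task_seq, core_pick_seq) follow the core-scheduling series, but the
exact shape here is from memory, not this patch:

	/*
	 * Any enqueue (e.g. by a freshly onlined sibling) bumps the
	 * core-wide task sequence, invalidating the last pick.
	 */
	static void sched_core_enqueue(struct rq *rq, struct task_struct *p)
	{
		rq->core->core_task_seq++;
		/* ... insert p into the core-wide rbtree ... */
	}

	/*
	 * The picker runs with rq->lock held; if the sequence moved since
	 * the last core-wide selection, it simply picks again, so the new
	 * sibling's task is seen as soon as that sibling starts scheduling.
	 */
	if (rq->core->core_pick_seq != rq->core->core_task_seq) {
		rq->core->core_pick_seq = rq->core->core_task_seq;
		/* ... redo the core-wide selection ... */
	}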

So where do things go side-ways?

Can we please split out this hotplug 'fix' into a separate patch with a
coherent changelog.
