> On Feb 16, 2021, at 4:04 AM, Peter Zijlstra <pet...@infradead.org> wrote:
> 
> On Tue, Feb 09, 2021 at 02:16:46PM -0800, Nadav Amit wrote:
>> @@ -894,17 +911,12 @@ EXPORT_SYMBOL(on_each_cpu_mask);
>> void on_each_cpu_cond_mask(smp_cond_func_t cond_func, smp_call_func_t func,
>>                         void *info, bool wait, const struct cpumask *mask)
>> {
>> -    int cpu = get_cpu();
>> +    unsigned int scf_flags = SCF_RUN_LOCAL;
>> 
>> -    smp_call_function_many_cond(mask, func, info, wait, cond_func);
>> -    if (cpumask_test_cpu(cpu, mask) && cond_func(cpu, info)) {
>> -            unsigned long flags;
>> +    if (wait)
>> +            scf_flags |= SCF_WAIT;
>> 
>> -            local_irq_save(flags);
>> -            func(info);
>> -            local_irq_restore(flags);
>> -    }
>> -    put_cpu();
>> +    smp_call_function_many_cond(mask, func, info, scf_flags, cond_func);
>> }
>> EXPORT_SYMBOL(on_each_cpu_cond_mask);
> 
> You lost the preempt_disable() there, I've added it back:
> 
> ---
> --- a/kernel/smp.c
> +++ b/kernel/smp.c
> @@ -920,7 +920,9 @@ void on_each_cpu_cond_mask(smp_cond_func
>       if (wait)
>               scf_flags |= SCF_WAIT;
> 
> +     preempt_disable();
>       smp_call_function_many_cond(mask, func, info, scf_flags, cond_func);
> +     preempt_enable();
> }
> EXPORT_SYMBOL(on_each_cpu_cond_mask);

Indeed. I will add lockdep_assert_preemption_disabled() to
smp_call_function_many_cond() to prevent this mistake from recurring.
