On Fri, Nov 20 2020 at 01:26, Frederic Weisbecker wrote:
> On Fri, Nov 13, 2020 at 03:02:21PM +0100, Thomas Gleixner wrote:
>> +void __local_bh_disable_ip(unsigned long ip, unsigned int cnt)
>> +{
>> +    unsigned long flags;
>> +    int newcnt;
>> +
>> +    WARN_ON_ONCE(in_hardirq());
>> +
>> +    /* First entry of a task into a BH disabled section? */
>> +    if (!current->softirq_disable_cnt) {
>> +            if (preemptible()) {
>> +                    local_lock(&softirq_ctrl.lock);
>> +                    rcu_read_lock();
>
> Ah, you take rcu_read_lock() because local_bh_disable() implies it,
> and since it no longer disables preemption, you have to do it
> explicitly?
>
> Perhaps local_lock() should itself imply rcu_read_lock()?

It's really only required for local_bh_disable(). Lemme add a comment.
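
Something along these lines (a sketch, exact comment wording TBD):

        /* First entry of a task into a BH disabled section? */
        if (!current->softirq_disable_cnt) {
                if (preemptible()) {
                        local_lock(&softirq_ctrl.lock);
                        /*
                         * local_bh_disable() implies a RCU read side
                         * critical section. As preemption stays
                         * enabled on RT, rcu_read_lock() has to be
                         * taken explicitly here.
                         */
                        rcu_read_lock();
                } else {
                        DEBUG_LOCKS_WARN_ON(this_cpu_read(softirq_ctrl.cnt));
                }
        }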

>> +            } else {
>> +                    DEBUG_LOCKS_WARN_ON(this_cpu_read(softirq_ctrl.cnt));
>> +            }
>> +    }
>> +
>> +    preempt_disable();
>
> Do you really need to disable preemption here? Migration is already
> disabled by local_lock(), and I can't figure out a scenario where the
> code below could conflict with a preempting task.

Indeed, it's pointless. Migration is already prevented by local_lock(),
so it can go; see the revised hunk below.

>> +    /*
>> +     * Track the per CPU softirq disabled state. On RT this is per CPU
>> +     * state to allow preemption of bottom half disabled sections.
>> +     */
>> +    newcnt = this_cpu_add_return(softirq_ctrl.cnt, cnt);
>
> __this_cpu_add_return()?

Yep.
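
With preempt_disable() dropped, the update would then look like this
(sketch; relies on local_lock() above, respectively the context being
non-preemptible, to keep the task on the CPU):

        /*
         * Track the per CPU softirq disabled state. On RT this is per
         * CPU state to allow preemption of bottom half disabled
         * sections.
         *
         * Dropping preempt_disable() is safe here: either
         * softirq_ctrl.lock is held, which prevents migration and
         * makes any other task entering a BH disabled section on this
         * CPU block on the lock, or the context is not preemptible in
         * the first place. So the read-modify-write cannot be
         * interleaved and __this_cpu_add_return() is sufficient.
         */
        newcnt = __this_cpu_add_return(softirq_ctrl.cnt, cnt);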

Thanks,

        tglx
