On 25 September 2013 05:51, Jesse Gross <je...@nicira.com> wrote:
> On Mon, Sep 23, 2013 at 11:20 PM, Viresh Kumar <viresh.ku...@linaro.org> 
> wrote:

>> static inline void rcu_read_lock_bh(void)
>> {
>>         local_bh_disable();
>> #ifdef CONFIG_PREEMPT_RT_FULL
>>         rcu_read_lock();
>> #else
>>         __acquire(RCU_BH);
>>         rcu_lock_acquire(&rcu_bh_lock_map);
>>         rcu_lockdep_assert(!rcu_is_cpu_idle(),
>>                            "rcu_read_lock_bh() used illegally while idle");
>> #endif
>> }
>>
>> And rcu_read_lock() disables preemption (at least when
>> CONFIG_PREEMPT_RCU isn't enabled), which would safeguard
>> our counters from being accessed simultaneously.
>
> Yes and local_bh_disable() will disable preemption even for the cases
> where CONFIG_PREEMPT_RCU is set. That means that in the current code
> there are neither interrupts nor preemption for all packet processing.
>
>> So that code wouldn't break for the RT use case.
>
> I'm not sure that follows. In the RT world, all of that basically gets
> converted into migrate_disable().

local_bh_disable() gets converted into migrate_disable(), but
rcu_read_lock() (which is only called for RT) is not, and it
disables preemption even on RT.
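
In other words (an illustrative interleaving, not real code):
migrate_disable() keeps a task on its CPU but still lets another task
on that same CPU preempt it, so an unlocked per-CPU read-modify-write
can lose updates:

```c
/* Two tasks on the same CPU, both inside migrate_disable():
 *
 *   task A: tmp = stats->rx_packets;        reads N
 *     -- task B preempts A, bumps the counter to N + 1 --
 *   task A: stats->rx_packets = tmp + 1;    writes N + 1; B's update lost
 */
```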

> I looked into the handling of per-CPU on RT kernels some more and
> basically they assume that there is a lock protecting the data
> (possibly per-CPU). That is true in the case of the loop checker in
> net/core/dev.c but not in the out-of-tree OVS code or in any of the
> stats tracking. So you would need some variation of your original
> patch.
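
One shape such a variation might take (a sketch only, assuming
per-CPU stats guarded by a per-CPU lock; the struct and function
names are hypothetical, not the actual patch):

```c
/* Sketch: per-CPU stats protected by a per-CPU spinlock.  On RT the
 * spinlock becomes a sleeping lock and serializes tasks that end up
 * sharing a CPU, which is what the plain increments were missing. */
struct pkt_stats {
        spinlock_t lock;
        u64 rx_packets;
        u64 rx_bytes;
};

DEFINE_PER_CPU(struct pkt_stats, pkt_stats);

static void count_packet(unsigned int len)
{
        struct pkt_stats *stats = get_cpu_ptr(&pkt_stats);

        spin_lock(&stats->lock);
        stats->rx_packets++;
        stats->rx_bytes += len;
        spin_unlock(&stats->lock);
        put_cpu_ptr(&pkt_stats);
}
```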

Which per-cpu lock are you talking about in dev.c?
_______________________________________________
dev mailing list
dev@openvswitch.org
http://openvswitch.org/mailman/listinfo/dev