On 31 August 2013 06:27, Jesse Gross <je...@nicira.com> wrote:
> On Wed, Aug 28, 2013 at 2:57 AM, Viresh Kumar <viresh.ku...@linaro.org> wrote:
Sorry for the delayed response.. I was on holidays for a week and then
was busy with CPUFreq bugs last week..

>> Which should have worked for both RT and non-RT kernels..
>
> I think that will work for the loop checker but executing the actions
> could be a very long code path - lots of actions, trips through the IP
> stack, etc. It doesn't seem like a great idea to hold a spin lock for
> this amount of time, particularly if the goal is preemptibility.

Is this piece of code reentrant? I don't think so, otherwise there would
already be races updating loop_counters.. So a spin_lock shouldn't be a
problem for the non-RT case.

For RT, spinlocks don't spin (unless we use the raw versions of them);
they are simply mutexes, so the holder can sleep and be preempted. So I
don't see any issue in the RT case either, at least performance- or
RT-behaviour-wise. I have put a small sketch of what I mean at the end
of this mail [1].

>> There is only one place where they are using local_bh_disable() for
>> RT kernel with following patch:
>
> I doubt that the solution would be to turn off bottom halves here. I
> would look for the counter code and see what they do with it
> specifically. Have you found anything?

That's the non-RT behaviour. For RT, local_bh_disable() does only this:

void local_bh_disable(void)
{
        migrate_disable();
        current->softirq_nestcnt++;
}

So it is equivalent to migrate_disable(), which wouldn't be sufficient
for our case, where we need to avoid races: the task can no longer
migrate to another CPU, but it can still be preempted on the same CPU
([2] below tries to make that race concrete).

>>>>> Have you also audited the other use of per-CPU variables?
>>>>
>>>> Yes, I did. But couldn't find anything that would be broken with RT..
>>>> Per-cpu doesn't have a problem with RT, only the access to those
>>>> can be an issue.. And if things are serialized enough, then there
>>>> shouldn't be any issue, I hope.
>>>
>>> Can't you be interrupted/preempted while modifying the counters?
>>
>> Yes, you can be. But that will happen today (non-RT) as well without
>> proper locking or something like local_bh_disable()..
>>
>> And I thought if we aren't doing local_bh_disable() at multiple places
>> then proper locking is there with spin_lock_bh() or similar.. And I see
>> such lockings in place at most of the places..
>
> Most of the network stack operates with bottom halves disabled so
> packets ingressing OVS from anywhere except userspace will already
> have them turned off. Since we also turn off bottom halves with
> packets coming from userspace, this is used as a synchronization
> mechanism.

Probably the same is still true with RT, otherwise I would have found
patches for it in the RT tree.. Though this is exactly the implicit
guarantee I am worried about for the per-CPU stats ([3] below).

> By the way, I don't have a lot of knowledge about the RT patchset,
> particularly recent changes. You might want to ask someone with more
> direct knowledge.

I need somebody who is ready to look into out-of-tree OVS code.. Don't
know if the RT guys would help me here.. But I believe my current code
would work without many problems, unless there is something silly in
it :)

Thanks.
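[1] A minimal sketch of what I mean by "spinlocks are fine here on RT".
This is illustrative only: the identifiers (loop_counters, loop_locks,
MAX_LOOPS, enter_loop) are made up, not the actual OVS ones. The point
is that on RT a spinlock_t is an rt_mutex underneath, so holding it
across a long actions path only blocks other would-be holders; it does
not kill preemptibility:

#include <linux/percpu.h>
#include <linux/spinlock.h>
#include <linux/errno.h>

#define MAX_LOOPS       5       /* made-up recursion limit */

struct loop_counter {
        int count;
};

static DEFINE_PER_CPU(struct loop_counter, loop_counters);
static DEFINE_PER_CPU(spinlock_t, loop_locks);  /* spin_lock_init() for
                                                 * each CPU at module
                                                 * init, omitted here */

static int enter_loop(void)
{
        /* raw_smp_processor_id(): we may migrate right after reading
         * it, but we always lock the same counter that we modify, so
         * there is no data race either way. */
        int cpu = raw_smp_processor_id();
        spinlock_t *lock = per_cpu_ptr(&loop_locks, cpu);
        struct loop_counter *loop = per_cpu_ptr(&loop_counters, cpu);
        int err = 0;

        spin_lock(lock);        /* spins on non-RT, sleeps on RT */
        if (unlikely(++loop->count > MAX_LOOPS))
                err = -ELOOP;
        spin_unlock(lock);
        return err;
}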
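[2] Why migrate_disable() (i.e. RT's local_bh_disable()) alone doesn't
avoid races on per-CPU data. Both tasks end up pinned to the same CPU,
but either can still be preempted in the middle of a counter update,
losing an increment. pkt_count and count_packet are hypothetical names,
just for illustration:

#include <linux/percpu.h>
#include <linux/preempt.h>
#include <linux/types.h>

static DEFINE_PER_CPU(u64, pkt_count);  /* hypothetical counter */

static void count_packet(void)
{
        u64 *p;

        migrate_disable();      /* all RT's local_bh_disable() buys us */
        p = this_cpu_ptr(&pkt_count);
        /* The compiler is free to emit a separate load and store here
         * (and must, on load/store architectures like ARM).  If task B
         * preempts us between the two and does the same increment, one
         * of the two increments is lost. */
        *p = *p + 1;
        migrate_enable();
}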
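[3] The per-CPU stats pattern I am worried about. If I read the
datapath code correctly, writers to the per-CPU stats are serialized
only by the implicit "bottom halves are off, so we can't be preempted
on this CPU" rule that you described; u64_stats_sync only makes the
64-bit reads safe on 32-bit machines, it does not exclude concurrent
writers. On RT, where local_bh_disable() no longer disables preemption,
that implicit serialization is gone. (Struct and field names below are
simplified from the real ones.)

#include <linux/types.h>
#include <linux/u64_stats_sync.h>

struct dp_stats_percpu {
        u64 n_hit;
        struct u64_stats_sync sync;
};

static void update_stats(struct dp_stats_percpu *stats)
{
        /* Marks a write section for readers; provides no writer
         * exclusion, so two preempting writers on the same CPU can
         * still corrupt n_hit on RT. */
        u64_stats_update_begin(&stats->sync);
        stats->n_hit++;
        u64_stats_update_end(&stats->sync);
}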