On 07/11/14 at 11:29pm, Joe Stringer wrote:
> I'm skeptical of taking the ovs_lock(). The current patch performs this
> key/mask cache construction inside ovs_lock() as part of the critical
> section for flow install. If we perform this during flow_dump, but extend
> the ovs_lock to the entire flow dump operation, that has the potential to
> make things worse due to the added contention. If I follow correctly, the
> mspin_lock() bottleneck after the current patch is applied is already
> measuring contention on the ovs_mutex when revalidators attempt to delete
> flows. That's with flow_dump as RCU.
We would actually only hold the lock while constructing each individual
message, but it is probably not worth it. A cmpxchg would be expensive as
well. Your idea of fixing up the original flow as provided by the user
seems like the optimal solution.

> Another sidenote:
> > Looking at the code, OVS does not handle NLM_F_DUMP_INTR in user space
> > yet and the kernel dump does not call genl_dump_check_consistent() yet
> > to provide the flag. So what can currently happen even without your
> > patch is that if the dump requires more than one Netlink message, the
> > RCU critical section is left and thus a list deletion or insertion of a
> > flow may occur in the middle of the dump and thus provide a slightly
> > incorrect set of flows.
>
> Yes, I believe this is done deliberately for performance reasons. OVS
> userspace doesn't assume that it's getting a consistent snapshot of the
> flow table; it handles cases where flows are duplicated or missing in the
> dump. An idea has been floated to keep track of flow adds/deletes in the
> kernel to counteract this, but I got distracted by this idea first :-)

Yes. Other code paths do the same thing, which is why the dump-interrupted
flag has been introduced to ease detection on the receiver side.

_______________________________________________
dev mailing list
dev@openvswitch.org
http://openvswitch.org/mailman/listinfo/dev
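For anyone following along: the consistency mechanism being discussed works by
comparing a sequence number captured when the dump starts against the table's
current change counter, and flagging the dump as interrupted (so the kernel can
set NLM_F_DUMP_INTR on the reply) if they differ. Below is a minimal userspace
sketch of that idea only; the struct and function names are hypothetical
stand-ins, not the actual kernel API (in the kernel this is
genl_dump_check_consistent() operating on netlink_callback state).

```c
#include <stdint.h>

/* Stand-in for the flow table: change_seq bumps on every insert/delete. */
struct flow_table {
    uint64_t change_seq;
};

/* Per-dump state, playing the role of netlink_callback's seq field. */
struct dump_state {
    uint64_t seq;      /* table change_seq captured when the dump began */
    int interrupted;   /* becomes 1 if the table changed mid-dump */
};

static void table_modify(struct flow_table *t)
{
    t->change_seq++;   /* a flow was added or deleted */
}

static void dump_start(struct dump_state *d, const struct flow_table *t)
{
    d->seq = t->change_seq;
    d->interrupted = 0;
}

/* Called before emitting each message batch. Mirrors the effect of
 * genl_dump_check_consistent(): once the table has changed since the dump
 * began, the dump stays flagged so the interrupted bit can be reported. */
static int dump_check_consistent(struct dump_state *d,
                                 const struct flow_table *t)
{
    if (d->seq != t->change_seq)
        d->interrupted = 1;
    return d->interrupted;
}
```

A receiver that sees the dump-interrupted flag can simply restart the dump,
which is what makes this approach cheap compared to holding a lock across the
entire multi-message operation.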