Thanks for the review.

On 10 July 2014 21:13, Thomas Graf <tg...@noironetworks.com> wrote:
> If I understand the code correctly the gain is only visible on
> consecutive dumps of the same flow. How about constructing the cache
> when you require it for the first time? That avoids the cost on flow
> setup.

Correct. My main concern with constructing the cache the first time it
is required is how it gets synchronised. Flow dump runs under RCU, and
I don't really know what the thread-safety requirements are for this
code, or what options are available to address them. Is it viable to
check that the pointer is NULL, allocate the cache, perform an atomic
swap, and free the (potentially NULL) returned pointer? I've appended
a rough sketch of what I mean at the end of this mail.

The alternative that I mentioned below the commit message is to copy
the key/mask directly from the original netlink message, adjusting the
mask for any fields that have wildcarded bits where the kernel only
supports full masking. This should be cheaper than converting to
sw_flow and back again, as this patch currently does. Furthermore, it
shouldn't require locking, so the critical section can remain the same
size as it is now. A sketch of this also follows below.

> > /* Called with ovs_mutex or RCU read lock. */
> >  static int ovs_flow_cmd_fill_match(struct datapath *dp,
> >                                     const struct sw_flow *flow,
> > +                                   const struct sk_buff *nl_cache,
> >                                     struct sk_buff *skb)
> >  {
> >      struct nlattr *nla;
> >      int err;
> >
> > +    if (skb_tailroom(skb) < ovs_nl_match_size())
> > +        goto nla_put_failure;
>
> The check is not wrong but it would be semantically more correct to
> put it into the branch below as the nla_ API does the tailroom check
> as well.

I had considered creating a separate patch that adds this type of
check to each of the new fill_* functions, as they could detect the
lack of space much earlier. But I guess that this case isn't common
enough to worry about. I'll fix it up as you requested; a sketch of
the fixup is below as well.
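First, the lazy-construction idea. This is only a sketch of the
cmpxchg() flavour of the "allocate, then atomically publish" scheme I
asked about above, not a claim about what the final patch should look
like; the dump_cache field in struct sw_flow and the
flow_dump_cache_build() helper are made up for illustration.

/* Build the dump cache on first use and publish it atomically, so
 * that concurrent RCU dumpers race safely: the loser of the cmpxchg()
 * frees its copy and uses the winner's. */
static struct sk_buff *flow_dump_cache_get(struct sw_flow *flow)
{
	struct sk_buff *cache, *new_cache;

	cache = ACCESS_ONCE(flow->dump_cache);	/* NULL until first dump */
	if (cache)
		return cache;

	new_cache = flow_dump_cache_build(flow);	/* hypothetical helper */
	if (!new_cache)
		return NULL;

	/* Publish; only one allocation can win the race. */
	cache = cmpxchg(&flow->dump_cache, NULL, new_cache);
	if (cache) {
		kfree_skb(new_cache);	/* someone beat us to it */
		return cache;
	}
	return new_cache;
}

Whichever dumper loses the race just throws its skb away, so no lock
is taken on the dump path. Flow teardown would additionally need to
kfree_skb() the cache once the flow is no longer visible to RCU
readers.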
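Second, the copy-based alternative. Again just a sketch under invented
names (neither orig_nl_match nor fixup_exact_match_masks() exists);
the point is that the original key/mask blob is copied verbatim and
only the masks that the kernel widened to exact-match need rewriting,
with no sw_flow round trip and no locking:

/* Fill the reply directly from the netlink attributes that were
 * supplied at flow setup time. */
static int flow_fill_match_from_orig(const struct sw_flow *flow,
				     struct sk_buff *skb)
{
	const struct sk_buff *orig = flow->orig_nl_match; /* hypothetical */

	if (skb_tailroom(skb) < orig->len)
		return -EMSGSIZE;

	memcpy(skb_put(skb, orig->len), orig->data, orig->len);

	/* Force full masks for fields where the kernel only supports
	 * exact matching, since the installed flow is stricter than
	 * what userspace originally asked for. */
	fixup_exact_match_masks(skb);	/* hypothetical */
	return 0;
}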
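Finally, the fixup you asked for, roughly. I'm assuming the branch in
question is the one that copies from nl_cache; the elided path is
unchanged from this patch:

static int ovs_flow_cmd_fill_match(struct datapath *dp,
				   const struct sw_flow *flow,
				   const struct sk_buff *nl_cache,
				   struct sk_buff *skb)
{
	if (nl_cache) {
		/* The memcpy() below bypasses the nla_* API, so this
		 * path keeps an explicit tailroom check. */
		if (skb_tailroom(skb) < nl_cache->len)
			return -EMSGSIZE;
		memcpy(skb_put(skb, nl_cache->len),
		       nl_cache->data, nl_cache->len);
		return 0;
	}

	/* The attribute-building path relies on the tailroom checks
	 * that nla_nest_start() and friends already perform. */
	/* ... existing key/mask serialisation as in this patch ... */
	return 0;
}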