On Mon, Oct 26, 2015 at 8:48 PM, Joe Stringer <joestrin...@nicira.com> wrote:
> On 26 October 2015 at 08:32, Andrey Korolyov <and...@xdel.ru> wrote:
>> Hi,
>>
>> As far as I can see, the default 200k limit for in-datapath rules is
>> not working exactly as suggested, because dpctl shows a far lower
>> number of active flows, even with max-idle set to 2.5 h:
>>
>> system@ovs-system:
>> lookups: hit:176489248433 missed:29538996446 lost:46962
>> flows: 6462
>> masks: hit:906990269299 total:5 hit/pkt:4.40
>>
>> The stdev of the flow count is about ten percent within a minute, so
>> OVS is behaving slightly differently than a bare look at the code
>> would suggest. Can anyone suggest a way to change the eviction rate so
>> that the datapath holds at least one tenth of the prefix limit, or is
>> it intentional to keep the in-datapath flow count this low? There is
>> indeed a small tail with a low hit rate, but most of those subnets are
>> definitely used more frequently than max-idle.
>
> If your flow table isn't sufficiently complex, then there may be no
> reason for OVS userspace to generate more datapath flows.
>
> If traffic isn't regularly hitting your flows, then OVS will evict
> those flows from the datapath.
>
> If OVS is not able to validate all datapath flows within a second,
> then it will reduce the number of datapath flows that it maintains.
>
> There have been significant improvements in how this works in v2.1.x
> (with threaded revalidation) and in v2.4.x (with UFID), so if you're
> not already running 2.4 or later, I suggest updating.
>
> Since you're interested in how datapath flow management works, you may
> get more insight from "Revaliwhat? Keeping kernel flows fresh" from
> the OVS fall conference 2014. It's on the OVS website.
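[For anyone landing here from the archives: the behavior Joe describes can
be observed directly. A minimal sketch of the relevant commands, assuming
a 2.4-era ovs-vswitchd with the userspace upcall handler (the exact
output fields vary by version, and the limit value below is only an
illustration, not a recommendation):]

```shell
# Show the revalidator's view: current datapath flow count and the
# dynamically adjusted flow limit (this is the value that shrinks when
# revalidation of all flows takes longer than about a second).
ovs-appctl upcall/show

# Show per-datapath flow/mask statistics, as quoted above in the thread.
ovs-dpctl show

# The ceiling the limit is adjusted under; defaults to 200000 flows.
ovs-vsctl set Open_vSwitch . other_config:flow-limit=200000

# Idle time (in ms) before an unused datapath flow is evicted.
ovs-vsctl set Open_vSwitch . other_config:max-idle=10000
```

[If upcall/show reports a flow limit well below flow-limit, the
revalidators are the bottleneck rather than eviction, which matches
Joe's explanation.]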
Thanks for the reference, Joe. The tables here are a bit unusual: there is
a dozen chained tables, with approximately half a million rules (a
full-view lookup table) in the last one, so I suppose the heuristics could
go wrong just because of that. This is running very close to 2.3-HEAD; I
will check whether 2.4 improves the situation. Actually, ovs-vswitchd just
eats too much for scattered lookups like the one I described. The 2.1
improvement in the past was indeed quite fantastic.
_______________________________________________
discuss mailing list
discuss@openvswitch.org
http://openvswitch.org/mailman/listinfo/discuss