Vlad Buslov <vla...@mellanox.com> writes:

> On Tue 22 Oct 2019 at 17:35, Marcelo Ricardo Leitner <mleit...@redhat.com> wrote:
>> On Tue, Oct 22, 2019 at 05:17:51PM +0300, Vlad Buslov wrote:
>>> Currently, a significant fraction of CPU time during TC filter
>>> allocation is spent in the percpu allocator. Moreover, the percpu
>>> allocator is protected by a single global mutex, which negates any
>>> potential to improve its performance by means of recent developments
>>> in the TC filter update API that removed the rtnl lock for some
>>> Qdiscs and classifiers. In order to significantly improve the filter
>>> update rate and reduce memory usage, we would like to allow users to
>>> skip percpu counter allocation for a specific action if they don't
>>> expect a high traffic rate hitting the action, which is a reasonable
>>> expectation for a hardware-offloaded setup. In that case any
>>> potential software fast-path performance gains from percpu-allocated
>>> counters, compared to regular integer counters protected by a
>>> spinlock, are not important, but the additional CPU and memory they
>>> consume is significant.
>>
>> Yes!
>>
>> I wonder how this can play together with conntrack offloading. With
>> it the sw datapath will be used more, as a conntrack entry can only
>> be offloaded after the handshake. That said, the host may have to
>> process quite a few handshakes in the sw datapath. It seems OvS can
>> then just not set this flag in act_ct (and others for this rule), and
>> such cases will be able to leverage the percpu stats. Right?
>
> The flag is set per action instance, so the client can choose not to
> use the flag on a case-by-case basis. The conntrack use case requires
> further investigation, since I'm not entirely convinced that handling
> the first few packets in sw (before the connection reaches established
> state and is offloaded) warrants having a percpu counter.
Hi Vlad,

Did you consider using TCA_ROOT_FLAGS instead of adding another
per-action 32-bit flag?
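For context on what skipping the allocation buys: below is a rough
sketch of how the stats setup in tcf_idr_create() becomes conditional.
The flag and helper names are illustrative (taken from this series, not
confirmed uapi), but the allocation calls are the ones the kernel uses
today:

	/* Sketch only; symbol names follow the series under review. */
	static int tcf_action_init_stats(struct tc_action *a, u32 flags)
	{
		if (flags & TCA_ACT_FLAGS_NO_PERCPU_STATS) {
			/* Plain counters embedded in the action
			 * (tcfa_bstats/tcfa_qstats), updated under
			 * tcfa_lock: no trip through the percpu
			 * allocator, so no contention on
			 * pcpu_alloc_mutex at insert time.
			 */
			return 0;
		}

		/* Default today: lock-free datapath counters, but each
		 * allocation below serializes on the global percpu
		 * allocator mutex.
		 */
		a->cpu_bstats = netdev_alloc_pcpu_stats(struct gnet_stats_basic_cpu);
		if (!a->cpu_bstats)
			return -ENOMEM;

		a->cpu_qstats = alloc_percpu(struct gnet_stats_queue);
		if (!a->cpu_qstats) {
			free_percpu(a->cpu_bstats);
			return -ENOMEM;
		}
		return 0;
	}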
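And to make the placement question concrete, here is roughly where the
bit would be consumed in each variant (TCA_ACT_FLAGS is the new
per-action attribute from this series, so treat it as proposed rather
than settled; TCA_ROOT_FLAGS already exists as an NLA_BITFIELD32 at the
request level):

	/* Variant A (this series): per-action bitfield32, parsed for
	 * each action during init, so every action instance can opt
	 * out of percpu stats independently.
	 */
	static u32 act_flags_from_attrs(struct nlattr **tb)
	{
		struct nla_bitfield32 fl = {};

		if (tb[TCA_ACT_FLAGS])
			fl = nla_get_bitfield32(tb[TCA_ACT_FLAGS]);
		return fl.value;
	}

	/* Variant B (TCA_ROOT_FLAGS): one bit parsed once per request
	 * in tc_ctl_action() and applied to all actions in it; fewer
	 * attribute bytes per action, but no per-instance granularity
	 * within a batch.
	 */

The per-instance granularity seems to matter for exactly the conntrack
case Marcelo raised: with a per-action flag, OvS could keep percpu
stats for act_ct actions while skipping them everywhere else, which a
request-level TCA_ROOT_FLAGS bit would not express as naturally.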