Hello Ilya,

On Mon, Jul 31, 2023 at 10:40 PM Ilya Maximets <i.maxim...@ovn.org> wrote:
> On 6/21/23 16:43, David Marchand wrote:
> > As reported by Ilya [1], unconditionally calling
> > rte_flow_get_restore_info() impacts an application performance for drivers
> > that do not provide this ops.
> > It could also impact processing of packets that require no call to
> > rte_flow_get_restore_info() at all.
> >
> > Register a dynamic mbuf flag when an application negotiates tunnel
> > metadata delivery (calling rte_eth_rx_metadata_negotiate() with
> > RTE_ETH_RX_METADATA_TUNNEL_ID).
> >
> > Drivers then advertise that metadata can be extracted by setting this
> > dynamic flag in each mbuf.
> >
> > The application then calls rte_flow_get_restore_info() only when required.
> >
> > Link:
> > http://inbox.dpdk.org/dev/5248c2ca-f2a6-3fb0-38b8-7f659bfa4...@ovn.org/
> > Signed-off-by: David Marchand <david.march...@redhat.com>
> > Acked-by: Andrew Rybchenko <andrew.rybche...@oktetlabs.ru>
> > Acked-by: Viacheslav Ovsiienko <viachesl...@nvidia.com>
> > Tested-by: Ali Alnubani <alia...@nvidia.com>
> > Acked-by: Ori Kam <or...@nvidia.com>
> > ---
> > Changes since RFC v3:
> > - rebased on next-net,
> > - sending as non RFC for CIs that skip RFC patches,
> >
> > Changes since RFC v2:
> > - fixed crash introduced in v2 and removed unneeded argument to
> >   rte_flow_restore_info_dynflag_register(),
> >
> > Changes since RFC v1:
> > - rebased,
> > - updated vectorized datapath functions for net/mlx5,
> > - moved dynamic flag register to rte_eth_rx_metadata_negotiate() and
> >   hid rte_flow_restore_info_dynflag_register() into ethdev internals,
> >
> > ---
> >  app/test-pmd/util.c                      |  9 +++--
> >  drivers/net/mlx5/mlx5.c                  |  2 +
> >  drivers/net/mlx5/mlx5.h                  |  5 ++-
> >  drivers/net/mlx5/mlx5_flow.c             | 47 +++++++++++++++++++++---
> >  drivers/net/mlx5/mlx5_rx.c               |  2 +-
> >  drivers/net/mlx5/mlx5_rx.h               |  1 +
> >  drivers/net/mlx5/mlx5_rxtx_vec_altivec.h | 16 ++++----
> >  drivers/net/mlx5/mlx5_rxtx_vec_neon.h    |  6 +--
> >  drivers/net/mlx5/mlx5_rxtx_vec_sse.h     |  6 +--
> >  drivers/net/mlx5/mlx5_trigger.c          |  4 +-
> >  drivers/net/sfc/sfc_dp.c                 | 14 +------
> >  lib/ethdev/rte_ethdev.c                  |  5 +++
> >  lib/ethdev/rte_flow.c                    | 27 ++++++++++++++
> >  lib/ethdev/rte_flow.h                    | 18 ++++++++-
> >  lib/ethdev/rte_flow_driver.h             |  6 +++
> >  lib/ethdev/version.map                   |  1 +
> >  16 files changed, 128 insertions(+), 41 deletions(-)
>
> <snip>
>
> > diff --git a/lib/ethdev/rte_flow_driver.h b/lib/ethdev/rte_flow_driver.h
> > index 356b60f523..f9fb01b8a2 100644
> > --- a/lib/ethdev/rte_flow_driver.h
> > +++ b/lib/ethdev/rte_flow_driver.h
> > @@ -376,6 +376,12 @@ struct rte_flow_ops {
> >  const struct rte_flow_ops *
> >  rte_flow_ops_get(uint16_t port_id, struct rte_flow_error *error);
> >
> > +/**
> > + * Register mbuf dynamic flag for rte_flow_get_restore_info.
> > + */
> > +int
> > +rte_flow_restore_info_dynflag_register(void);
> > +
>
> Hi, David, others.
>
> Is there a reason to not expose this function to the application?
>
> The point is that application will likely want to know the value
> of the flag before creating any devices.  I.e. request it once
> and use for all devices later without performing a call to an
> external library (DPDK).  In current implementation, application
> will need to open some device first, and only then the result of
> rte_flow_restore_info_dynflag() will become meaningful.
>
> There is no need to require application to call this function,
> it can still be called from the rx negotiation API, but it would
> be nice if application could know it beforehand, i.e. had control
> over when the flag is actually becomes visible.
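For illustration, here is a minimal application-side sketch of the
mechanism the quoted patch describes: the driver sets the negotiated
dynamic flag on mbufs that carry restore info, and the application calls
rte_flow_get_restore_info() only for those packets. The helper
process_tunnel_metadata() and the variable names are placeholders for
this sketch, not DPDK or OVS code.

#include <stdint.h>

#include <rte_ethdev.h>
#include <rte_flow.h>
#include <rte_mbuf.h>

/* Value returned by rte_flow_restore_info_dynflag(); 0 when the flag
 * was never registered, i.e. no packet can carry restore info. */
static uint64_t restore_info_flag;

/* Hypothetical application hook, not a DPDK API. */
static void
process_tunnel_metadata(struct rte_mbuf *m,
                        const struct rte_flow_tunnel *tunnel)
{
        (void)m;
        (void)tunnel;
        /* e.g. record tunnel->tun_id for the vswitch datapath. */
}

static void
handle_rx_burst(uint16_t port_id, struct rte_mbuf **pkts, uint16_t nb)
{
        struct rte_flow_restore_info info;
        struct rte_flow_error error;
        uint16_t i;

        for (i = 0; i < nb; i++) {
                struct rte_mbuf *m = pkts[i];

                /* Only query restore info when the driver marked the mbuf. */
                if ((m->ol_flags & restore_info_flag) == 0)
                        continue;

                if (rte_flow_get_restore_info(port_id, m, &info, &error) == 0 &&
                    (info.flags & RTE_FLOW_RESTORE_INFO_TUNNEL))
                        process_tunnel_metadata(m, &info.tunnel);
        }
}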
DPDK tries to register dynamic flags only when needed, as there is not a
lot of space for them in the mbuf: some drivers claim part of it, and
applications want their share too.

DPDK can export the _register function so that applications can call it
regardless of which driver will be used later. Yet, I want to understand
why it matters in the OVS context. Is it not enough to resolve the flag
(by calling rte_flow_restore_info_dynflag()) once
rte_eth_rx_metadata_negotiate() has been called for tunnel metadata?
Do you want to avoid an atomic store/load between the OVS main thread
and the PMD threads?


--
David Marchand
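For reference, a minimal sketch of the control-path flow suggested
above, assuming the application negotiates tunnel metadata, resolves
the flag once, and publishes it to the PMD threads with the atomic
store/load mentioned; the function and variable names are illustrative
only, not actual OVS code.

#include <stdbool.h>
#include <stdint.h>

#include <rte_ethdev.h>
#include <rte_flow.h>
#include <rte_mbuf.h>

/* Written once by the main (configuration) thread, read by PMD threads. */
static uint64_t restore_info_flag;  /* 0 means restore info is unavailable. */

/* Main thread, before rte_eth_dev_configure(). */
static int
negotiate_tunnel_metadata(uint16_t port_id)
{
        uint64_t features = RTE_ETH_RX_METADATA_TUNNEL_ID;
        int ret;

        /* With the patch applied, successfully negotiating TUNNEL_ID also
         * registers the dynamic mbuf flag inside ethdev. */
        ret = rte_eth_rx_metadata_negotiate(port_id, &features);
        if (ret != 0 || (features & RTE_ETH_RX_METADATA_TUNNEL_ID) == 0)
                return ret;

        /* Resolve the flag value once and publish it to the PMD threads. */
        __atomic_store_n(&restore_info_flag, rte_flow_restore_info_dynflag(),
                         __ATOMIC_RELEASE);
        return 0;
}

/* PMD thread, before deciding to call rte_flow_get_restore_info(). */
static inline bool
mbuf_may_have_restore_info(const struct rte_mbuf *m)
{
        uint64_t flag = __atomic_load_n(&restore_info_flag, __ATOMIC_RELAXED);

        return flag != 0 && (m->ol_flags & flag) != 0;
}

A relaxed load is likely sufficient on the fast path here, since the
published value only ever transitions from 0 to its final value and the
flag registration happens inside ethdev before the store.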