Hi Dariusz,

> -----Original Message-----
> From: Dariusz Sosnowski <dsosnow...@nvidia.com>
> Sent: Wednesday, January 31, 2024 11:35 AM
>
> This patch reworks the async flow API functions called in the data path
> to reduce the overhead of flow operations at the library level.
> The main source of that overhead was the indirection and checks done
> while the ethdev library was fetching rte_flow_ops from a given driver.
>
> This patch introduces the rte_flow_fp_ops struct, which holds callbacks
> to a driver's implementation of the fast path async flow API functions.
> Each driver implementing these functions must populate the flow_fp_ops
> field inside the rte_eth_dev structure with a reference to its own
> implementation.
> By default, the ethdev library provides dummy callbacks whose
> implementations return ENOSYS.
> This design guarantees the following:
>
> - The rte_flow_fp_ops struct for a given port is always available.
> - Each callback is either:
>   - the default provided by the library, or
>   - the one set up by the driver.
>
> As a result, no checks for the availability of the implementation are
> needed at the library level in the data path.
> Any library-level validation checks in the async flow API are compiled
> in if and only if the RTE_FLOW_DEBUG macro is defined.
>
> These changes apply only to the following API functions:
>
> - rte_flow_async_create()
> - rte_flow_async_create_by_index()
> - rte_flow_async_actions_update()
> - rte_flow_async_destroy()
> - rte_flow_push()
> - rte_flow_pull()
> - rte_flow_async_action_handle_create()
> - rte_flow_async_action_handle_destroy()
> - rte_flow_async_action_handle_update()
> - rte_flow_async_action_handle_query()
> - rte_flow_async_action_handle_query_update()
> - rte_flow_async_action_list_handle_create()
> - rte_flow_async_action_list_handle_destroy()
> - rte_flow_async_action_list_handle_query_update()
>
> This patch also adjusts the mlx5 PMD to the introduced flow API changes.
>
> Signed-off-by: Dariusz Sosnowski <dsosnow...@nvidia.com>
> ---
> v2:
> - Fixed an mlx5 PMD build issue with older versions of rdma-core.
> ---
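To make the callback-table design above concrete, here is a minimal,
self-contained C sketch. All names below (flow_fp_ops, eth_dev,
lib_async_create(), my_pmd_*) are simplified stand-ins for illustration
only; they do not match the actual rte_flow_fp_ops layout, and the real
callbacks take the full async flow API parameter lists.

/*
 * Illustrative sketch only: simplified stand-ins for rte_flow_fp_ops,
 * rte_eth_dev and rte_flow_async_create(); the real callback signatures
 * carry the full async flow API parameter lists.
 */
#include <errno.h>
#include <stddef.h>
#include <stdio.h>

/* Simplified fast-path callback table (stand-in for rte_flow_fp_ops). */
struct flow_fp_ops {
	int (*async_create)(void *dev, unsigned int queue, void *user_data);
	int (*push)(void *dev, unsigned int queue);
};

/* Library defaults: always present, simply report "not supported". */
static int
dummy_async_create(void *dev, unsigned int queue, void *user_data)
{
	(void)dev; (void)queue; (void)user_data;
	return -ENOSYS;
}

static int
dummy_push(void *dev, unsigned int queue)
{
	(void)dev; (void)queue;
	return -ENOSYS;
}

static const struct flow_fp_ops default_fp_ops = {
	.async_create = dummy_async_create,
	.push = dummy_push,
};

/* Simplified device struct (stand-in for rte_eth_dev). */
struct eth_dev {
	const struct flow_fp_ops *flow_fp_ops;
};

/*
 * Library-level wrapper (stand-in for rte_flow_async_create()):
 * validation is compiled in only when RTE_FLOW_DEBUG is defined,
 * otherwise the data path is a single unconditional indirect call.
 */
static int
lib_async_create(struct eth_dev *dev, unsigned int queue, void *user_data)
{
#ifdef RTE_FLOW_DEBUG
	if (dev == NULL)
		return -EINVAL;
#endif
	return dev->flow_fp_ops->async_create(dev, queue, user_data);
}

/* A PMD overrides only the callbacks it implements, once, at setup time. */
static int
my_pmd_async_create(void *dev, unsigned int queue, void *user_data)
{
	(void)dev; (void)queue; (void)user_data;
	return 0; /* would enqueue the flow creation on the given queue */
}

static const struct flow_fp_ops my_pmd_fp_ops = {
	.async_create = my_pmd_async_create,
	.push = dummy_push, /* left at the library default */
};

int
main(void)
{
	struct eth_dev dev = { .flow_fp_ops = &default_fp_ops };

	/* Before driver registration: default callback returns -ENOSYS. */
	printf("default: %d\n", lib_async_create(&dev, 0, NULL));

	/* "Driver" registration: swap in the PMD's callback table. */
	dev.flow_fp_ops = &my_pmd_fp_ops;
	printf("pmd:     %d\n", lib_async_create(&dev, 0, NULL));
	return 0;
}

Because the table pointer is never NULL and every slot is populated,
either by the driver or by the library default, the data-path wrapper
needs no per-call availability branches; the only conditional left is
the RTE_FLOW_DEBUG-guarded validation.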
Acked-by: Ori Kam <or...@nvidia.com>

Best,
Ori