When multiple hardware ports are connected to a single DPDK port (mhpsdp), there is currently no information indicating which hardware port a packet was received on.
Introduce a new phy affinity item in the rte_flow API; its value reflects the physical port affinity of received packets. Add a tx_phy_affinity setting to the Tx queue API, where the affinity value selects the hardware port on which packets are sent. Add nb_phy_ports to the device info; a value greater than 0 gives the number of physical ports connected to the DPDK port.

By matching on phy affinity in a flow rule and setting the same affinity on a Tx queue, a packet can be sent from the same hardware port on which it was received.

RFC: http://patches.dpdk.org/project/dpdk/cover/20221221102934.13822-1-jiaw...@nvidia.com/

v3:
* Update exception rule
* Update the commit log
* Add the description for PHY affinity and numbering definition
* Add the number of physical ports into device info
* Change the patch order

v2: Update based on the comments

Jiawei Wang (2):
  ethdev: introduce the PHY affinity field in Tx queue API
  ethdev: add PHY affinity match item

 app/test-pmd/cmdline.c                      | 100 ++++++++++++++++++++
 app/test-pmd/cmdline_flow.c                 |  29 ++++++
 app/test-pmd/config.c                       |   1 +
 devtools/libabigail.abignore                |   5 +
 doc/guides/prog_guide/rte_flow.rst          |   8 ++
 doc/guides/rel_notes/release_23_03.rst      |   6 ++
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  17 ++++
 lib/ethdev/rte_ethdev.h                     |  13 ++-
 lib/ethdev/rte_flow.c                       |   1 +
 lib/ethdev/rte_flow.h                       |  33 +++++++
 10 files changed, 212 insertions(+), 1 deletion(-)

-- 
2.18.1
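For illustration, the intended usage of the series can be sketched roughly as below: match the phy affinity of received packets in an rte_flow rule and configure a Tx queue with the same affinity, so replies leave on the same hardware port. This is a non-authoritative sketch against the proposed API: the names RTE_FLOW_ITEM_TYPE_PHY_AFFINITY, struct rte_flow_item_phy_affinity, txconf.tx_phy_affinity, and dev_info.nb_phy_ports are assumptions inferred from the patch titles and may differ from the merged code; the rest (rte_eth_dev_info_get, rte_eth_tx_queue_setup, rte_flow_create) is the standard ethdev/rte_flow API.

```c
/* Sketch only: PHY-affinity names below are assumptions based on this
 * cover letter, not a confirmed upstream API. */
#include <rte_ethdev.h>
#include <rte_flow.h>

static int
setup_phy_affinity(uint16_t port_id)
{
	struct rte_eth_dev_info dev_info;
	struct rte_flow_error error;

	rte_eth_dev_info_get(port_id, &dev_info);
	/* nb_phy_ports > 0 means several physical ports back this
	 * DPDK port; 0 means affinity does not apply (assumed field). */
	if (dev_info.nb_phy_ports == 0)
		return 0;

	/* Match packets received on physical port 1 (numbering assumed
	 * to start from 1, with 0 meaning "no affinity"). */
	struct rte_flow_item_phy_affinity aff = { .affinity = 1 };
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_PHY_AFFINITY, .spec = &aff },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	/* Steer the matched packets to Rx/Tx queue pair 0. */
	struct rte_flow_action_queue queue = { .index = 0 };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	struct rte_flow_attr attr = { .ingress = 1 };

	if (rte_flow_create(port_id, &attr, pattern, actions, &error) == NULL)
		return -1;

	/* Give Tx queue 0 the same affinity, so packets sent on it go
	 * out through physical port 1 (tx_phy_affinity is the field this
	 * series adds to struct rte_eth_txconf). */
	struct rte_eth_txconf txconf = dev_info.default_txconf;
	txconf.tx_phy_affinity = 1;
	return rte_eth_tx_queue_setup(port_id, 0, 512,
			rte_eth_dev_socket_id(port_id), &txconf);
}
```

With this pairing, traffic received on physical port 1 lands on queue 0, and anything transmitted from queue 0 leaves through the same physical port.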