Hi,

> -----Original Message-----
> From: Ferruh Yigit <ferruh.yi...@amd.com>
> Sent: Friday, February 10, 2023 3:45 AM
> To: Jiawei(Jonny) Wang <jiaw...@nvidia.com>; Slava Ovsiienko
> <viachesl...@nvidia.com>; Ori Kam <or...@nvidia.com>; NBU-Contact-
> Thomas Monjalon (EXTERNAL) <tho...@monjalon.net>;
> andrew.rybche...@oktetlabs.ru; Aman Singh <aman.deep.si...@intel.com>;
> Yuying Zhang <yuying.zh...@intel.com>
> Cc: dev@dpdk.org; Raslan Darawsheh <rasl...@nvidia.com>
> Subject: Re: [PATCH v4 1/2] ethdev: introduce the PHY affinity field in Tx
> queue API
>
> On 2/3/2023 1:33 PM, Jiawei Wang wrote:
> > When multiple physical ports are connected to a single DPDK port
> > (example: kernel bonding, DPDK bonding, failsafe, etc.), we want to
> > know which physical port is used for Rx and Tx.
> >
>
> I assume "kernel bonding" is out of context, but this patch concerns DPDK
> bonding, failsafe or softnic. (I will refer to them as virtual bonding
> devices.)
>
''kernel bonding'' can be thought of as Linux kernel bonding.

> To use specific queues of the virtual bonding device may interfere with the
> logic of these devices, like bonding modes or RSS of the underlying devices.
> I can see the feature focuses on a very specific use case, but I am not sure
> all possible side effects have been taken into consideration.
>
>
> And although the feature is only relevant to virtual bonding devices, core
> ethdev structures are updated for this. Most use cases won't need these, so
> is there a way to reduce the scope of the changes to virtual bonding devices?
>
>
> There are a few very core ethdev APIs, like:
> rte_eth_dev_configure()
> rte_eth_tx_queue_setup()
> rte_eth_rx_queue_setup()
> rte_eth_dev_start()
> rte_eth_dev_info_get()
>
> Almost every user of ethdev uses these APIs; since they are so fundamental,
> I am for being a little more conservative with these APIs.
>
> Eccentric features target these APIs first because they are common and
> extending them gives an easy solution, but in the long run this makes the
> APIs more complex, harder to maintain and harder for PMDs to support
> correctly. So I am for not updating them unless it is a generic use case.
>
>
> Also, as we talked about PMDs supporting them, I assume your coming PMD
> patch will implement the 'tx_phy_affinity' config option only for mlx
> drivers. What will happen for other NICs? Will they silently ignore the
> config option from the user? That is a problem for DPDK application
> portability.
>

Yes, the PMD patch is for net/mlx5 only; 'tx_phy_affinity' can be used by the
HW to choose the mapping of a queue to a physical port.
Other NICs ignore this new configuration for now; or should we add a check in
queue setup?

>
> As far as I understand, the target is the application controlling which
> sub-device is used under the virtual bonding device. Can you please give
> more information on why this is required? Perhaps it can help to provide a
> better/different solution, like adding the ability to use both the bonding
> device and the sub-device for the data path, so the application can use
> whichever it wants. (This is just the first solution I came up with; I am
> not suggesting it as a replacement, but if you can describe the problem
> more, I am sure other people can come up with better solutions.)
>

For example:
There are two physical ports (assume device interfaces eth2 and eth3), and
these two devices are bonded into one interface (assume bond0).
The DPDK application probes/attaches bond0 only (DPDK port id 0). While
sending traffic from the DPDK port, we want to know which physical port
(eth2 or eth3) the packet is sent on.
With the new configuration, a queue can be configured with the underlay
device, so the DPDK application can send the traffic on the desired queue
(a minimal code sketch of this per-queue setup is appended at the end of
this mail).

Adding all devices into DPDK would mean creating multiple Rx/Tx queue
resources on each of them.

> And isn't this against the application being transparent to whether the
> underneath device is a bonding device or an actual device?
>
>
> > This patch maps a DPDK Tx queue with a physical port, by adding a
> > tx_phy_affinity setting in the Tx queue.
> > The affinity number is the physical port ID where packets will be
> > sent.
> > Value 0 means no affinity, and traffic could be routed to any connected
> > physical port; this is the current default behavior.
> >
> > The number of physical ports is reported with rte_eth_dev_info_get().
> >
> > The new tx_phy_affinity field is added into a padding hole of the
> > rte_eth_txconf structure, so the size of rte_eth_txconf stays the same.
> > An ABI check rule needs to be added to avoid a false warning.
> >
> > Add the testpmd command line:
> > testpmd> port config (port_id) txq (queue_id) phy_affinity (value)
> >
> > For example, there are two physical ports connected to a single DPDK
> > port (port id 0); phy_affinity 1 stands for the first physical port
> > and phy_affinity 2 stands for the second physical port.
> > Use the below commands to configure Tx phy affinity per Tx queue:
> > port config 0 txq 0 phy_affinity 1
> > port config 0 txq 1 phy_affinity 1
> > port config 0 txq 2 phy_affinity 2
> > port config 0 txq 3 phy_affinity 2
> >
> > These commands configure Tx queue index 0 and Tx queue index 1 with
> > phy affinity 1: packets sent on Tx queue 0 or Tx queue 1 will go out
> > on the first physical port, and similarly packets sent on Tx queue 2
> > or Tx queue 3 will go out on the second physical port.
> >
> > Signed-off-by: Jiawei Wang <jiaw...@nvidia.com>
> > ---
snip
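
To make the per-queue usage above concrete, below is a minimal application-side
sketch mirroring the testpmd example (queues 0 and 1 bound to physical port 1,
queues 2 and 3 to physical port 2). Note that 'tx_phy_affinity' is the field
proposed by this patch and is not in mainline ethdev; the helper name and the
descriptor count are illustrative only.

#include <rte_ethdev.h>

/* Sketch: bind one Tx queue of the bonding port to a physical port via the
 * proposed 'tx_phy_affinity' field (0 = no affinity, current default). */
static int
txq_setup_with_phy_affinity(uint16_t port_id, uint16_t queue_id,
			    uint8_t phy_affinity)
{
	struct rte_eth_dev_info dev_info;
	struct rte_eth_txconf txconf;
	int ret;

	ret = rte_eth_dev_info_get(port_id, &dev_info);
	if (ret != 0)
		return ret;

	/* Start from the PMD's default Tx config and only set the
	 * proposed affinity field. */
	txconf = dev_info.default_txconf;
	txconf.tx_phy_affinity = phy_affinity;

	return rte_eth_tx_queue_setup(port_id, queue_id, 512 /* nb_desc */,
				      rte_eth_dev_socket_id(port_id), &txconf);
}

/* Usage matching the commit message example on DPDK port 0:
 *   txq_setup_with_phy_affinity(0, 0, 1);
 *   txq_setup_with_phy_affinity(0, 1, 1);
 *   txq_setup_with_phy_affinity(0, 2, 2);
 *   txq_setup_with_phy_affinity(0, 3, 2);
 */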