02/02/2023 10:28, Andrew Rybchenko:
> On 2/1/23 18:50, Jiawei(Jonny) Wang wrote:
> > From: Andrew Rybchenko <andrew.rybche...@oktetlabs.ru>
> >> On 1/30/23 20:00, Jiawei Wang wrote:
> >>> Adds the new tx_phy_affinity field into the padding hole of the
> >>> rte_eth_txconf structure; the size of rte_eth_txconf stays the same.
> >>> Adds a suppress type for the structure change in the ABI check file.
> >>>
> >>> This patch adds the testpmd command line:
> >>> testpmd> port config (port_id) txq (queue_id) phy_affinity (value)
> >>>
> >>> For example, with two hardware ports 0 and 1 connected to a single
> >>> DPDK port (port id 0), where phy_affinity 1 stands for hardware port 0
> >>> and phy_affinity 2 stands for hardware port 1, use the commands below
> >>> to configure Tx phy affinity per Tx queue:
> >>> port config 0 txq 0 phy_affinity 1
> >>> port config 0 txq 1 phy_affinity 1
> >>> port config 0 txq 2 phy_affinity 2
> >>> port config 0 txq 3 phy_affinity 2
> >>>
> >>> These commands configure TxQ index 0 and TxQ index 1 with phy
> >>> affinity 1: packets sent on TxQ 0 or TxQ 1 will leave through
> >>> hardware port 0, and similarly through hardware port 1 when sent
> >>> on TxQ 2 or TxQ 3.
> >>
> >> Frankly speaking I dislike it. Why do we need to expose it on the
> >> generic ethdev layer? IMHO a dynamic mbuf field would be a better
> >> solution to control Tx routing to a specific PHY port.
The design of this patch is to map a queue of the front device to an
underlying port. This design may be applicable to several situations,
including the DPDK bonding PMD, or Linux bonding connected to a PMD.
The default value 0 means the queue is not mapped to anything (no change).
If the affinity is higher than 0, the queue can be configured as desired.
Then, if an application wants to send a packet to a specific underlying
port, it just has to send it to the right queue.

Functionally, mapping the queue or setting the port in the mbuf (your
proposal) are equivalent. The advantages of the queue mapping are:
- it is faster to use a queue than to fill an mbuf field
- optimizations can be done at queue setup

[...]
> Why should these queues be visible to the DPDK application?
> Nobody denies you to create many HW queues behind one ethdev
> queue. Of course, there are questions related to the descriptor status
> API in this case, but IMHO it would be better than exposing
> these details to an application level.

Why not map the queues if the application requires these details?

> >> IMHO, we definitely need dev_info information about the number of
> >> physical ports behind.

Yes, dev_info would be needed.

> >> Advertising a value greater than 0 should mean that the PMD supports
> >> the corresponding mbuf dynamic field to control the outgoing physical
> >> port on Tx (or should just reject packets on prepare which try to
> >> specify an outgoing phy port otherwise). In the same way the
> >> information may be provided on Rx.
> >
> > See above, I think phy affinity is at the queue level, not per packet.
> >
> >> I'm OK to have 0 as the "no phy affinity" value and greater than zero
> >> as a specified phy affinity, i.e. no dynamic flag is required.
> >
> > Thanks for the agreement.
> >
> >> Also I think that the order of patches should be different.
> >> We should start from a patch which provides dev_info and flow API
> >> matching, and the action should be in a later patch.
> >
> > OK.