Hi Jiawei,

> -----Original Message-----
> From: Jiawei(Jonny) Wang <jiaw...@nvidia.com>
> Sent: Wednesday, 21 December 2022 12:30
> 
> For multiple hardware ports connected to a single DPDK port (mhpsdp),
> the previous patch introduces a new rte_flow item to match the port
> affinity of received packets.
> 
> This patch adds a tx_affinity setting to the Tx queue API; the affinity
> value indicates which hardware port the packets are sent through.
> 
> The new tx_affinity field is added into a padding hole of the
> rte_eth_txconf structure, so the size of rte_eth_txconf stays the same.
> A suppression rule for the structure change is added to the ABI check
> file.
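>
> A minimal usage sketch from the application side (assuming the new
> field is a small unsigned integer named tx_affinity as above; port_id,
> nb_txd and socket_id are placeholders, error handling omitted):
>
>       struct rte_eth_dev_info dev_info;
>       struct rte_eth_txconf txconf;
>
>       /* Start from the device's default Tx queue configuration. */
>       rte_eth_dev_info_get(port_id, &dev_info);
>       txconf = dev_info.default_txconf;
>
>       /* Pin Tx queue 0 to hardware port 1 of the mhpsdp device. */
>       txconf.tx_affinity = 1;
>       rte_eth_tx_queue_setup(port_id, 0, nb_txd, socket_id, &txconf);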
> 
> This patch adds the following testpmd command:
> testpmd> port config (port_id) txq (queue_id) affinity (value)
> 
> For example, if two hardware ports connect to a single DPDK port
> (port id 0), and affinity 1 stands for hardware port 1 while affinity 2
> stands for hardware port 2, the commands below configure the Tx
> affinity for each TxQ:
>       port config 0 txq 0 affinity 1
>       port config 0 txq 1 affinity 1
>       port config 0 txq 2 affinity 2
>       port config 0 txq 3 affinity 2
> 
> These commands configure TxQ index 0 and TxQ index 1 with affinity 1;
> packets sent on TxQ 0 or TxQ 1 will leave through hardware port 1, and
> likewise through hardware port 2 when sending on TxQ 2 or TxQ 3.
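>
> After this setup the hardware port is selected implicitly by the queue
> id passed to rte_eth_tx_burst(); a sketch (pkts and nb_pkts are
> placeholders):
>
>       /* TxQ 3 has affinity 2, so these packets leave via hardware port 2. */
>       uint16_t sent = rte_eth_tx_burst(0, 3, pkts, nb_pkts);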
> 
> Signed-off-by: Jiawei Wang <jiaw...@nvidia.com>
> ---

Acked-by: Ori Kam <or...@nvidia.com>
Best,
Ori
