Hi Konstantin,

Thanks for your comments! 
Do you mean that tx_q must be less than rx_q when SRIOV is active, and that
otherwise the application's use case will not be supported?
Do you think my patch could cause 2 (or more) cores to try to TX packets
through the same TX queue? As far as I know, how the cores use the TX queues
depends on the application (e.g. in l3fwd, tx_q equals the number of cores),
and having multiple cores share the same TX queue is not recommended, since a
lock would be needed in that situation. So why do you think my patch will lead
to multiple cores using the same queue?
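The pattern I have in mind is roughly this (illustrative only; lcore_to_txq
is a hypothetical per-core mapping, not the actual l3fwd code):

    /* Each lcore owns exactly one TX queue, so no locking is needed. */
    uint16_t txq = lcore_to_txq[rte_lcore_id()];
    uint16_t sent = rte_eth_tx_burst(port_id, txq, pkts, nb_pkts);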

Yanglong Wu

-----Original Message-----
From: Ananyev, Konstantin 
Sent: Monday, January 8, 2018 7:55 PM
To: Wu, Yanglong <yanglong...@intel.com>; dev@dpdk.org
Subject: RE: [PATCH v5] net/ixgbe: fix l3fwd start failed on



> -----Original Message-----
> From: Wu, Yanglong
> Sent: Monday, January 8, 2018 3:06 AM
> To: dev@dpdk.org
> Cc: Ananyev, Konstantin <konstantin.anan...@intel.com>; Wu, Yanglong 
> <yanglong...@intel.com>
> Subject: [PATCH v5] net/ixgbe: fix l3fwd start failed on
> 
> L3fwd fails to start on the PF because the tx_q check fails.
> That occurs when SRIOV is active and tx_q > rx_q.
> The tx_q value is checked against nb_q_per_pool. nb_q_per_pool should
> equal the maximum number of queues supported by the HW, not nb_rx_q.
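> 
> For example, assuming the 82599's 128 hardware queues
> (IXGBE_MAX_RX_QUEUE_NUM below) and 64-pool SRIOV mode:
> 
>     nb_q_per_pool  = 128 / 64 = 2
>     def_pool_q_idx = max_vfs * nb_q_per_pool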

But then 2 (or more) cores could try to TX packets through the same TX queue?
Why not just fail to start gracefully (call rte_exit() or so) if such a
situation occurs?
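Something along these lines (just a sketch, assuming a nb_tx_q parameter next
to the existing nb_rx_q check):

    if (nb_tx_q > RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool) {
            PMD_INIT_LOG(ERR, "SRIOV active: nb_tx_q (%u) exceeds "
                    "nb_q_per_pool (%u).",
                    nb_tx_q, RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool);
            return -EINVAL;
    }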
Konstantin

> 
> Fixes: 27b609cbd1c6 ("ethdev: move the multi-queue mode check to specific drivers")
> 
> Signed-off-by: Yanglong Wu <yanglong...@intel.com>
> ---
> v5:
> Rework according to comments
> ---
>  drivers/net/ixgbe/ixgbe_ethdev.c | 10 +++++++---
>  1 file changed, 7 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
> index ff19a564a..baaeee5d9 100644
> --- a/drivers/net/ixgbe/ixgbe_ethdev.c
> +++ b/drivers/net/ixgbe/ixgbe_ethdev.c
> @@ -95,6 +95,9 @@
>  /* Timer value included in XOFF frames. */
>  #define IXGBE_FC_PAUSE 0x680
> 
> +/* Default value of Max Rx Queue */
> +#define IXGBE_MAX_RX_QUEUE_NUM 128
> +
>  #define IXGBE_LINK_DOWN_CHECK_TIMEOUT 4000 /* ms */
>  #define IXGBE_LINK_UP_CHECK_TIMEOUT   1000 /* ms */
>  #define IXGBE_VMDQ_NUM_UC_MAC         4096 /* Maximum nb. of UC MAC addr. */
> @@ -2194,9 +2197,10 @@ ixgbe_check_vf_rss_rxq_num(struct rte_eth_dev *dev, uint16_t nb_rx_q)
>               return -EINVAL;
>       }
> 
> -     RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool = nb_rx_q;
> -     RTE_ETH_DEV_SRIOV(dev).def_pool_q_idx = pci_dev->max_vfs * nb_rx_q;
> -
> +     RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool =
> +             IXGBE_MAX_RX_QUEUE_NUM / RTE_ETH_DEV_SRIOV(dev).active;
> +     RTE_ETH_DEV_SRIOV(dev).def_pool_q_idx =
> +             pci_dev->max_vfs * RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool;
>       return 0;
>  }
> 
> --
> 2.11.0
