On 5/6/22 01:00, Pankaj Gupta wrote:
vmxnet3 version 6 supports some new features, including but
not limited to:
- Increased max MTU up to 9190
- Increased max number of queues, both for Rx and Tx
- Removed power-of-two limitations
- Extended interrupt structures, required to support the
   additional number of queues

Tested with testpmd for different hardware versions on
ESXi 7.0 Update 2.

Signed-off-by: Pankaj Gupta <pagu...@vmware.com>
Reviewed-by: Jochen Behrens <jbehr...@vmware.com>

[snip]

@@ -1377,9 +1428,30 @@ vmxnet3_dev_info_get(struct rte_eth_dev *dev,
                     struct rte_eth_dev_info *dev_info)
  {
        struct vmxnet3_hw *hw = dev->data->dev_private;
+       int queues = 0;
+
+       if (VMXNET3_VERSION_GE_6(hw)) {
+               VMXNET3_WRITE_BAR1_REG(hw, VMXNET3_REG_CMD,
+                                      VMXNET3_CMD_GET_MAX_QUEUES_CONF);
+               queues = VMXNET3_READ_BAR1_REG(hw, VMXNET3_REG_CMD);
+
+               if (queues > 0) {
+#ifndef MIN
+#define MIN(x, y) (((x) < (y)) ? (x) : (y))
+#endif

checkpatches.sh produces a warning here. Can we use RTE_MIN()
instead below?
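
Untested sketch of what I had in mind, assuming rte_common.h (which
provides RTE_MIN) is already pulled in by the driver's existing
includes; this would replace the two assignments below:

                        dev_info->max_rx_queues =
                                RTE_MIN(VMXNET3_EXT_MAX_RX_QUEUES,
                                        (queues >> 8) & 0xff);
                        dev_info->max_tx_queues =
                                RTE_MIN(VMXNET3_EXT_MAX_TX_QUEUES,
                                        queues & 0xff);

Same behavior as the patch (high byte of the GET_MAX_QUEUES_CONF
result caps Rx, low byte caps Tx), just without the local MIN macro.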

+                       dev_info->max_rx_queues =
+                         MIN(VMXNET3_EXT_MAX_RX_QUEUES, ((queues >> 8) & 0xff));
+                       dev_info->max_tx_queues =
+                         MIN(VMXNET3_EXT_MAX_TX_QUEUES, (queues & 0xff));
+               } else {
+                       dev_info->max_rx_queues = VMXNET3_MAX_RX_QUEUES;
+                       dev_info->max_tx_queues = VMXNET3_MAX_TX_QUEUES;
+               }
+       } else {
+               dev_info->max_rx_queues = VMXNET3_MAX_RX_QUEUES;
+               dev_info->max_tx_queues = VMXNET3_MAX_TX_QUEUES;
+       }
-       dev_info->max_rx_queues = VMXNET3_MAX_RX_QUEUES;
-       dev_info->max_tx_queues = VMXNET3_MAX_TX_QUEUES;
        dev_info->min_rx_bufsize = 1518 + RTE_PKTMBUF_HEADROOM;
        dev_info->max_rx_pktlen = 16384; /* includes CRC, cf MAXFRS register */
        dev_info->min_mtu = VMXNET3_MIN_MTU;

[snip]
