On 1/12/2022 7:23 AM, Nobuhiro MIKI wrote:
Users can create the desired numbers of RxQs and TxQs in DPDK. For
example, with 2 RxQs and 5 TxQs, a total of 8 file descriptors are
created for a tap device: one per RxQ, one per TxQ, and one for
keepalive. An RxQ and a TxQ with the same ID are paired via dup(2).
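
For illustration, a minimal standalone sketch (not the PMD code; the
device name and queue count below are made up) of how one fd per queue
is obtained for a multi-queue tap device and how a TxQ can share its
RxQ's fd via dup(2):

#include <fcntl.h>
#include <net/if.h>
#include <linux/if_tun.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
        struct ifreq ifr;
        int rx_fd[2], tx_fd[2];
        int q;

        memset(&ifr, 0, sizeof(ifr));
        strncpy(ifr.ifr_name, "dtap_example", IFNAMSIZ - 1);
        ifr.ifr_flags = IFF_TAP | IFF_NO_PI | IFF_MULTI_QUEUE;

        for (q = 0; q < 2; q++) {
                /* each open() + TUNSETIFF attaches one more queue to the device */
                rx_fd[q] = open("/dev/net/tun", O_RDWR);
                if (rx_fd[q] < 0 || ioctl(rx_fd[q], TUNSETIFF, &ifr) < 0) {
                        perror("tap queue setup");
                        return 1;
                }
                /* the TxQ with the same ID reuses the RxQ's fd through dup(2) */
                tx_fd[q] = dup(rx_fd[q]);
                if (tx_fd[q] < 0) {
                        perror("dup");
                        return 1;
                }
        }

        printf("queue fds: rx0=%d tx0=%d rx1=%d tx1=%d\n",
               rx_fd[0], tx_fd[0], rx_fd[1], tx_fd[1]);
        return 0;
}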

In this scenario, the kernel ends up with 3 RxQs on which packets
arrive but are never read, because DPDK polls only 2 RxQs while the
kernel side has 5 queues. This patch adds a check that DPDK is
configured with equal numbers of Rx and Tx queues, to avoid this
unexpected packet drop.
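
As a hedged usage sketch (not part of the patch; the port id, queue
counts and helper name are made up), an application that configures a
tap port with the mismatched queue counts from the example above now
gets an error back from rte_eth_dev_configure() instead of silently
losing packets:

#include <string.h>
#include <rte_ethdev.h>

/* hypothetical helper: configure a tap port with 2 RxQs and 5 TxQs */
static int
configure_tap_port(uint16_t port_id)
{
        struct rte_eth_conf conf;

        memset(&conf, 0, sizeof(conf));

        /* with this patch the tap PMD rejects the mismatch, so the call
         * fails instead of leaving 3 kernel queues that nobody polls */
        return rte_eth_dev_configure(port_id, 2 /* rx */, 5 /* tx */, &conf);
}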

Signed-off-by: Nobuhiro MIKI <nm...@yahoo-corp.jp>


It makes sense to add this check, since the driver logic already seems
to assume that the Rx and Tx queue numbers are the same.

But can you please update the tap documentation,
'doc/guides/nics/tap.rst', to mention this limitation/restriction?


---
v2: fix commit message

I first discussed this issue in OVS [1], but came to think that a fix
in DPDK would be more appropriate.
[1]: https://mail.openvswitch.org/pipermail/ovs-dev/2021-November/389690.html
---
  drivers/net/tap/rte_eth_tap.c | 8 ++++++++
  1 file changed, 8 insertions(+)

diff --git a/drivers/net/tap/rte_eth_tap.c b/drivers/net/tap/rte_eth_tap.c
index 5bb472f1a6..02eb311e09 100644
--- a/drivers/net/tap/rte_eth_tap.c
+++ b/drivers/net/tap/rte_eth_tap.c
@@ -940,6 +940,14 @@ tap_dev_configure(struct rte_eth_dev *dev)
                        RTE_PMD_TAP_MAX_QUEUES);
                return -1;
        }
+       if (dev->data->nb_rx_queues != dev->data->nb_tx_queues) {
+               TAP_LOG(ERR,
+                       "%s: number of rx queues %d must be equal to number of tx queues %d",
+                       dev->device->name,
+                       dev->data->nb_rx_queues,
+                       dev->data->nb_tx_queues);
+               return -1;
+       }
        TAP_LOG(INFO, "%s: %s: TX configured queues number: %u",
                dev->device->name, pmd->name, dev->data->nb_tx_queues);
