[dpdk-dev] [PATCH v2] stack: fix reload head when pop fails
The previous commit 18effad9cfa7 ("stack: reload head when pop fails")
only changed the C11 implementation, not the generic implementation.

The list head must be reloaded right before the continue statement
(when the thread fails to find the new head). Without this, one thread
might keep trying and failing to pop items without ever loading the
new correct head.

Fixes: 3340202f5954 ("stack: add lock-free implementation")
Cc: sta...@dpdk.org

Signed-off-by: Julien Meunier
---
v2:
* rebase
* update commit log + remove invalid CC email address

 lib/stack/rte_stack_lf_generic.h | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/lib/stack/rte_stack_lf_generic.h b/lib/stack/rte_stack_lf_generic.h
index 4850a05ee7..7fa29cedb2 100644
--- a/lib/stack/rte_stack_lf_generic.h
+++ b/lib/stack/rte_stack_lf_generic.h
@@ -128,8 +128,10 @@ __rte_stack_lf_pop_elems(struct rte_stack_lf_list *list,
 		/* If NULL was encountered, the list was modified while
 		 * traversing it. Retry.
 		 */
-		if (i != num)
+		if (i != num) {
+			old_head = list->head;
 			continue;
+		}
 
 		new_head.top = tmp;
 		new_head.cnt = old_head.cnt + 1;
-- 
2.17.1
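The retry path the patch fixes can be sketched with a minimal Treiber-stack pop in C11 atomics. This is a simplified model (single-word head, no ABA counter, none of the DPDK APIs), not the actual `rte_stack` code; the point it illustrates is that every retry must restart from a fresh view of `head`:

```c
/* Minimal Treiber-stack pop illustrating the fix: every retry path
 * must reload the observed head. Simplified model -- no 128-bit
 * head+counter, no ABA protection, not the actual rte_stack code. */
#include <stdatomic.h>
#include <stddef.h>

struct node { struct node *next; int value; };
struct lf_stack { _Atomic(struct node *) head; };

static void lf_push(struct lf_stack *s, struct node *n)
{
	struct node *old = atomic_load(&s->head);
	do {
		n->next = old;
	} while (!atomic_compare_exchange_weak(&s->head, &old, n));
}

static struct node *lf_pop(struct lf_stack *s)
{
	struct node *old = atomic_load(&s->head);
	for (;;) {
		if (old == NULL)
			return NULL;
		/* On failure, atomic_compare_exchange_weak stores the
		 * current head into 'old': this is the "reload head"
		 * step the generic DPDK implementation was missing on
		 * its explicit continue path. */
		if (atomic_compare_exchange_weak(&s->head, &old, old->next))
			return old;
	}
}
```

With `atomic_compare_exchange_weak`, a failed CAS writes the current head back into `old`; the generic implementation's explicit `continue` path had no such reload before the patch, so a contending thread could spin forever on a stale head.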
[dpdk-dev] [PATCH] net/ixgbe: fix RxQ/TxQ release
On the vector implementation, during the tear-down, the mbufs not
drained in the RxQ and TxQ are freed based on an algorithm which
assumed that the number of descriptors is a power of 2 (max_desc).
Based on this hypothesis, this algorithm uses a bitmask in order to
detect an index overflow during the iteration, and to restart the
loop from 0.

However, the ixgbe has no such power-of-2 requirement for the number
of descriptors in the RxQ / TxQ. The only requirement is to have a
number correctly aligned.

If a user requested a number of descriptors which is not a power of 2,
the tear-down could end up in an infinite loop, never reaching the
exit condition.

By removing the bitmask and changing the loop method, we can avoid
this issue, and allow the user to configure a RxQ / TxQ which is not
a power of 2.

Fixes: c95584dc2b18 ("ixgbe: new vectorized functions for Rx/Tx")
Cc: bruce.richard...@intel.com
Cc: sta...@dpdk.org

Signed-off-by: Julien Meunier
---
 drivers/net/ixgbe/ixgbe_rxtx_vec_common.h | 20 +++++++++++++-------
 1 file changed, 13 insertions(+), 7 deletions(-)

diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
index adba855ca3..8912558918 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
@@ -150,11 +150,14 @@ _ixgbe_tx_queue_release_mbufs_vec(struct ixgbe_tx_queue *txq)
 		return;
 
 	/* release the used mbufs in sw_ring */
-	for (i = txq->tx_next_dd - (txq->tx_rs_thresh - 1);
-	     i != txq->tx_tail;
-	     i = (i + 1) & max_desc) {
+	i = txq->tx_next_dd - (txq->tx_rs_thresh - 1);
+	while (i != txq->tx_tail) {
 		txe = &txq->sw_ring_v[i];
 		rte_pktmbuf_free_seg(txe->mbuf);
+
+		i = i + 1;
+		if (i > max_desc)
+			i = 0;
 	}
 	txq->nb_tx_free = max_desc;
 
@@ -168,7 +171,7 @@ _ixgbe_tx_queue_release_mbufs_vec(struct ixgbe_tx_queue *txq)
 static inline void
 _ixgbe_rx_queue_release_mbufs_vec(struct ixgbe_rx_queue *rxq)
 {
-	const unsigned int mask = rxq->nb_rx_desc - 1;
+	const unsigned int max_desc = rxq->nb_rx_desc - 1;
 	unsigned int i;
 
 	if (rxq->sw_ring == NULL || rxq->rxrearm_nb >= rxq->nb_rx_desc)
@@ -181,11 +184,14 @@ _ixgbe_rx_queue_release_mbufs_vec(struct ixgbe_rx_queue *rxq)
 			rte_pktmbuf_free_seg(rxq->sw_ring[i].mbuf);
 		}
 	} else {
-		for (i = rxq->rx_tail;
-		     i != rxq->rxrearm_start;
-		     i = (i + 1) & mask) {
+		i = rxq->rx_tail;
+		while (i != rxq->rxrearm_start) {
 			if (rxq->sw_ring[i].mbuf != NULL)
 				rte_pktmbuf_free_seg(rxq->sw_ring[i].mbuf);
+
+			i = i + 1;
+			if (i > max_desc)
+				i = 0;
 		}
 	}
 
-- 
2.17.1
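The core of the bug is arithmetic: `(i + 1) & (nb_desc - 1)` is a correct wrap-around only when `nb_desc` is a power of two. A standalone sketch (hypothetical helper names, not driver code) contrasts the broken masked increment with the explicit compare the patch uses:

```c
/* Hypothetical helpers, not part of the ixgbe driver. */

/* Old scheme: only wraps correctly for power-of-2 ring sizes. */
static unsigned int next_mask(unsigned int i, unsigned int nb_desc)
{
	return (i + 1) & (nb_desc - 1);
}

/* Patch scheme: explicit compare, wraps for any ring size. */
static unsigned int next_wrap(unsigned int i, unsigned int nb_desc)
{
	i = i + 1;
	if (i > nb_desc - 1)
		i = 0;
	return i;
}
```

For `nb_desc = 1536`, the masked increment jumps from 1535 to 1024 instead of 0, so the iterator cycles over 1024..1535 forever and an exit index in 0..1023 is never reached: this is the infinite loop the commit message describes.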
Re: [dpdk-dev] [PATCH] net/ixgbe: fix RxQ/TxQ release
Hello,

On 28/09/2021 05:21, Wang, Haiyue wrote:
>> -----Original Message-----
>> From: Wang, Haiyue
>> Sent: Tuesday, September 28, 2021 11:06
>> To: 'Julien Meunier' ; dev@dpdk.org
>> Cc: sta...@dpdk.org; Richardson, Bruce
>> Subject: RE: [PATCH] net/ixgbe: fix RxQ/TxQ release
>>
>>> -----Original Message-----
>>> From: Julien Meunier
>>> Sent: Tuesday, September 28, 2021 01:18
>>> To: dev@dpdk.org
>>> Cc: sta...@dpdk.org; Richardson, Bruce ; Wang, Haiyue
>>> Subject: [PATCH] net/ixgbe: fix RxQ/TxQ release
>>>
>>> On the vector implementation, during the tear-down, the mbufs not
>>> drained in the RxQ and TxQ are freed based on an algorithm which
>>> supposed that the number of descriptors is a power of 2 (max_desc).
>>> Based on this hypothesis, this algorithm uses a bitmask in order to
>>> detect an index overflow during the iteration, and to restart the
>>> loop from 0.
>>>
>>> However, there is no such power of 2 requirement in the ixgbe for
>>> the number of descriptors in the RxQ / TxQ. The only requirement is
>>> to have a number correctly aligned.
>>>
>>> If a user requested to configure a number of descriptors which is
>>> not a power of 2, as a consequence, during the tear-down, it was
>>> possible to be in an infinite loop, and to never reach the exit
>>> loop condition.
>>
>> Are you able to setup not a power of 2 successfully ?
>
> My fault, yes, possible. ;-)

Yes, we have some usecases where the nb of descriptors for the TxQ is
set to 1536.

I modified test_pmd_perf in order to validate this behavior, as my
ixgbe X550 supports the loopback mode:
- nb_desc = 2048 => txq is drained and stopped correctly
- nb_desc = 1536 => freeze during the teardown

int
rte_eth_tx_queue_setup(uint16_t port_id, uint16_t tx_queue_id,
		       uint16_t nb_tx_desc, unsigned int socket_id,
		       const struct rte_eth_txconf *tx_conf)
{
	...
	if (nb_tx_desc > dev_info.tx_desc_lim.nb_max ||
	    nb_tx_desc < dev_info.tx_desc_lim.nb_min ||
	    nb_tx_desc % dev_info.tx_desc_lim.nb_align != 0) {
		RTE_ETHDEV_LOG(ERR,
			"Invalid value for nb_tx_desc(=%hu), should be: <= %hu, >= %hu, and a product of %hu\n",
			nb_tx_desc, dev_info.tx_desc_lim.nb_max,
			dev_info.tx_desc_lim.nb_min,
			dev_info.tx_desc_lim.nb_align);
		return -EINVAL;
	}
	...
}

>>> By removing the bitmask and changing the loop method, we can avoid
>>> this issue, and allow the user to configure a RxQ / TxQ which is
>>> not a power of 2.
>>>
>>> Fixes: c95584dc2b18 ("ixgbe: new vectorized functions for Rx/Tx")
>>> Cc: bruce.richard...@intel.com
>>> Cc: sta...@dpdk.org
>>>
>>> Signed-off-by: Julien Meunier
>>> ---
>>> @@ -150,11 +150,14 @@ _ixgbe_tx_queue_release_mbufs_vec(struct ixgbe_tx_queue *txq)
>>>  		return;
>>>
>>>  	/* release the used mbufs in sw_ring */
>>> -	for (i = txq->tx_next_dd - (txq->tx_rs_thresh - 1);
>>> -	     i != txq->tx_tail;
>>> -	     i = (i + 1) & max_desc) {
>>> +	i = txq->tx_next_dd - (txq->tx_rs_thresh - 1);
>>> +	while (i != txq->tx_tail) {
>>>  		txe = &txq->sw_ring_v[i];
>>>  		rte_pktmbuf_free_seg(txe->mbuf);
>>> +
>>> +		i = i + 1;
>>> +		if (i > max_desc)
>>> +			i = 0;
>>>  	}
>>> [...]
>>
>> Just one line ?
>>
>>	i = (i + 1) % txq->nb_tx_desc

Ah yes, I was too focused with this bitmask... The shorter, the better.
I will send a V2 today. Thanks for this feedback !

-- 
Julien Meunier
[dpdk-dev] [PATCH v2] net/ixgbe: fix RxQ/TxQ release
On the vector implementation, during the tear-down, the mbufs not
drained in the RxQ and TxQ are freed based on an algorithm which
assumed that the number of descriptors is a power of 2 (max_desc).
Based on this hypothesis, this algorithm uses a bitmask in order to
detect an index overflow during the iteration, and to restart the
loop from 0.

However, the ixgbe has no such power-of-2 requirement for the number
of descriptors in the RxQ / TxQ. The only requirement is to have a
number correctly aligned.

If a user requested a number of descriptors which is not a power of 2,
the tear-down could end up in an infinite loop, never reaching the
exit condition.

By removing the bitmask and changing the loop method, we can avoid
this issue, and allow the user to configure a RxQ / TxQ which is not
a power of 2.

Fixes: c95584dc2b18 ("ixgbe: new vectorized functions for Rx/Tx")
Cc: bruce.richard...@intel.com
Cc: sta...@dpdk.org

Signed-off-by: Julien Meunier
---
 drivers/net/ixgbe/ixgbe_rxtx_vec_common.h | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
index adba855ca3..005e60668a 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
@@ -152,7 +152,7 @@ _ixgbe_tx_queue_release_mbufs_vec(struct ixgbe_tx_queue *txq)
 	/* release the used mbufs in sw_ring */
 	for (i = txq->tx_next_dd - (txq->tx_rs_thresh - 1);
 	     i != txq->tx_tail;
-	     i = (i + 1) & max_desc) {
+	     i = (i + 1) % txq->nb_tx_desc) {
 		txe = &txq->sw_ring_v[i];
 		rte_pktmbuf_free_seg(txe->mbuf);
 	}
@@ -168,7 +168,6 @@ _ixgbe_tx_queue_release_mbufs_vec(struct ixgbe_tx_queue *txq)
 static inline void
 _ixgbe_rx_queue_release_mbufs_vec(struct ixgbe_rx_queue *rxq)
 {
-	const unsigned int mask = rxq->nb_rx_desc - 1;
 	unsigned int i;
 
 	if (rxq->sw_ring == NULL || rxq->rxrearm_nb >= rxq->nb_rx_desc)
@@ -183,7 +182,7 @@ _ixgbe_rx_queue_release_mbufs_vec(struct ixgbe_rx_queue *rxq)
 	} else {
 		for (i = rxq->rx_tail;
 		     i != rxq->rxrearm_start;
-		     i = (i + 1) & mask) {
+		     i = (i + 1) % rxq->nb_rx_desc) {
 			if (rxq->sw_ring[i].mbuf != NULL)
 				rte_pktmbuf_free_seg(rxq->sw_ring[i].mbuf);
 		}
-- 
2.17.1
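The one-line modulo wrap suggested in review works for any ring size; a minimal sketch (hypothetical helper, not the driver code):

```c
/* Hypothetical helper, not part of the ixgbe driver: modulo-based
 * ring-index increment, valid for any nb_desc, power of 2 or not. */
static unsigned int next_mod(unsigned int i, unsigned int nb_desc)
{
	return (i + 1) % nb_desc;
}
```

Modulo costs an integer division where the mask cost an AND, but this runs only on the queue-release (teardown) path, so the simpler, always-correct form is the right trade-off here.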
[PATCH] net/fm10k: fix cleanup during init failure
Cleanup was not done on this PMD if an error is seen during the init:
- possible memory leak due to a missing free
- interrupt handler was not disabled: if an IRQ is received after the
  init, a SIGSEGV can be seen (private data stored in
  rte_eth_devices[port_id] is pointing to NULL)

Fixes: a6061d9e7075 ("fm10k: register PF driver")
Fixes: 4c287332c39a ("fm10k: add PF and VF interrupt handling")
Cc: sta...@dpdk.org

Signed-off-by: Julien Meunier
---
 drivers/net/fm10k/fm10k_ethdev.c | 39 +++++++++++++++++++++++++++-----
 1 file changed, 33 insertions(+), 6 deletions(-)

diff --git a/drivers/net/fm10k/fm10k_ethdev.c b/drivers/net/fm10k/fm10k_ethdev.c
index fa0d16277e..7b490bea17 100644
--- a/drivers/net/fm10k/fm10k_ethdev.c
+++ b/drivers/net/fm10k/fm10k_ethdev.c
@@ -3058,7 +3058,7 @@ eth_fm10k_dev_init(struct rte_eth_dev *dev)
 	struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	struct rte_pci_device *pdev = RTE_ETH_DEV_TO_PCI(dev);
 	struct rte_intr_handle *intr_handle = pdev->intr_handle;
-	int diag, i;
+	int diag, i, ret;
 	struct fm10k_macvlan_filter_info *macvlan;
 
 	PMD_INIT_FUNC_TRACE();
@@ -3147,21 +3147,24 @@ eth_fm10k_dev_init(struct rte_eth_dev *dev)
 	diag = fm10k_stats_reset(dev);
 	if (diag != 0) {
 		PMD_INIT_LOG(ERR, "Stats reset failed: %d", diag);
-		return diag;
+		ret = diag;
+		goto err_stat;
 	}
 
 	/* Reset the hw */
 	diag = fm10k_reset_hw(hw);
 	if (diag != FM10K_SUCCESS) {
 		PMD_INIT_LOG(ERR, "Hardware reset failed: %d", diag);
-		return -EIO;
+		ret = -EIO;
+		goto err_reset_hw;
 	}
 
 	/* Setup mailbox service */
 	diag = fm10k_setup_mbx_service(hw);
 	if (diag != FM10K_SUCCESS) {
 		PMD_INIT_LOG(ERR, "Failed to setup mailbox: %d", diag);
-		return -EIO;
+		ret = -EIO;
+		goto err_mbx;
 	}
 
 	/*PF/VF has different interrupt handling mechanism */
@@ -3200,7 +3203,8 @@ eth_fm10k_dev_init(struct rte_eth_dev *dev)
 
 		if (switch_ready == false) {
 			PMD_INIT_LOG(ERR, "switch is not ready");
-			return -1;
+			ret = -1;
+			goto err_switch_ready;
 		}
 	}
 
@@ -3235,7 +3239,8 @@ eth_fm10k_dev_init(struct rte_eth_dev *dev)
 
 		if (!hw->mac.default_vid) {
 			PMD_INIT_LOG(ERR, "default VID is not ready");
-			return -1;
+			ret = -1;
+			goto err_vid;
 		}
 	}
 
@@ -3244,6 +3249,28 @@ eth_fm10k_dev_init(struct rte_eth_dev *dev)
 			MAIN_VSI_POOL_NUMBER);
 
 	return 0;
+
+err_vid:
+err_switch_ready:
+	rte_intr_disable(intr_handle);
+
+	if (hw->mac.type == fm10k_mac_pf) {
+		fm10k_dev_disable_intr_pf(dev);
+		rte_intr_callback_unregister(intr_handle,
+			fm10k_dev_interrupt_handler_pf, (void *)dev);
+	} else {
+		fm10k_dev_disable_intr_vf(dev);
+		rte_intr_callback_unregister(intr_handle,
+			fm10k_dev_interrupt_handler_vf, (void *)dev);
+	}
+
+err_mbx:
+err_reset_hw:
+err_stat:
+	rte_free(dev->data->mac_addrs);
+	dev->data->mac_addrs = NULL;
+
+	return ret;
 }
 
 static int
-- 
2.34.1
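The unwinding pattern the patch introduces, one label per acquired resource, fallen through in reverse acquisition order, can be sketched independently of fm10k (stub step functions and hypothetical names, not the driver's code):

```c
/* Sketch of goto-label error unwinding: each failure point jumps to
 * a label that releases exactly the resources acquired so far.
 * Stub init steps, not the fm10k code. */
#include <stdlib.h>

static int step_ok(void)   { return 0; }   /* e.g. stats reset */
static int step_fail(void) { return -1; }  /* e.g. hw reset, fails */

static int dev_init(char **mac_addrs)
{
	int ret;

	*mac_addrs = malloc(6);          /* resource acquired first... */
	if (*mac_addrs == NULL)
		return -1;

	ret = step_ok();
	if (ret != 0)
		goto err_stat;

	ret = step_fail();               /* failure happens here */
	if (ret != 0)
		goto err_reset_hw;

	return 0;

err_reset_hw:
err_stat:
	free(*mac_addrs);                /* ...released on every path */
	*mac_addrs = NULL;
	return ret;
}
```

Stacking empty labels (`err_reset_hw:` falling through to `err_stat:`) mirrors the patch: later failures unwind everything earlier failures do, plus their own extra cleanup above.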
[dpdk-dev] [PATCH] stack: reload head when pop fails (generic)
The previous fix 18effad9cfa7 ("stack: reload head when pop fails")
only changed the C11 implementation, not the generic implementation.

The list head must be reloaded right before the continue statement
(when the thread fails to find the new head). Without this, one thread
might keep trying and failing to pop items without ever loading the
new correct head.

Fixes: 3340202f5954 ("stack: add lock-free implementation")
Cc: gage.e...@intel.com
Cc: sta...@dpdk.org

Signed-off-by: Julien Meunier
---
 lib/stack/rte_stack_lf_generic.h | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/lib/stack/rte_stack_lf_generic.h b/lib/stack/rte_stack_lf_generic.h
index 4850a05ee7..7fa29cedb2 100644
--- a/lib/stack/rte_stack_lf_generic.h
+++ b/lib/stack/rte_stack_lf_generic.h
@@ -128,8 +128,10 @@ __rte_stack_lf_pop_elems(struct rte_stack_lf_list *list,
 		/* If NULL was encountered, the list was modified while
 		 * traversing it. Retry.
 		 */
-		if (i != num)
+		if (i != num) {
+			old_head = list->head;
 			continue;
+		}
 
 		new_head.top = tmp;
 		new_head.cnt = old_head.cnt + 1;
-- 
2.17.1
[dpdk-dev] [PATCH] test/pmd_perf: change the way to drain the port
If the port has received less than ``pkt_per_port`` packets (for
example, the port has missed some packets), the test is stuck in an
infinite loop.

Instead of expecting a fixed number of packets to receive, let the
port drain by itself. If no more packets are received, the test can
continue.

Fixes: 002ade70e933 ("app/test: measure cycles per packet in Rx/Tx")
Cc: sta...@dpdk.org

Signed-off-by: Julien Meunier
---
 test/test/test_pmd_perf.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/test/test/test_pmd_perf.c b/test/test/test_pmd_perf.c
index f5095c8..286e09d 100644
--- a/test/test/test_pmd_perf.c
+++ b/test/test/test_pmd_perf.c
@@ -493,15 +493,15 @@ main_loop(__rte_unused void *args)
 
 	for (i = 0; i < conf->nb_ports; i++) {
 		portid = conf->portlist[i];
-		int nb_free = pkt_per_port;
+		int nb_free = 0;
 		do { /* dry out */
 			nb_rx = rte_eth_rx_burst(portid, 0,
 						 pkts_burst, MAX_PKT_BURST);
 			nb_tx = 0;
 			while (nb_tx < nb_rx)
 				rte_pktmbuf_free(pkts_burst[nb_tx++]);
-			nb_free -= nb_rx;
-		} while (nb_free != 0);
+			nb_free += nb_rx;
+		} while (nb_rx != 0);
 		printf("free %d mbuf left in port %u\n", pkt_per_port, portid);
 	}
-- 
2.10.2
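The drain-logic change can be modeled with a stub burst function (hypothetical, standing in for `rte_eth_rx_burst`): instead of counting down to a preset total, the loop exits when a burst returns nothing, so missed packets can no longer hang the test.

```c
/* Drain-until-empty loop from the patch, modeled with a fake burst
 * function that returns a fixed number of packets then 0.
 * Hypothetical stub, not the DPDK rx_burst API. */
static int remaining;   /* packets still sitting in the fake queue */

static int fake_rx_burst(int max_burst)
{
	int n = remaining < max_burst ? remaining : max_burst;
	remaining -= n;
	return n;
}

static int drain_port(void)
{
	int nb_free = 0, nb_rx;

	do {                               /* stop when a burst comes back */
		nb_rx = fake_rx_burst(32); /* empty, not when a preset     */
		nb_free += nb_rx;          /* count is reached: missed     */
	} while (nb_rx != 0);              /* packets can't hang the loop  */

	return nb_free;
}
```

With the old `nb_free -= nb_rx; while (nb_free != 0)` form, a queue holding fewer than `pkt_per_port` packets never drives the counter to zero; the accumulate-and-exit-on-empty form terminates for any queue depth.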
[dpdk-dev] [PATCH] net/fm10k: initialize sm_down variable
The sm_down variable was not initialized in fm10k_params_init.

Fixes: 6f22f2f67268 ("net/fm10k: redefine link status semantics")
Cc: sta...@dpdk.org

Signed-off-by: Julien Meunier
---
 drivers/net/fm10k/fm10k_ethdev.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/net/fm10k/fm10k_ethdev.c b/drivers/net/fm10k/fm10k_ethdev.c
index 85fb6c5..caf4d1b 100644
--- a/drivers/net/fm10k/fm10k_ethdev.c
+++ b/drivers/net/fm10k/fm10k_ethdev.c
@@ -3003,6 +3003,7 @@ fm10k_params_init(struct rte_eth_dev *dev)
 	hw->bus.payload = fm10k_bus_payload_256;
 
 	info->rx_vec_allowed = true;
+	info->sm_down = false;
 }
 
 static int
-- 
2.10.2
[dpdk-dev] [PATCH] net/ixgbe: add support of loopback for X540/X550
Loopback mode is also supported on X540 and X550 NICs, according to
their datasheets (section 15.2). The way to set it up is a little
different from the 82599.

Signed-off-by: Julien Meunier
---
 drivers/net/ixgbe/ixgbe_ethdev.c | 10 +++++++---
 drivers/net/ixgbe/ixgbe_ethdev.h |  5 ++---
 drivers/net/ixgbe/ixgbe_rxtx.c   | 47 ++++++++++++++++++++++++++++----
 3 files changed, 49 insertions(+), 13 deletions(-)

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 7493110..7eb3303 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -2652,9 +2652,13 @@ ixgbe_dev_start(struct rte_eth_dev *dev)
 		goto error;
 	}
 
-	/* Skip link setup if loopback mode is enabled for 82599. */
-	if (hw->mac.type == ixgbe_mac_82599EB &&
-	    dev->data->dev_conf.lpbk_mode == IXGBE_LPBK_82599_TX_RX)
+	/* Skip link setup if loopback mode is enabled. */
+	if ((hw->mac.type == ixgbe_mac_82599EB ||
+	     hw->mac.type == ixgbe_mac_X540 ||
+	     hw->mac.type == ixgbe_mac_X550 ||
+	     hw->mac.type == ixgbe_mac_X550EM_x ||
+	     hw->mac.type == ixgbe_mac_X550EM_a) &&
+	    dev->data->dev_conf.lpbk_mode == IXGBE_LPBK_TX_RX)
 		goto skip_link_setup;
 
 	if (ixgbe_is_sfp(hw) && hw->phy.multispeed_fiber) {
		err = hw->mac.ops.setup_sfp(hw);

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index 565c69c..c60a697 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -65,9 +65,8 @@
 #define IXGBE_QUEUE_ITR_INTERVAL_DEFAULT	500 /* 500us */
 
 /* Loopback operation modes */
-/* 82599 specific loopback operation types */
-#define IXGBE_LPBK_82599_NONE	0x0 /* Default value. Loopback is disabled. */
-#define IXGBE_LPBK_82599_TX_RX	0x1 /* Tx->Rx loopback operation is enabled. */
+#define IXGBE_LPBK_NONE		0x0 /* Default value. Loopback is disabled. */
+#define IXGBE_LPBK_TX_RX	0x1 /* Tx->Rx loopback operation is enabled. */
 
 #define IXGBE_MAX_JUMBO_FRAME_SIZE	0x2600 /* Maximum Jumbo frame size. */

diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index 9a79d18..0ef7fdf 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -4879,10 +4879,14 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
 		hlreg0 &= ~IXGBE_HLREG0_JUMBOEN;
 
 	/*
-	 * If loopback mode is configured for 82599, set LPBK bit.
+	 * If loopback mode is configured, set LPBK bit.
 	 */
-	if (hw->mac.type == ixgbe_mac_82599EB &&
-	    dev->data->dev_conf.lpbk_mode == IXGBE_LPBK_82599_TX_RX)
+	if ((hw->mac.type == ixgbe_mac_82599EB ||
+	     hw->mac.type == ixgbe_mac_X540 ||
+	     hw->mac.type == ixgbe_mac_X550 ||
+	     hw->mac.type == ixgbe_mac_X550EM_x ||
+	     hw->mac.type == ixgbe_mac_X550EM_a) &&
+	    dev->data->dev_conf.lpbk_mode == IXGBE_LPBK_TX_RX)
 		hlreg0 |= IXGBE_HLREG0_LPBK;
 	else
 		hlreg0 &= ~IXGBE_HLREG0_LPBK;
@@ -5088,6 +5092,29 @@ ixgbe_setup_loopback_link_82599(struct ixgbe_hw *hw)
 	msec_delay(50);
 }
 
+/*
+ * Set up link loopback for X540 / X550 mode Tx->Rx.
+ */
+static inline void __attribute__((cold))
+ixgbe_setup_loopback_link_x540_x550(struct ixgbe_hw *hw)
+{
+	uint32_t macc;
+	PMD_INIT_FUNC_TRACE();
+
+	/* datasheet 15.2.1: MACC.FLU = 1 (force link up) */
+	macc = IXGBE_READ_REG(hw, IXGBE_MACC);
+	macc |= IXGBE_MACC_FLU;
+	IXGBE_WRITE_REG(hw, IXGBE_MACC, macc);
+
+	/* Restart link */
+	IXGBE_WRITE_REG(hw,
+			IXGBE_AUTOC,
+			IXGBE_AUTOC_LMS_10G_LINK_NO_AN | IXGBE_AUTOC_FLU);
+
+	hw->mac.ops.release_swfw_sync(hw, IXGBE_GSSR_MAC_CSR_SM);
+	msec_delay(50);
+}
+
 /*
  * Start Transmit and Receive Units.
@@ -5148,10 +5175,16 @@ ixgbe_dev_rxtx_start(struct rte_eth_dev *dev)
 	rxctrl |= IXGBE_RXCTRL_RXEN;
 	hw->mac.ops.enable_rx_dma(hw, rxctrl);
 
-	/* If loopback mode is enabled for 82599, set up the link accordingly */
-	if (hw->mac.type == ixgbe_mac_82599EB &&
-	    dev->data->dev_conf.lpbk_mode == IXGBE_LPBK_82599_TX_RX)
-		ixgbe_setup_loopback_link_82599(hw);
+	/* If loopback mode is enabled, set up the link accordingly */
+	if (dev->data->dev_conf.lpbk_mode == IXGBE_LPBK_TX_RX) {
+		if (hw->mac.type == ixgbe_mac_82599EB)
+			ixgbe_setup_loopback_link_82599(hw);
+		else if (hw->mac.type == ixgbe_mac_X540 ||
+			 hw->mac.type == ixgbe_mac_X550 ||
+			 hw->mac.type == ixgbe_mac_X550EM_x ||
+			 hw
[dpdk-dev] [PATCH] net/fm10k: add imissed stats
Add support of imissed and q_errors statistics, reported by the
PCIE_QPRDC register (see datasheet, section 11.27.2.60), which exposes
the number of receive packets dropped for a queue.

Signed-off-by: Julien Meunier
---
 drivers/net/fm10k/fm10k_ethdev.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/drivers/net/fm10k/fm10k_ethdev.c b/drivers/net/fm10k/fm10k_ethdev.c
index 541a49b..a9af6c2 100644
--- a/drivers/net/fm10k/fm10k_ethdev.c
+++ b/drivers/net/fm10k/fm10k_ethdev.c
@@ -1325,7 +1325,7 @@ fm10k_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats,
 static int
 fm10k_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
 {
-	uint64_t ipackets, opackets, ibytes, obytes;
+	uint64_t ipackets, opackets, ibytes, obytes, imissed;
 	struct fm10k_hw *hw =
 		FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	struct fm10k_hw_stats *hw_stats =
@@ -1336,22 +1336,25 @@ fm10k_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
 
 	fm10k_update_hw_stats(hw, hw_stats);
 
-	ipackets = opackets = ibytes = obytes = 0;
+	ipackets = opackets = ibytes = obytes = imissed = 0;
 	for (i = 0; (i < RTE_ETHDEV_QUEUE_STAT_CNTRS) &&
 		(i < hw->mac.max_queues); ++i) {
 		stats->q_ipackets[i] = hw_stats->q[i].rx_packets.count;
 		stats->q_opackets[i] = hw_stats->q[i].tx_packets.count;
 		stats->q_ibytes[i] = hw_stats->q[i].rx_bytes.count;
 		stats->q_obytes[i] = hw_stats->q[i].tx_bytes.count;
+		stats->q_errors[i] = hw_stats->q[i].rx_drops.count;
 		ipackets += stats->q_ipackets[i];
 		opackets += stats->q_opackets[i];
 		ibytes += stats->q_ibytes[i];
 		obytes += stats->q_obytes[i];
+		imissed += stats->q_errors[i];
 	}
 	stats->ipackets = ipackets;
 	stats->opackets = opackets;
 	stats->ibytes = ibytes;
 	stats->obytes = obytes;
+	stats->imissed = imissed;
 	return 0;
 }
-- 
2.10.2
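The aggregation the patch performs reduces to summing per-queue drop counters into a device-level `imissed` figure; a minimal sketch with simplified stats structs (hypothetical names, not the fm10k types):

```c
/* Summing per-queue drop counters into a device-level imissed value,
 * as the patch does with the PCIE_QPRDC readings. Simplified struct,
 * not the fm10k driver's types. */
#include <stdint.h>

struct queue_stats { uint64_t rx_drops; };

static uint64_t total_imissed(const struct queue_stats q[], int n)
{
	uint64_t imissed = 0;
	int i;

	for (i = 0; i < n; i++)
		imissed += q[i].rx_drops;  /* per-queue drops -> imissed */
	return imissed;
}
```

In the patch, each per-queue value is also exported as `q_errors[i]`, so the same counter feeds both the per-queue and the aggregate view.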
[dpdk-dev] [PATCH v2] test/pmd_perf: fix the way to drain the port
If the port has received less than ``pkt_per_port`` packets (for
example, the port has missed some packets), the test is stuck in an
infinite loop.

Instead of expecting a fixed number of packets to receive, let the
port drain by itself. If no more packets are received, the test can
continue.

Fixes: 002ade70e933 ("app/test: measure cycles per packet in Rx/Tx")
Cc: sta...@dpdk.org

Signed-off-by: Julien Meunier
---
v2:
* rename commit title
* fix nb_free display
---
 test/test/test_pmd_perf.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/test/test/test_pmd_perf.c b/test/test/test_pmd_perf.c
index f5095c8..c7e2df3 100644
--- a/test/test/test_pmd_perf.c
+++ b/test/test/test_pmd_perf.c
@@ -493,16 +493,16 @@ main_loop(__rte_unused void *args)
 
 	for (i = 0; i < conf->nb_ports; i++) {
 		portid = conf->portlist[i];
-		int nb_free = pkt_per_port;
+		int nb_free = 0;
 		do { /* dry out */
 			nb_rx = rte_eth_rx_burst(portid, 0,
 						 pkts_burst, MAX_PKT_BURST);
 			nb_tx = 0;
 			while (nb_tx < nb_rx)
 				rte_pktmbuf_free(pkts_burst[nb_tx++]);
-			nb_free -= nb_rx;
-		} while (nb_free != 0);
-		printf("free %d mbuf left in port %u\n", pkt_per_port, portid);
+			nb_free += nb_rx;
+		} while (nb_rx != 0);
+		printf("free %d mbuf left in port %u\n", nb_free, portid);
 	}
 
 	if (count == 0)
-- 
2.10.2
[dpdk-dev] [PATCH 2/2] net/ixgbe: add support of loopback for X540/X550
Loopback mode is also supported on X540 and X550 NICs, according to
their datasheets (section 15.2). The way to set it up is a little
different from the 82599.

Signed-off-by: Julien Meunier
---
v2:
- disable / enable autoneg when loopback is requested for X540 / X550
---
 drivers/net/ixgbe/base/ixgbe_type.h |  1 +
 drivers/net/ixgbe/base/ixgbe_x540.c | 26 ++++++++++++++++
 drivers/net/ixgbe/base/ixgbe_x540.h |  2 ++
 drivers/net/ixgbe/base/ixgbe_x550.c | 26 ++++++++++++++++
 drivers/net/ixgbe/base/ixgbe_x550.h |  1 +
 drivers/net/ixgbe/ixgbe_ethdev.h    |  5 ++-
 drivers/net/ixgbe/ixgbe_rxtx.c      | 61 +++++++++++++++++++++++++++--
 7 files changed, 117 insertions(+), 5 deletions(-)

diff --git a/drivers/net/ixgbe/base/ixgbe_type.h b/drivers/net/ixgbe/base/ixgbe_type.h
index 077b8f0..c4af31e 100644
--- a/drivers/net/ixgbe/base/ixgbe_type.h
+++ b/drivers/net/ixgbe/base/ixgbe_type.h
@@ -1649,6 +1649,7 @@ struct ixgbe_dmac_config {
 #define IXGBE_MII_5GBASE_T_ADVERTISE		0x0800
 #define IXGBE_MII_100BASE_T_ADVERTISE		0x0100 /* full duplex, bit:8 */
 #define IXGBE_MII_100BASE_T_ADVERTISE_HALF	0x0080 /* half duplex, bit:7 */
+#define IXGBE_MII_AUTONEG_ENABLE		0x1000
 #define IXGBE_MII_RESTART			0x200
 #define IXGBE_MII_AUTONEG_COMPLETE		0x20
 #define IXGBE_MII_AUTONEG_LINK_UP		0x04

diff --git a/drivers/net/ixgbe/base/ixgbe_x540.c b/drivers/net/ixgbe/base/ixgbe_x540.c
index f00f0ea..a241d41 100644
--- a/drivers/net/ixgbe/base/ixgbe_x540.c
+++ b/drivers/net/ixgbe/base/ixgbe_x540.c
@@ -1032,3 +1032,29 @@ s32 ixgbe_blink_led_stop_X540(struct ixgbe_hw *hw, u32 index)
 
 	return IXGBE_SUCCESS;
 }
+
+/*
+ * ixgbe_setup_phy_link_x540 - Enable/disable the autoneg
+ * @hw: pointer to hardware structure
+ * enable: enable/disable the autoneg
+ **/
+s32 ixgbe_setup_phy_autoneg_x540(struct ixgbe_hw *hw, bool enable)
+{
+	s32 status = IXGBE_SUCCESS;
+	u16 autoneg_reg = IXGBE_MII_AUTONEG_REG;
+
+	DEBUGFUNC("ixgbe_setup_phy_autoneg_x540");
+
+	hw->phy.ops.read_reg(hw, IXGBE_MDIO_AUTO_NEG_CONTROL,
+			     IXGBE_MDIO_AUTO_NEG_DEV_TYPE, &autoneg_reg);
+
+	if (enable)
+		autoneg_reg |= IXGBE_MII_AUTONEG_ENABLE;
+	else
+		autoneg_reg &= ~IXGBE_MII_AUTONEG_ENABLE;
+
+	hw->phy.ops.write_reg(hw, IXGBE_MDIO_AUTO_NEG_CONTROL,
+			      IXGBE_MDIO_AUTO_NEG_DEV_TYPE, autoneg_reg);
+
+	return status;
+}

diff --git a/drivers/net/ixgbe/base/ixgbe_x540.h b/drivers/net/ixgbe/base/ixgbe_x540.h
index 231dfe5..ef939ce 100644
--- a/drivers/net/ixgbe/base/ixgbe_x540.h
+++ b/drivers/net/ixgbe/base/ixgbe_x540.h
@@ -34,5 +34,7 @@ void ixgbe_init_swfw_sync_X540(struct ixgbe_hw *hw);
 
 s32 ixgbe_blink_led_start_X540(struct ixgbe_hw *hw, u32 index);
 s32 ixgbe_blink_led_stop_X540(struct ixgbe_hw *hw, u32 index);
+
+s32 ixgbe_setup_phy_autoneg_x540(struct ixgbe_hw *hw, bool enable);
 #endif /* _IXGBE_X540_H_ */

diff --git a/drivers/net/ixgbe/base/ixgbe_x550.c b/drivers/net/ixgbe/base/ixgbe_x550.c
index a920a14..f4ee188 100644
--- a/drivers/net/ixgbe/base/ixgbe_x550.c
+++ b/drivers/net/ixgbe/base/ixgbe_x550.c
@@ -4652,3 +4652,29 @@ bool ixgbe_fw_recovery_mode_X550(struct ixgbe_hw *hw)
 
 	return !!(fwsm & IXGBE_FWSM_FW_NVM_RECOVERY_MODE);
 }
+
+/*
+ * ixgbe_setup_phy_link_x550 - Enable/disable the autoneg
+ * @hw: pointer to hardware structure
+ * enable: enable/disable the autoneg
+ **/
+s32 ixgbe_setup_phy_autoneg_x550(struct ixgbe_hw *hw, bool enable)
+{
+	s32 status = IXGBE_SUCCESS;
+	u16 autoneg_reg = IXGBE_MII_AUTONEG_REG;
+
+	DEBUGFUNC("ixgbe_setup_phy_autoneg_x550");
+
+	hw->phy.ops.read_reg(hw, IXGBE_MDIO_AUTO_NEG_CONTROL,
+			     IXGBE_MDIO_AUTO_NEG_DEV_TYPE, &autoneg_reg);
+
+	if (enable)
+		autoneg_reg |= IXGBE_MII_AUTONEG_ENABLE;
+	else
+		autoneg_reg &= ~IXGBE_MII_AUTONEG_ENABLE;
+
+	hw->phy.ops.write_reg(hw, IXGBE_MDIO_AUTO_NEG_CONTROL,
+			      IXGBE_MDIO_AUTO_NEG_DEV_TYPE, autoneg_reg);
+
+	return status;
+}

diff --git a/drivers/net/ixgbe/base/ixgbe_x550.h b/drivers/net/ixgbe/base/ixgbe_x550.h
index 3bd98f2..dd4fa18 100644
--- a/drivers/net/ixgbe/base/ixgbe_x550.h
+++ b/drivers/net/ixgbe/base/ixgbe_x550.h
@@ -93,4 +93,5 @@ s32 ixgbe_identify_sfp_module_X550em(struct ixgbe_hw *hw);
 s32 ixgbe_led_on_t_X550em(struct ixgbe_hw *hw, u32 led_idx);
 s32 ixgbe_led_off_t_X550em(struct ixgbe_hw *hw, u32 led_idx);
 bool ixgbe_fw_recovery_mode_X550(struct ixgbe_hw *hw);
+s32 ixgbe_setup_phy_autoneg_x550(struct ixgbe_hw *hw, bool enable);
 #endif /* _IXGBE_X550_H_ */

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index 565c69c..c60a697 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -65,
[dpdk-dev] [PATCH 1/2] net/ixgbe: do not start on unsupported loopback mode
Only TX->RX loopback is supported currently on the 82599EB. If a user
wants to apply another loopback configuration
(!= IXGBE_LPBK_82599_TX_RX), the ixgbe PMD ignores it and continues
the configuration without raising any error.

Make this part more robust by checking that the requested loopback
mode is valid for the current device before starting it. If it is not
valid, the PMD will refuse to start.

Signed-off-by: Julien Meunier
---
v2:
- factorize code
- check if loopback is really supported
---
 drivers/net/ixgbe/ixgbe_ethdev.c | 13 ++++++++-----
 drivers/net/ixgbe/ixgbe_rxtx.c   | 37 ++++++++++++++++++++++---------
 drivers/net/ixgbe/ixgbe_rxtx.h   |  2 ++
 3 files changed, 40 insertions(+), 12 deletions(-)

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 7493110..558f60b 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -2652,10 +2652,15 @@ ixgbe_dev_start(struct rte_eth_dev *dev)
 		goto error;
 	}
 
-	/* Skip link setup if loopback mode is enabled for 82599. */
-	if (hw->mac.type == ixgbe_mac_82599EB &&
-	    dev->data->dev_conf.lpbk_mode == IXGBE_LPBK_82599_TX_RX)
-		goto skip_link_setup;
+	/* Skip link setup if loopback mode is enabled. */
+	if (dev->data->dev_conf.lpbk_mode != 0) {
+		err = ixgbe_check_supported_loopback_mode(dev);
+		if (err < 0) {
+			PMD_INIT_LOG(ERR, "Unsupported loopback mode");
+			goto error;
+		} else
+			goto skip_link_setup;
+	}
 
 	if (ixgbe_is_sfp(hw) && hw->phy.multispeed_fiber) {
 		err = hw->mac.ops.setup_sfp(hw);

diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index 9a79d18..c9a70a8 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -4879,13 +4879,18 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
 		hlreg0 &= ~IXGBE_HLREG0_JUMBOEN;
 
 	/*
-	 * If loopback mode is configured for 82599, set LPBK bit.
+	 * If loopback mode is configured, set LPBK bit.
 	 */
-	if (hw->mac.type == ixgbe_mac_82599EB &&
-	    dev->data->dev_conf.lpbk_mode == IXGBE_LPBK_82599_TX_RX)
+	if (dev->data->dev_conf.lpbk_mode != 0) {
+		rc = ixgbe_check_supported_loopback_mode(dev);
+		if (rc < 0) {
+			PMD_INIT_LOG(ERR, "Unsupported loopback mode");
+			return rc;
+		}
 		hlreg0 |= IXGBE_HLREG0_LPBK;
-	else
+	} else {
 		hlreg0 &= ~IXGBE_HLREG0_LPBK;
+	}
 
 	IXGBE_WRITE_REG(hw, IXGBE_HLREG0, hlreg0);
 
@@ -5062,6 +5067,21 @@ ixgbe_dev_tx_init(struct rte_eth_dev *dev)
 }
 
 /*
+ * Check if requested loopback mode is supported
+ */
+int
+ixgbe_check_supported_loopback_mode(struct rte_eth_dev *dev)
+{
+	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+	if (dev->data->dev_conf.lpbk_mode == IXGBE_LPBK_82599_TX_RX)
+		if (hw->mac.type == ixgbe_mac_82599EB)
+			return 0;
+
+	return -ENOTSUP;
+}
+
+/*
  * Set up link for 82599 loopback mode Tx->Rx.
  */
 static inline void __attribute__((cold))
@@ -5148,10 +5168,11 @@ ixgbe_dev_rxtx_start(struct rte_eth_dev *dev)
 	rxctrl |= IXGBE_RXCTRL_RXEN;
 	hw->mac.ops.enable_rx_dma(hw, rxctrl);
 
-	/* If loopback mode is enabled for 82599, set up the link accordingly */
-	if (hw->mac.type == ixgbe_mac_82599EB &&
-	    dev->data->dev_conf.lpbk_mode == IXGBE_LPBK_82599_TX_RX)
-		ixgbe_setup_loopback_link_82599(hw);
+	/* If loopback mode is enabled, set up the link accordingly */
+	if (dev->data->dev_conf.lpbk_mode != 0) {
+		if (hw->mac.type == ixgbe_mac_82599EB)
+			ixgbe_setup_loopback_link_82599(hw);
+	}
 
 #ifdef RTE_LIBRTE_SECURITY
 	if ((dev->data->dev_conf.rxmode.offloads &

diff --git a/drivers/net/ixgbe/ixgbe_rxtx.h b/drivers/net/ixgbe/ixgbe_rxtx.h
index 39378f7..2d8011d 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx.h
@@ -276,6 +276,8 @@ void ixgbe_set_tx_function(struct rte_eth_dev *dev, struct ixgbe_tx_queue *txq);
  */
 void ixgbe_set_rx_function(struct rte_eth_dev *dev);
 
+int ixgbe_check_supported_loopback_mode(struct rte_eth_dev *dev);
+
 uint16_t ixgbe_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
 		uint16_t nb_pkts);
 uint16_t ixgbe_recv_scattered_pkts_vec(void *rx_queue,
-- 
2.10.2
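The validation pattern above can be sketched as a pure function: accept only the (loopback mode, MAC type) pairs known to work, and return `-ENOTSUP` for everything else. The enum and macro values here are illustrative stand-ins, not the driver's definitions:

```c
/* Sketch of the check_supported_loopback_mode pattern: a whitelist of
 * (mode, mac type) combinations. Hypothetical types and values, not
 * the ixgbe driver's. */
#include <errno.h>

enum mac_type { MAC_82599EB, MAC_X540 };

#define LPBK_NONE  0x0
#define LPBK_TX_RX 0x1

static int check_loopback(unsigned int lpbk_mode, enum mac_type mac)
{
	/* only Tx->Rx loopback on the 82599EB is accepted here */
	if (lpbk_mode == LPBK_TX_RX && mac == MAC_82599EB)
		return 0;

	return -ENOTSUP;    /* anything else: refuse to start */
}
```

Centralizing the check in one function lets both `dev_start` and `rx_init` reject an unsupported configuration with the same error instead of silently ignoring it, which is exactly the factorization the v2 changelog mentions.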
[dpdk-dev] [PATCH v3 2/2] net/ixgbe: add support of loopback for X540/X550
Loopback mode is also supported on X540 and X550 NICs, according to their datasheet (section 15.2). The way to set it up is a little different of the 82599. Signed-off-by: Julien Meunier --- v3: - reorganize and merge common code - restore MACC_FLU on stop v2: - disable / enable autoneg when loopback is requested for X540 / X550 --- drivers/net/ixgbe/ixgbe_ethdev.h | 7 +++--- drivers/net/ixgbe/ixgbe_rxtx.c | 53 ++-- 2 files changed, 55 insertions(+), 5 deletions(-) diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h index 565c69c..99a5077 100644 --- a/drivers/net/ixgbe/ixgbe_ethdev.h +++ b/drivers/net/ixgbe/ixgbe_ethdev.h @@ -65,9 +65,10 @@ #define IXGBE_QUEUE_ITR_INTERVAL_DEFAULT 500 /* 500us */ /* Loopback operation modes */ -/* 82599 specific loopback operation types */ -#define IXGBE_LPBK_82599_NONE 0x0 /* Default value. Loopback is disabled. */ -#define IXGBE_LPBK_82599_TX_RX 0x1 /* Tx->Rx loopback operation is enabled. */ +#define IXGBE_LPBK_NONE 0x0 /* Default value. Loopback is disabled. */ +#define IXGBE_LPBK_TX_RX 0x1 /* Tx->Rx loopback operation is enabled. */ +/* X540-X550 specific loopback operations */ +#define IXGBE_MII_AUTONEG_ENABLE0x1000 /* Auto-negociation enable (default = 1) */ #define IXGBE_MAX_JUMBO_FRAME_SIZE 0x2600 /* Maximum Jumbo frame size. */ diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c index c9a70a8..e92a70f 100644 --- a/drivers/net/ixgbe/ixgbe_rxtx.c +++ b/drivers/net/ixgbe/ixgbe_rxtx.c @@ -3168,12 +3168,44 @@ ixgbe_dev_tx_descriptor_status(void *tx_queue, uint16_t offset) return RTE_ETH_TX_DESC_FULL; } +/* + * Set up link loopback for X540/X550 mode Tx->Rx. 
+ */ +static inline void __attribute__((cold)) +ixgbe_setup_loopback_link_x540_x550(struct ixgbe_hw *hw, bool enable) +{ + uint32_t macc; + PMD_INIT_FUNC_TRACE(); + + u16 autoneg_reg = IXGBE_MII_AUTONEG_REG; + + hw->phy.ops.read_reg(hw, IXGBE_MDIO_AUTO_NEG_CONTROL, +IXGBE_MDIO_AUTO_NEG_DEV_TYPE, &autoneg_reg); + macc = IXGBE_READ_REG(hw, IXGBE_MACC); + + if (enable) { + /* datasheet 15.2.1: disable AUTONEG (PHY Bit 7.0.C) */ + autoneg_reg |= IXGBE_MII_AUTONEG_ENABLE; + /* datasheet 15.2.1: MACC.FLU = 1 (force link up) */ + macc |= IXGBE_MACC_FLU; + } else { + autoneg_reg &= ~IXGBE_MII_AUTONEG_ENABLE; + macc &= ~IXGBE_MACC_FLU; + } + + hw->phy.ops.write_reg(hw, IXGBE_MDIO_AUTO_NEG_CONTROL, + IXGBE_MDIO_AUTO_NEG_DEV_TYPE, autoneg_reg); + + IXGBE_WRITE_REG(hw, IXGBE_MACC, macc); +} + void __attribute__((cold)) ixgbe_dev_clear_queues(struct rte_eth_dev *dev) { unsigned i; struct ixgbe_adapter *adapter = (struct ixgbe_adapter *)dev->data->dev_private; + struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private); PMD_INIT_FUNC_TRACE(); @@ -3194,6 +3226,14 @@ ixgbe_dev_clear_queues(struct rte_eth_dev *dev) ixgbe_reset_rx_queue(adapter, rxq); } } + /* If loopback mode was enabled, reconfigure the link accordingly */ + if (dev->data->dev_conf.lpbk_mode != 0) { + if (hw->mac.type == ixgbe_mac_X540 || +hw->mac.type == ixgbe_mac_X550 || +hw->mac.type == ixgbe_mac_X550EM_x || +hw->mac.type == ixgbe_mac_X550EM_a) + ixgbe_setup_loopback_link_x540_x550(hw, false); + } } void @@ -5074,8 +5114,12 @@ ixgbe_check_supported_loopback_mode(struct rte_eth_dev *dev) { struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private); - if (dev->data->dev_conf.lpbk_mode == IXGBE_LPBK_82599_TX_RX) - if (hw->mac.type == ixgbe_mac_82599EB) + if (dev->data->dev_conf.lpbk_mode == IXGBE_LPBK_TX_RX) + if (hw->mac.type == ixgbe_mac_82599EB || +hw->mac.type == ixgbe_mac_X540 || +hw->mac.type == ixgbe_mac_X550 || +hw->mac.type == ixgbe_mac_X550EM_x || +hw->mac.type == 
ixgbe_mac_X550EM_a) return 0; return -ENOTSUP; @@ -5172,6 +5216,11 @@ ixgbe_dev_rxtx_start(struct rte_eth_dev *dev) if (dev->data->dev_conf.lpbk_mode != 0) { if (hw->mac.type == ixgbe_mac_82599EB) ixgbe_setup_loopback_link_82599(hw); + else if (hw->mac.type == ixgbe_mac_X540 || +hw->mac.type == ixgbe_mac_X550 || +hw->mac.type == ixgbe_mac_X550EM_x || +hw->mac.type == ixgbe_mac_X550EM_a) + ixgbe_setup_loopback_link_x540_x550(hw, true); } #ifdef RTE_LIBRTE_SECURITY -- 2.10.2
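The key point of the v3 revision above is the `enable` parameter: the same helper both applies and restores the PHY autoneg bit and MACC.FLU, so `ixgbe_dev_clear_queues()` can undo the forced link on stop. The toggle structure can be sketched with plain variables standing in for the registers; the bit values are illustrative and the register accesses are simulated, not the real ixgbe MDIO/MMIO accessors:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define MII_AUTONEG_ENABLE 0x1000u /* mirrors IXGBE_MII_AUTONEG_ENABLE */
#define MACC_FLU           0x1u    /* hypothetical bit for "force link up" */

/* Simulated registers; in the driver these come from phy.ops.read_reg()
 * and IXGBE_READ_REG(hw, IXGBE_MACC). */
static uint16_t autoneg_reg;
static uint32_t macc_reg;

static void setup_loopback_link(bool enable)
{
    if (enable) {
        /* datasheet 15.2.1: tweak PHY bit 7.0.C and force the link up */
        autoneg_reg |= MII_AUTONEG_ENABLE;
        macc_reg |= MACC_FLU;
    } else {
        /* restore both bits on stop, as ixgbe_dev_clear_queues() does */
        autoneg_reg &= (uint16_t)~MII_AUTONEG_ENABLE;
        macc_reg &= ~MACC_FLU;
    }
}
```

Because enable and disable touch exactly the same bits, start/stop cycles leave the registers where they began, which is what the "restore MACC_FLU on stop" changelog entry refers to.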
[dpdk-dev] [PATCH v3 1/2] net/ixgbe: do not start on unsupported loopback mode
Only TX->RX loopback is supported currently on 82599EB. If a user wants to apply another loopback configuration (!= IXGBE_LPBK_82599_TX_RX), ixgbe PMD ignores it and continues the configuration without raising any error. Let's robustify this part by checking if the requested loopback mode is correct for the current device, before starting it. If it is not valid, PMD will refuse to start. Signed-off-by: Julien Meunier --- v3: - code style + checkpatch compliance v2: - factorize code - check if loopback is really supported --- drivers/net/ixgbe/ixgbe_ethdev.c | 14 ++ drivers/net/ixgbe/ixgbe_rxtx.c | 37 + drivers/net/ixgbe/ixgbe_rxtx.h | 1 + 3 files changed, 40 insertions(+), 12 deletions(-) diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c index 7493110..4deedb0 100644 --- a/drivers/net/ixgbe/ixgbe_ethdev.c +++ b/drivers/net/ixgbe/ixgbe_ethdev.c @@ -2652,10 +2652,16 @@ ixgbe_dev_start(struct rte_eth_dev *dev) goto error; } - /* Skip link setup if loopback mode is enabled for 82599. */ - if (hw->mac.type == ixgbe_mac_82599EB && - dev->data->dev_conf.lpbk_mode == IXGBE_LPBK_82599_TX_RX) - goto skip_link_setup; + /* Skip link setup if loopback mode is enabled. */ + if (dev->data->dev_conf.lpbk_mode != 0) { + err = ixgbe_check_supported_loopback_mode(dev); + if (err < 0) { + PMD_INIT_LOG(ERR, "Unsupported loopback mode"); + goto error; + } else { + goto skip_link_setup; + } + } if (ixgbe_is_sfp(hw) && hw->phy.multispeed_fiber) { err = hw->mac.ops.setup_sfp(hw); diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c index 9a79d18..c9a70a8 100644 --- a/drivers/net/ixgbe/ixgbe_rxtx.c +++ b/drivers/net/ixgbe/ixgbe_rxtx.c @@ -4879,13 +4879,18 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev) hlreg0 &= ~IXGBE_HLREG0_JUMBOEN; /* -* If loopback mode is configured for 82599, set LPBK bit. +* If loopback mode is configured, set LPBK bit.
*/ - if (hw->mac.type == ixgbe_mac_82599EB && - dev->data->dev_conf.lpbk_mode == IXGBE_LPBK_82599_TX_RX) + if (dev->data->dev_conf.lpbk_mode != 0) { + rc = ixgbe_check_supported_loopback_mode(dev); + if (rc < 0) { + PMD_INIT_LOG(ERR, "Unsupported loopback mode"); + return rc; + } hlreg0 |= IXGBE_HLREG0_LPBK; - else + } else { hlreg0 &= ~IXGBE_HLREG0_LPBK; + } IXGBE_WRITE_REG(hw, IXGBE_HLREG0, hlreg0); @@ -5062,6 +5067,21 @@ ixgbe_dev_tx_init(struct rte_eth_dev *dev) } /* + * Check if requested loopback mode is supported + */ +int +ixgbe_check_supported_loopback_mode(struct rte_eth_dev *dev) +{ + struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private); + + if (dev->data->dev_conf.lpbk_mode == IXGBE_LPBK_82599_TX_RX) + if (hw->mac.type == ixgbe_mac_82599EB) + return 0; + + return -ENOTSUP; +} + +/* * Set up link for 82599 loopback mode Tx->Rx. */ static inline void __attribute__((cold)) @@ -5148,10 +5168,11 @@ ixgbe_dev_rxtx_start(struct rte_eth_dev *dev) rxctrl |= IXGBE_RXCTRL_RXEN; hw->mac.ops.enable_rx_dma(hw, rxctrl); - /* If loopback mode is enabled for 82599, set up the link accordingly */ - if (hw->mac.type == ixgbe_mac_82599EB && - dev->data->dev_conf.lpbk_mode == IXGBE_LPBK_82599_TX_RX) - ixgbe_setup_loopback_link_82599(hw); + /* If loopback mode is enabled, set up the link accordingly */ + if (dev->data->dev_conf.lpbk_mode != 0) { + if (hw->mac.type == ixgbe_mac_82599EB) + ixgbe_setup_loopback_link_82599(hw); + } #ifdef RTE_LIBRTE_SECURITY if ((dev->data->dev_conf.rxmode.offloads & diff --git a/drivers/net/ixgbe/ixgbe_rxtx.h b/drivers/net/ixgbe/ixgbe_rxtx.h index 39378f7..505d344 100644 --- a/drivers/net/ixgbe/ixgbe_rxtx.h +++ b/drivers/net/ixgbe/ixgbe_rxtx.h @@ -276,6 +276,7 @@ void ixgbe_set_tx_function(struct rte_eth_dev *dev, struct ixgbe_tx_queue *txq); */ void ixgbe_set_rx_function(struct rte_eth_dev *dev); +int ixgbe_check_supported_loopback_mode(struct rte_eth_dev *dev); uint16_t ixgbe_recv_pkts_vec(void *rx_queue, struct rte_mbuf 
**rx_pkts, uint16_t nb_pkts); uint16_t ixgbe_recv_scattered_pkts_vec(void *rx_queue, -- 2.10.2
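The control flow this patch introduces — validate the requested loopback mode against the device before starting, and refuse to start otherwise — can be sketched independently of the driver. The enum and mode values below are simplified stand-ins for the ixgbe definitions:

```c
#include <assert.h>
#include <errno.h>

enum mac_type { MAC_82599EB, MAC_X540 };

#define LPBK_NONE  0x0
#define LPBK_TX_RX 0x1

/* Mirrors ixgbe_check_supported_loopback_mode(): at this point in the
 * series, only Tx->Rx loopback on the 82599EB is accepted. */
static int check_supported_loopback_mode(enum mac_type mac, unsigned int mode)
{
    if (mode == LPBK_TX_RX && mac == MAC_82599EB)
        return 0;
    return -ENOTSUP;
}

/* Mirrors the ixgbe_dev_start() change: an unsupported mode is now an
 * error instead of being silently ignored. */
static int dev_start(enum mac_type mac, unsigned int mode)
{
    if (mode != LPBK_NONE) {
        int err = check_supported_loopback_mode(mac, mode);
        if (err < 0)
            return err;     /* refuse to start */
        /* else: skip regular link setup, loopback drives the link */
    }
    return 0;
}
```

Centralizing the check in one helper is what lets the same validation run both in dev_start and in dev_rx_init without duplicating the mode/MAC matrix.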
[dpdk-dev] [PATCH v3] test/pmd_perf: fix the way to drain the port
If the port has received fewer than ``pkt_per_port`` packets (for example, the port has missed some packets), the test ends up in an infinite loop. Instead of expecting a fixed number of packets to be received, let the port drain by itself. If no more packets are received, the test can continue. Fixes: 002ade70e933 ("app/test: measure cycles per packet in Rx/Tx") Cc: sta...@dpdk.org Signed-off-by: Julien Meunier --- v3: * add timeout on stop * add log details v2: * rename commit title * fix nb_free display --- test/test/test_pmd_perf.c | 13 + 1 file changed, 9 insertions(+), 4 deletions(-) diff --git a/test/test/test_pmd_perf.c b/test/test/test_pmd_perf.c index f5095c8..ed8524a 100644 --- a/test/test/test_pmd_perf.c +++ b/test/test/test_pmd_perf.c @@ -493,16 +493,21 @@ main_loop(__rte_unused void *args) for (i = 0; i < conf->nb_ports; i++) { portid = conf->portlist[i]; - int nb_free = pkt_per_port; + int nb_free = 0; + uint64_t timeout = 1; do { /* dry out */ nb_rx = rte_eth_rx_burst(portid, 0, pkts_burst, MAX_PKT_BURST); nb_tx = 0; while (nb_tx < nb_rx) rte_pktmbuf_free(pkts_burst[nb_tx++]); - nb_free -= nb_rx; - } while (nb_free != 0); - printf("free %d mbuf left in port %u\n", pkt_per_port, portid); + nb_free += nb_rx; + + if (unlikely(nb_rx == 0)) + timeout--; + } while (nb_free != pkt_per_port && timeout != 0); + printf("free %d (expected %d) mbuf left in port %u\n", nb_free, + pkt_per_port, portid); } if (count == 0) -- 2.10.2
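The fixed drain loop reduces to the following pattern: count what was actually received and give up after a bounded number of empty polls, instead of waiting for an exact packet count that may never arrive. `rx_burst_sim()` is a stand-in for `rte_eth_rx_burst()` so the logic can run standalone:

```c
#include <assert.h>
#include <stdint.h>

/* Simulated Rx burst: returns up to 'burst' packets from a queue holding
 * 'remaining_pkts' packets; stands in for rte_eth_rx_burst(). */
static int remaining_pkts;
static int rx_burst_sim(int burst)
{
    int n = remaining_pkts < burst ? remaining_pkts : burst;
    remaining_pkts -= n;
    return n;
}

/* Drain until either the expected count is reached or a bounded number
 * of empty polls elapses -- the pattern the fix introduces. */
static int drain_port(int pkt_per_port, uint64_t timeout)
{
    int nb_free = 0;
    do {
        int nb_rx = rx_burst_sim(32);
        nb_free += nb_rx;   /* count what was actually drained */
        if (nb_rx == 0)
            timeout--;      /* consume the budget only on empty polls */
    } while (nb_free != pkt_per_port && timeout != 0);
    return nb_free;
}
```

If the port missed packets, `drain_port()` now returns the real count (which the patch also prints) rather than spinning forever on a count that can never be reached.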
[dpdk-dev] [PATCH] i40e: fix vlan filtering
VLAN filtering was always performed, even if hw_vlan_filter was disabled. During device initialization, default filter RTE_MACVLAN_PERFECT_MATCH was applied. In this situation, all incoming VLAN frames were dropped by the card (increase of the register RUPP - Rx Unsupported Protocol). In order to restore default behavior, if HW VLAN filtering is activated, set a filter to match MAC and VLAN. If not, set a filter to only match MAC. Signed-off-by: Julien Meunier Signed-off-by: David Marchand --- drivers/net/i40e/i40e_ethdev.c | 39 ++- drivers/net/i40e/i40e_ethdev.h | 1 + 2 files changed, 39 insertions(+), 1 deletion(-) diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c index bf6220d..ef9d578 100644 --- a/drivers/net/i40e/i40e_ethdev.c +++ b/drivers/net/i40e/i40e_ethdev.c @@ -2332,6 +2332,13 @@ i40e_vlan_offload_set(struct rte_eth_dev *dev, int mask) struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private); struct i40e_vsi *vsi = pf->main_vsi; + if (mask & ETH_VLAN_FILTER_MASK) { + if (dev->data->dev_conf.rxmode.hw_vlan_filter) + i40e_vsi_config_vlan_filter(vsi, TRUE); + else + i40e_vsi_config_vlan_filter(vsi, FALSE); + } + if (mask & ETH_VLAN_STRIP_MASK) { /* Enable or disable VLAN stripping */ if (dev->data->dev_conf.rxmode.hw_vlan_strip) @@ -4156,6 +4163,34 @@ fail_mem: return NULL; } +/* Configure vlan filter on or off */ +int +i40e_vsi_config_vlan_filter(struct i40e_vsi *vsi, bool on) +{ + struct i40e_hw *hw = I40E_VSI_TO_HW(vsi); + struct i40e_mac_filter_info filter; + int ret; + + rte_memcpy(&filter.mac_addr, + (struct ether_addr *)(hw->mac.perm_addr), ETH_ADDR_LEN); + ret = i40e_vsi_delete_mac(vsi, &filter.mac_addr); + + if (on) { + /* Filter to match MAC and VLAN */ + filter.filter_type = RTE_MACVLAN_PERFECT_MATCH; + } else { + /* Filter to match only MAC */ + filter.filter_type = RTE_MAC_PERFECT_MATCH; + } + + ret |= i40e_vsi_add_mac(vsi, &filter); + + if (ret) + PMD_DRV_LOG(INFO, "Update VSI failed to %s vlan filter", + 
on ? "enable" : "disable"); + return ret; +} + /* Configure vlan stripping on or off */ int i40e_vsi_config_vlan_stripping(struct i40e_vsi *vsi, bool on) @@ -4203,9 +4238,11 @@ i40e_dev_init_vlan(struct rte_eth_dev *dev) { struct rte_eth_dev_data *data = dev->data; int ret; + int mask = 0; /* Apply vlan offload setting */ - i40e_vlan_offload_set(dev, ETH_VLAN_STRIP_MASK); + mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK; + i40e_vlan_offload_set(dev, mask); /* Apply double-vlan setting, not implemented yet */ diff --git a/drivers/net/i40e/i40e_ethdev.h b/drivers/net/i40e/i40e_ethdev.h index 1f9792b..5505d72 100644 --- a/drivers/net/i40e/i40e_ethdev.h +++ b/drivers/net/i40e/i40e_ethdev.h @@ -551,6 +551,7 @@ void i40e_vsi_queues_unbind_intr(struct i40e_vsi *vsi); int i40e_vsi_vlan_pvid_set(struct i40e_vsi *vsi, struct i40e_vsi_vlan_pvid_info *info); int i40e_vsi_config_vlan_stripping(struct i40e_vsi *vsi, bool on); +int i40e_vsi_config_vlan_filter(struct i40e_vsi *vsi, bool on); uint64_t i40e_config_hena(uint64_t flags); uint64_t i40e_parse_hena(uint64_t flags); enum i40e_status_code i40e_fdir_setup_tx_resources(struct i40e_pf *pf); -- 2.1.4
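The core of the fix above is choosing which MAC filter type to install depending on whether HW VLAN filtering was requested, plus extending the offload mask applied at init. A minimal sketch, with illustrative mask values rather than the real DPDK ones:

```c
#include <assert.h>
#include <stdbool.h>

#define ETH_VLAN_STRIP_MASK  0x1  /* illustrative values, not DPDK's */
#define ETH_VLAN_FILTER_MASK 0x2

enum filter_type { MAC_PERFECT_MATCH, MACVLAN_PERFECT_MATCH };

/* Mirrors i40e_vsi_config_vlan_filter(): match MAC+VLAN only when HW
 * VLAN filtering is on; otherwise match MAC alone, so the card stops
 * dropping VLAN frames as "unsupported protocol" (RUPP). */
static enum filter_type pick_filter_type(bool hw_vlan_filter)
{
    return hw_vlan_filter ? MACVLAN_PERFECT_MATCH : MAC_PERFECT_MATCH;
}
```

With the patch, i40e_dev_init_vlan() passes `ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK` so both offloads are derived from the configured rxmode at init time, not just stripping.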
[dpdk-dev] [PATCH] i40e: fix vlan filtering
Hello, Yes, you are right. Even if VLAN filtering is configured most of the time during initialization, we should manage the case where multiple MAC addresses are already configured. I will send you a v2 patch with this modification, using ether_addr_copy and adding additional debug messages. Regards, On 01/20/2016 06:00 AM, Zhang, Helin wrote: >> -Original Message- >> From: Julien Meunier [mailto:julien.meunier at 6wind.com] >> Sent: Tuesday, January 19, 2016 1:19 AM >> To: Zhang, Helin >> Cc:dev at dpdk.org >> Subject: [PATCH] i40e: fix vlan filtering >> >> VLAN filtering was always performed, even if hw_vlan_filter was disabled. >> During device initialization, default filter RTE_MACVLAN_PERFECT_MATCH >> was applied. In this situation, all incoming VLAN frames were dropped by the >> card (increase of the register RUPP - Rx Unsupported Protocol). >> >> In order to restore default behavior, if HW VLAN filtering is activated, set >> a >> filter to match MAC and VLAN. If not, set a filter to only match MAC.
>> >> Signed-off-by: Julien Meunier >> Signed-off-by: David Marchand >> --- >> drivers/net/i40e/i40e_ethdev.c | 39 >> ++- >> drivers/net/i40e/i40e_ethdev.h | 1 + >> 2 files changed, 39 insertions(+), 1 deletion(-) >> >> diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c >> index bf6220d..ef9d578 100644 >> --- a/drivers/net/i40e/i40e_ethdev.c >> +++ b/drivers/net/i40e/i40e_ethdev.c >> @@ -2332,6 +2332,13 @@ i40e_vlan_offload_set(struct rte_eth_dev *dev, >> int mask) >> struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data- >>> dev_private); >> struct i40e_vsi *vsi = pf->main_vsi; >> >> +if (mask & ETH_VLAN_FILTER_MASK) { >> +if (dev->data->dev_conf.rxmode.hw_vlan_filter) >> +i40e_vsi_config_vlan_filter(vsi, TRUE); >> +else >> +i40e_vsi_config_vlan_filter(vsi, FALSE); >> +} >> + >> if (mask & ETH_VLAN_STRIP_MASK) { >> /* Enable or disable VLAN stripping */ >> if (dev->data->dev_conf.rxmode.hw_vlan_strip) >> @@ -4156,6 +4163,34 @@ fail_mem: >> return NULL; >> } >> >> +/* Configure vlan filter on or off */ >> +int >> +i40e_vsi_config_vlan_filter(struct i40e_vsi *vsi, bool on) { >> +struct i40e_hw *hw = I40E_VSI_TO_HW(vsi); >> +struct i40e_mac_filter_info filter; >> +int ret; >> + >> +rte_memcpy(&filter.mac_addr, >> + (struct ether_addr *)(hw->mac.perm_addr), >> ETH_ADDR_LEN); >> +ret = i40e_vsi_delete_mac(vsi, &filter.mac_addr); >> + >> +if (on) { >> +/* Filter to match MAC and VLAN */ >> +filter.filter_type = RTE_MACVLAN_PERFECT_MATCH; >> +} else { >> +/* Filter to match only MAC */ >> +filter.filter_type = RTE_MAC_PERFECT_MATCH; >> +} >> + >> +ret |= i40e_vsi_add_mac(vsi, &filter); > How would it be if multiple mac addresses has been configured? > I think this might be ignored in the code changes, right? > > Regards, > Helin > >> + >> +if (ret) >> +PMD_DRV_LOG(INFO, "Update VSI failed to %s vlan filter", >> +on ? 
"enable" : "disable"); >> +return ret; >> +} >> + >> /* Configure vlan stripping on or off */ int >> i40e_vsi_config_vlan_stripping(struct i40e_vsi *vsi, bool on) @@ -4203,9 >> +4238,11 @@ i40e_dev_init_vlan(struct rte_eth_dev *dev) { >> struct rte_eth_dev_data *data = dev->data; >> int ret; >> +int mask = 0; >> >> /* Apply vlan offload setting */ >> -i40e_vlan_offload_set(dev, ETH_VLAN_STRIP_MASK); >> +mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK; >> +i40e_vlan_offload_set(dev, mask); >> >> /* Apply double-vlan setting, not implemented yet */ >> >> diff --git a/drivers/net/i40e/i40e_ethdev.h >> b/drivers/net/i40e/i40e_ethdev.h index 1f9792b..5505d72 100644 >> --- a/drivers/net/i40e/i40e_ethdev.h >> +++ b/drivers/net/i40e/i40e_ethdev.h >> @@ -551,6 +551,7 @@ void i40e_vsi_queues_unbind_intr(struct i40e_vsi >> *vsi); int i40e_vsi_vlan_pvid_set(struct i40e_vsi *vsi, >> struct i40e_vsi_vlan_pvid_info *info); int >> i40e_vsi_config_vlan_stripping(struct i40e_vsi *vsi, bool on); >> +int i40e_vsi_config_vlan_filter(struct i40e_vsi *vsi, bool on); >> uint64_t i40e_config_hena(uint64_t flags); uint64_t >> i40e_parse_hena(uint64_t flags); enum i40e_status_code >> i40e_fdir_setup_tx_resources(struct i40e_pf *pf); >> -- >> 2.1.4 -- Julien MEUNIER 6WIND
[dpdk-dev] [PATCH] i40e: configure MTU
Hello, On 04/23/2016 01:26 PM, Beilei Xing wrote: [...] > + /* mtu setting is forbidden if port is start */ > + if (dev_data->dev_started) { > + PMD_DRV_LOG(ERR, > + "port %d must be stopped before configuration\n", > + dev_data->port_id); > + return -EBUSY; > + } According to rte_ethdev.h, only 4 return codes are supported for rte_eth_dev_set_mtu: * - (0) if successful. * - (-ENOTSUP) if operation is not supported. * - (-ENODEV) if *port_id* invalid. * - (-EINVAL) if *mtu* invalid. EBUSY should not be returned. > + for (i = 0; i < dev_data->nb_rx_queues; i++) { > + rxq = dev_data->rx_queues[i]; > + if (!rxq || !rxq->q_set) > + continue; > + > + dev_data->dev_conf.rxmode.max_rx_pkt_len = frame_size; > + len = hw->func_caps.rx_buf_chain_len * rxq->rx_buf_len; > + rxq->max_pkt_len = RTE_MIN(len, frame_size); > + } > + > + ret = i40e_dev_rx_init(pf); > + > + return ret; > +} > Why do we want to reconfigure rxq here? All these operations are already done when you call i40e_dev_rx_init. i40e_dev_rx_init => i40e_rx_queue_init (for each queue) => i40e_rx_queue_config => redefine rxq->max_pkt_len Moreover, you should move the dev_data->dev_conf.rxmode.max_rx_pkt_len assignment out of the loop. frame_size is the same for all rx_queues. -- Julien MEUNIER 6WIND
Re: [dpdk-dev] 18.11.6 (LTS) patches review and test
Hi, I launched UT on my target which has a QAT VF device, bound to igb_uio. + TestCase [97] : test_null_auth_only_operation failed + TestCase [99] : test_null_cipher_auth_operation failed When I did some debugging, I saw that the content of the digest is 0. If I revert ac0a49ed9258 ("crypto/qat: fix null auth when using VFIO"), all tests are OK. This issue is not seen on master branch, because other UTs are executed for QAT PMDs in order to check NULL algo. UTs were reworked, see af46a0bc0c5b ("test/crypto: add NULL algo to loop test mechanism") My commit does not seem to add any specific regression. Regards, On 08/01/2020 19:34, Kevin Traynor wrote: On 24/12/2019 10:07, Yu, PingX wrote: Kevin, Update the regression test result of Intel part. See the details as below. Hi Yu Ping, thanks for the report and the log files. # Basic Intel(R) NIC testing * PF(i40e): Pass * PF(ixgbe): Pass * VF: Pass * Build or compile: 2 bugs are found. 1. [dpdk-stable 18.11.6-rc1] meson build failed on FreeBSD12.1(See freebsd 12.1.log.txt) I have a fix for this and another FreeBSD+meson issue that was hidden by this. 2. [dpdk-stable 18.11.6-rc1] make build failed on fedora31.(See fedora31.log.txt) I have fixes for this and some other issues I found with clang 9.0 and gcc 9 on F31. * Intel NIC single core/NIC performance: Pass #Basic cryptodev and virtio testing * vhost/virtio basic loopback, PVP and performance test: Pass. * cryptodev: 2 bugs are found. 1. [dpdk-stable-18.11.6]Crypto: cryptodev_qat_autotest test failed. PS: issue passed on 18.11.3 and 18.11.5.
Looking at commits related to crypto/qat I see: commit f7a7842ebec33c9cda3f5aac119adea4ce4f6999 Author: Hemant Agrawal Date: Wed Dec 18 10:15:27 2019 +0530 test/crypto: fix session init failure for wireless case [ upstream commit 2967612f44b9726cb14242ae61658f2c944188d2 ] commit 2674667aac56448c8bd151bc082e64ef4c88b649 Author: Arek Kusztal Date: Tue Oct 22 16:22:25 2019 +0200 crypto/qat: fix AES CMAC mininum digest size [ upstream commit a7f8087bbdbe9a69fdd0bbc77237dd3a2014ce71 ] commit ac0a49ed92588f961b1f5e659d27c70f078eea13 Author: Damian Nowak Date: Fri Aug 9 11:29:01 2019 +0200 crypto/qat: fix null auth when using VFIO [ upstream commit 65beb9abca6dbb2167a53ab31d79e03f0857357b ] commit cde0c9ce68d3a5975a57ef09a28252c44cfe4ac6 Author: Fiona Trahe Date: Tue Sep 10 17:32:10 2019 +0100 crypto/qat: fix digest length in XCBC capability [ upstream commit 0996ed0d5ad65b6419e3ce66a420199c3ed45ca9 ] commit 8db57afd7ab9a3c12d73f1f5461415690b8c173c Author: Julien Meunier Date: Wed Oct 16 13:21:11 2019 +0300 cryptodev: fix checks related to device id [ upstream commit 3dd4435cf473f5d10b99282098821fb40b72380f ] commit 8dec9eab6ac4eca67cb8df2dcdd5a09eaf86bc8e Author: Julien Meunier Date: Wed Aug 7 11:39:23 2019 +0300 cryptodev: fix initialization on multi-process [ upstream commit 1a60db7f354a52add0c1ea66e55ba7beba1a9716 ] 2. [dpdk-stable-18.11.6]Crypto: cryptodev_aesni_mb_autotest. Fail on 18.11.2~18.11.6 with latest configuration. As you can see from that, I don't think the UT were ever really stable and a lot of the stabilisation work came after 18.11. If the maintainers/authors (cc) want to investigate, I can take patches or revert if required. Otherwise, I won't investigate further or block the release on UT fails. thanks, Kevin. 
Regards, Yu Ping -Original Message- From: Kevin Traynor [mailto:ktray...@redhat.com] Sent: Wednesday, December 18, 2019 7:42 PM To: sta...@dpdk.org Cc: dev@dpdk.org; Abhishek Marathe ; Akhil Goyal ; Ali Alnubani ; Walker, Benjamin ; David Christensen ; Hemant Agrawal ; Stokes, Ian ; Jerin Jacob ; Mcnamara, John ; Ju-Hyoung Lee ; Kevin Traynor ; Luca Boccassi ; Pei Zhang ; Yu, PingX ; Xu, Qian Q ; Raslan Darawsheh ; Thomas Monjalon ; Peng, Yuan ; Chen, Zhaoyan Subject: 18.11.6 (LTS) patches review and test Hi all, Here is a list of patches targeted for LTS release 18.11.6. The planned date for the final release is 31st January. Please help with testing and validation of your use cases and report any issues/results with reply-all to this mail. For the final release the fixes and reported validations will be added to the release notes. A release candidate tarball can be found at: https://dpdk.org/browse/dpdk-stable/tag/?id=v18.11.6-rc1 These patches are located at branch 18.11 of dpdk-stable repo: https://dpdk.org/browse/dpdk-stable/ Thanks. Kevin. --- Aaron Conole (1): test/interrupt: account for race with callback Abhishek Sachan (1): net/af_packet: fix stale sockets Adrian Moreno (4): vhost: fix vring memory partially mapped vhost: translate incoming log address to GPA vhost: prevent zero copy mode if IOMMU is on vhost: convert buffer addresses to GPA for logging Ajit Khaparde (9): net/bnxt
[dpdk-dev] [PATCH] cryptodev: fix invalid dev_id after a pmd close
Each cryptodev is indexed with its dev_id in the global rte_crypto_devices variable. nb_devs is incremented / decremented each time a cryptodev is created / deleted. The goal of nb_devs is to prevent the user from getting an invalid dev_id. Let's imagine DPDK has configured N cryptodevs. If cryptodev=1 is removed at runtime, the last cryptodev N cannot be accessed, because nb_devs=N-1. In order to prevent this kind of behavior, let's remove the check with nb_devs and iterate over all the rte_crypto_devices elements: if data is not NULL, that means a valid cryptodev is available. Fixes: d11b0f30df88 ("cryptodev: introduce API and framework for crypto devices") Cc: sta...@dpdk.org Signed-off-by: Julien Meunier --- lib/librte_cryptodev/rte_cryptodev.c | 42 1 file changed, 28 insertions(+), 14 deletions(-) diff --git a/lib/librte_cryptodev/rte_cryptodev.c b/lib/librte_cryptodev/rte_cryptodev.c index 43bc335..c8c7ffd 100644 --- a/lib/librte_cryptodev/rte_cryptodev.c +++ b/lib/librte_cryptodev/rte_cryptodev.c @@ -49,9 +49,7 @@ struct rte_cryptodev *rte_cryptodevs = rte_crypto_devices; static struct rte_cryptodev_global cryptodev_globals = { .devs = rte_crypto_devices, - .data = { NULL }, - .nb_devs= 0, - .max_devs = RTE_CRYPTO_MAX_DEVS + .data = { NULL } }; /* spinlock for crypto device callbacks */ @@ -512,7 +510,7 @@ rte_cryptodev_pmd_get_named_dev(const char *name) if (name == NULL) return NULL; - for (i = 0; i < cryptodev_globals.max_devs; i++) { + for (i = 0; i < RTE_CRYPTO_MAX_DEVS; i++) { dev = &cryptodev_globals.devs[i]; if ((dev->attached == RTE_CRYPTODEV_ATTACHED) && @@ -523,12 +521,21 @@ rte_cryptodev_pmd_get_named_dev(const char *name) return NULL; } +static uint8_t +rte_cryptodev_is_valid_device_data(uint8_t dev_id) +{ + if (rte_crypto_devices[dev_id].data == NULL) + return 0; + + return 1; +} + unsigned int rte_cryptodev_pmd_is_valid_dev(uint8_t dev_id) { struct rte_cryptodev *dev = NULL; - if (dev_id >= cryptodev_globals.nb_devs) + if
(!rte_cryptodev_is_valid_device_data(dev_id)) return 0; dev = rte_cryptodev_pmd_get_dev(dev_id); @@ -547,12 +554,15 @@ rte_cryptodev_get_dev_id(const char *name) if (name == NULL) return -1; - for (i = 0; i < cryptodev_globals.nb_devs; i++) + for (i = 0; i < RTE_CRYPTO_MAX_DEVS; i++) { + if (!rte_cryptodev_is_valid_device_data(i)) + continue; if ((strcmp(cryptodev_globals.devs[i].data->name, name) == 0) && (cryptodev_globals.devs[i].attached == RTE_CRYPTODEV_ATTACHED)) return i; + } return -1; } @@ -560,7 +570,13 @@ rte_cryptodev_get_dev_id(const char *name) uint8_t rte_cryptodev_count(void) { - return cryptodev_globals.nb_devs; + uint8_t i, dev_count = 0; + + for (i = 0; i < RTE_CRYPTO_MAX_DEVS; i++) + if (cryptodev_globals.devs[i].data != NULL) + dev_count++; + + return dev_count; } uint8_t @@ -568,7 +584,7 @@ rte_cryptodev_device_count_by_driver(uint8_t driver_id) { uint8_t i, dev_count = 0; - for (i = 0; i < cryptodev_globals.max_devs; i++) + for (i = 0; i < RTE_CRYPTO_MAX_DEVS; i++) if (cryptodev_globals.devs[i].driver_id == driver_id && cryptodev_globals.devs[i].attached == RTE_CRYPTODEV_ATTACHED) @@ -583,9 +599,10 @@ rte_cryptodev_devices_get(const char *driver_name, uint8_t *devices, { uint8_t i, count = 0; struct rte_cryptodev *devs = cryptodev_globals.devs; - uint8_t max_devs = cryptodev_globals.max_devs; - for (i = 0; i < max_devs && count < nb_devices; i++) { + for (i = 0; i < RTE_CRYPTO_MAX_DEVS && count < nb_devices; i++) { + if (!rte_cryptodev_is_valid_device_data(i)) + continue; if (devs[i].attached == RTE_CRYPTODEV_ATTACHED) { int cmp; @@ -736,8 +753,6 @@ rte_cryptodev_pmd_allocate(const char *name, int socket_id) TAILQ_INIT(&(cryptodev->link_intr_cbs)); cryptodev->attached = RTE_CRYPTODEV_ATTACHED; - - cryptodev_globals.nb_devs++; } return cryptodev; @@ -766,7 +781,6 @@ rte_cryptodev_pmd_release_device(struct rte_cryptodev *cryptodev) return ret; cryptode
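The hole-tolerant lookup the patch moves to can be sketched with a fixed-size slot array: validity comes from the per-slot data pointer, not from a running count, so removing one device leaves the others reachable. `MAX_DEVS` and the helpers below are illustrative stand-ins for `RTE_CRYPTO_MAX_DEVS` and the cryptodev accessors:

```c
#include <assert.h>
#include <stddef.h>

#define MAX_DEVS 8

/* Per-device data pointer: non-NULL means the slot holds a live device.
 * Stands in for rte_crypto_devices[i].data. */
static void *dev_data[MAX_DEVS];

static int is_valid_dev(unsigned int id)
{
    return id < MAX_DEVS && dev_data[id] != NULL;
}

/* Count devices by scanning every slot instead of trusting a counter --
 * robust to holes left by devices removed at runtime. */
static unsigned int dev_count(void)
{
    unsigned int i, n = 0;

    for (i = 0; i < MAX_DEVS; i++)
        if (dev_data[i] != NULL)
            n++;
    return n;
}
```

With the old `dev_id >= nb_devs` check, removing device 1 out of devices {0, 1, 2} would have made device 2 look invalid; with the scan, only the empty slot is rejected.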
[dpdk-dev] [PATCH] cryptodev: fix pmd allocation on multi-process
Primary process is responsible for initializing the data struct of each crypto device. Secondary process should not override this data during the initialization. Fixes: d11b0f30df88 ("cryptodev: introduce API and framework for crypto devices") Cc: sta...@dpdk.org Signed-off-by: Julien Meunier --- lib/librte_cryptodev/rte_cryptodev.c | 12 +++- 1 file changed, 7 insertions(+), 5 deletions(-) diff --git a/lib/librte_cryptodev/rte_cryptodev.c b/lib/librte_cryptodev/rte_cryptodev.c index 43bc335..b16ef7b 100644 --- a/lib/librte_cryptodev/rte_cryptodev.c +++ b/lib/librte_cryptodev/rte_cryptodev.c @@ -725,12 +725,14 @@ rte_cryptodev_pmd_allocate(const char *name, int socket_id) cryptodev->data = *cryptodev_data; - strlcpy(cryptodev->data->name, name, - RTE_CRYPTODEV_NAME_MAX_LEN); + if (rte_eal_process_type() == RTE_PROC_PRIMARY) { + strlcpy(cryptodev->data->name, name, + RTE_CRYPTODEV_NAME_MAX_LEN); - cryptodev->data->dev_id = dev_id; - cryptodev->data->socket_id = socket_id; - cryptodev->data->dev_started = 0; + cryptodev->data->dev_id = dev_id; + cryptodev->data->socket_id = socket_id; + cryptodev->data->dev_started = 0; + } /* init user callbacks */ TAILQ_INIT(&(cryptodev->link_intr_cbs)); -- 2.10.2
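The guard the patch adds — only the primary process writes the shared device data, a secondary attaching later must not clobber runtime state such as `dev_started` — can be sketched with plain structures. The `proc_type` enum and `dev_shared_data` struct are simplified stand-ins for the EAL process type and the cryptodev shared-memory data:

```c
#include <assert.h>
#include <string.h>

enum proc_type { PROC_PRIMARY, PROC_SECONDARY };

struct dev_shared_data {    /* lives in shared memory in DPDK */
    char name[32];
    int dev_id;
    int dev_started;
};

/* Mirrors the fixed rte_cryptodev_pmd_allocate(): only the primary
 * initializes the shared fields; a secondary just attaches. */
static void pmd_allocate(struct dev_shared_data *d, enum proc_type t,
                         const char *name, int dev_id)
{
    if (t == PROC_PRIMARY) {
        strncpy(d->name, name, sizeof(d->name) - 1);
        d->name[sizeof(d->name) - 1] = '\0';
        d->dev_id = dev_id;
        d->dev_started = 0;
    }
}
```

Without the guard, a secondary process starting after the primary had already started the device would reset `dev_started` to 0 in shared memory, corrupting the primary's view.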
[dpdk-dev] [PATCH v2] cryptodev: fix check related to device id
Each cryptodev is indexed with dev_id in the global rte_crypto_devices variable. nb_devs is incremented / decremented each time a cryptodev is created / deleted. The goal of nb_devs was to prevent the user from getting an invalid dev_id. Let's imagine DPDK has configured N cryptodevs. If cryptodev=1 is removed at runtime, the last cryptodev N cannot be accessed, because nb_devs=N-1 with the current implementation. In order to prevent this kind of behavior, let's remove the check with nb_devs and iterate over all the rte_crypto_devices elements: if data is not NULL, that means a valid cryptodev is available. Also, remove the max_devs field and use RTE_CRYPTO_MAX_DEVS in order to unify the code. Fixes: d11b0f30df88 ("cryptodev: introduce API and framework for crypto devices") Cc: sta...@dpdk.org Signed-off-by: Julien Meunier --- v2: * Restore nb_devs * Update headline (check-git-log.sh) * Update commit log lib/librte_cryptodev/rte_cryptodev.c | 30 +- 1 file changed, 21 insertions(+), 9 deletions(-) diff --git a/lib/librte_cryptodev/rte_cryptodev.c b/lib/librte_cryptodev/rte_cryptodev.c index b16ef7b..933c38d 100644 --- a/lib/librte_cryptodev/rte_cryptodev.c +++ b/lib/librte_cryptodev/rte_cryptodev.c @@ -50,8 +50,7 @@ static struct rte_cryptodev_global cryptodev_globals = { .devs = rte_crypto_devices, .data = { NULL }, - .nb_devs= 0, - .max_devs = RTE_CRYPTO_MAX_DEVS + .nb_devs= 0 }; /* spinlock for crypto device callbacks */ @@ -512,7 +511,7 @@ struct rte_cryptodev * if (name == NULL) return NULL; - for (i = 0; i < cryptodev_globals.max_devs; i++) { + for (i = 0; i < RTE_CRYPTO_MAX_DEVS; i++) { dev = &cryptodev_globals.devs[i]; if ((dev->attached == RTE_CRYPTODEV_ATTACHED) && @@ -523,12 +522,21 @@ struct rte_cryptodev * return NULL; } +static uint8_t +rte_cryptodev_is_valid_device_data(uint8_t dev_id) +{ + if (rte_crypto_devices[dev_id].data == NULL) + return 0; + + return 1; +} + unsigned int rte_cryptodev_pmd_is_valid_dev(uint8_t dev_id) { struct rte_cryptodev *dev =
NULL; - if (dev_id >= cryptodev_globals.nb_devs) + if (!rte_cryptodev_is_valid_device_data(dev_id)) return 0; dev = rte_cryptodev_pmd_get_dev(dev_id); @@ -547,12 +555,15 @@ struct rte_cryptodev * if (name == NULL) return -1; - for (i = 0; i < cryptodev_globals.nb_devs; i++) + for (i = 0; i < RTE_CRYPTO_MAX_DEVS; i++) { + if (!rte_cryptodev_is_valid_device_data(i)) + continue; if ((strcmp(cryptodev_globals.devs[i].data->name, name) == 0) && (cryptodev_globals.devs[i].attached == RTE_CRYPTODEV_ATTACHED)) return i; + } return -1; } @@ -568,7 +579,7 @@ struct rte_cryptodev * { uint8_t i, dev_count = 0; - for (i = 0; i < cryptodev_globals.max_devs; i++) + for (i = 0; i < RTE_CRYPTO_MAX_DEVS; i++) if (cryptodev_globals.devs[i].driver_id == driver_id && cryptodev_globals.devs[i].attached == RTE_CRYPTODEV_ATTACHED) @@ -583,9 +594,10 @@ struct rte_cryptodev * { uint8_t i, count = 0; struct rte_cryptodev *devs = cryptodev_globals.devs; - uint8_t max_devs = cryptodev_globals.max_devs; - for (i = 0; i < max_devs && count < nb_devices; i++) { + for (i = 0; i < RTE_CRYPTO_MAX_DEVS && count < nb_devices; i++) { + if (!rte_cryptodev_is_valid_device_data(i)) + continue; if (devs[i].attached == RTE_CRYPTODEV_ATTACHED) { int cmp; @@ -1101,7 +1113,7 @@ struct rte_cryptodev * { struct rte_cryptodev *dev; - if (dev_id >= cryptodev_globals.nb_devs) { + if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) { CDEV_LOG_ERR("Invalid dev_id=%d", dev_id); return; } -- 1.8.3.1
[dpdk-dev] [PATCH v3] cryptodev: fix check related to device id
Each cryptodev is indexed with dev_id in the global rte_crypto_devices variable. nb_devs is incremented / decremented each time a cryptodev is created / deleted. The goal of nb_devs was to prevent the user from getting an invalid dev_id. Let's imagine DPDK has configured N cryptodevs. If cryptodev=1 is removed at runtime, the last cryptodev N cannot be accessed, because nb_devs=N-1 with the current implementation. In order to prevent this kind of behavior, let's remove the check with nb_devs and iterate over all the rte_crypto_devices elements: if data is not NULL, that means a valid cryptodev is available. Also, remove the max_devs field and use RTE_CRYPTO_MAX_DEVS in order to unify the code. Fixes: d11b0f30df88 ("cryptodev: introduce API and framework for crypto devices") Cc: sta...@dpdk.org Signed-off-by: Julien Meunier --- v3: * Set rte_cryptodev_is_valid_device_data as inline * Remove max_devs in rte_cryptodev_global v2: * Restore nb_devs * Update headline (check-git-log.sh) * Update commit log lib/librte_cryptodev/rte_cryptodev.c | 30 +- lib/librte_cryptodev/rte_cryptodev_pmd.h | 1 - 2 files changed, 21 insertions(+), 10 deletions(-) diff --git a/lib/librte_cryptodev/rte_cryptodev.c b/lib/librte_cryptodev/rte_cryptodev.c index b16ef7b..89aa2ed 100644 --- a/lib/librte_cryptodev/rte_cryptodev.c +++ b/lib/librte_cryptodev/rte_cryptodev.c @@ -50,8 +50,7 @@ struct rte_cryptodev *rte_cryptodevs = rte_crypto_devices; static struct rte_cryptodev_global cryptodev_globals = { .devs = rte_crypto_devices, .data = { NULL }, - .nb_devs= 0, - .max_devs = RTE_CRYPTO_MAX_DEVS + .nb_devs= 0 }; /* spinlock for crypto device callbacks */ @@ -512,7 +511,7 @@ rte_cryptodev_pmd_get_named_dev(const char *name) if (name == NULL) return NULL; - for (i = 0; i < cryptodev_globals.max_devs; i++) { + for (i = 0; i < RTE_CRYPTO_MAX_DEVS; i++) { dev = &cryptodev_globals.devs[i]; if ((dev->attached == RTE_CRYPTODEV_ATTACHED) && @@ -523,12 +522,21 @@ rte_cryptodev_pmd_get_named_dev(const char
*name) return NULL; } +static inline uint8_t +rte_cryptodev_is_valid_device_data(uint8_t dev_id) +{ + if (rte_crypto_devices[dev_id].data == NULL) + return 0; + + return 1; +} + unsigned int rte_cryptodev_pmd_is_valid_dev(uint8_t dev_id) { struct rte_cryptodev *dev = NULL; - if (dev_id >= cryptodev_globals.nb_devs) + if (!rte_cryptodev_is_valid_device_data(dev_id)) return 0; dev = rte_cryptodev_pmd_get_dev(dev_id); @@ -547,12 +555,15 @@ rte_cryptodev_get_dev_id(const char *name) if (name == NULL) return -1; - for (i = 0; i < cryptodev_globals.nb_devs; i++) + for (i = 0; i < RTE_CRYPTO_MAX_DEVS; i++) { + if (!rte_cryptodev_is_valid_device_data(i)) + continue; if ((strcmp(cryptodev_globals.devs[i].data->name, name) == 0) && (cryptodev_globals.devs[i].attached == RTE_CRYPTODEV_ATTACHED)) return i; + } return -1; } @@ -568,7 +579,7 @@ rte_cryptodev_device_count_by_driver(uint8_t driver_id) { uint8_t i, dev_count = 0; - for (i = 0; i < cryptodev_globals.max_devs; i++) + for (i = 0; i < RTE_CRYPTO_MAX_DEVS; i++) if (cryptodev_globals.devs[i].driver_id == driver_id && cryptodev_globals.devs[i].attached == RTE_CRYPTODEV_ATTACHED) @@ -583,9 +594,10 @@ rte_cryptodev_devices_get(const char *driver_name, uint8_t *devices, { uint8_t i, count = 0; struct rte_cryptodev *devs = cryptodev_globals.devs; - uint8_t max_devs = cryptodev_globals.max_devs; - for (i = 0; i < max_devs && count < nb_devices; i++) { + for (i = 0; i < RTE_CRYPTO_MAX_DEVS && count < nb_devices; i++) { + if (!rte_cryptodev_is_valid_device_data(i)) + continue; if (devs[i].attached == RTE_CRYPTODEV_ATTACHED) { int cmp; @@ -1101,7 +1113,7 @@ rte_cryptodev_info_get(uint8_t dev_id, struct rte_cryptodev_info *dev_info) { struct rte_cryptodev *dev; - if (dev_id >= cryptodev_globals.nb_devs) { + if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) { CDEV_LOG_ERR("Invalid dev_id=%d", dev_id); return; } diff --git a/lib/librte_cryptodev/rte_cryptodev_pmd.h b/lib/librte_cryptodev/rte_cryptodev_pmd.h index defe05e..fba14f2
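The idea behind the fix can be sketched outside DPDK: validity of a device id is decided per slot (does this slot hold device data?) instead of by comparing against a running device count, so removing a middle device no longer hides the devices behind it. The structure and names below are invented for illustration and are not the real rte_cryptodev types.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define MAX_DEVS 8              /* stand-in for RTE_CRYPTO_MAX_DEVS */

struct toy_dev { void *data; }; /* only the field the check needs */

static struct toy_dev devs[MAX_DEVS];

/* Per-slot check, as after the patch: a slot is valid iff it holds
 * device data, independent of how many devices are registered. */
static int is_valid_dev(uint8_t dev_id)
{
	return dev_id < MAX_DEVS && devs[dev_id].data != NULL;
}

/* Count-based check, as before the patch: any dev_id >= nb_devs is
 * rejected, even if its slot is still populated. */
static int is_valid_dev_by_count(uint8_t dev_id, uint8_t nb_devs)
{
	return dev_id < nb_devs;
}
```

With three devices in slots 0..2, deleting slot 1 drops nb_devs to 2: the count-based check then rejects slot 2 although it still holds a live device, while the per-slot check keeps accepting it.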
[dpdk-dev] [PATCH v2] i40e: fix vlan filtering
VLAN filtering was always performed, even if hw_vlan_filter was disabled. During device initialization, the default filter RTE_MACVLAN_PERFECT_MATCH was applied. In this situation, all incoming VLAN frames were dropped by the card (increase of the RUPP register - Rx Unsupported Protocol).

In order to restore the default behavior: if HW VLAN filtering is activated, set a filter to match MAC and VLAN; if not, set a filter to only match MAC.

Signed-off-by: Julien Meunier
---
Changes since v1:
- use ether_addr_copy() for mac copy
- add more debug messages in case of failure
- update all existing filters when multiple mac addresses have been configured
- when adding a new mac address, use the correct filter

TODO:
- i40e_update_default_filter_setting always forces
  RTE_MACVLAN_PERFECT_MATCH.
  => The type of filter should be changed according to the vlan filter setting.
- What happens if the vlan filter setting changes when various filters are
  already set, like RTE_MAC_PERFECT_MATCH, RTE_MACVLAN_PERFECT_MATCH,
  RTE_MAC_HASH_MATCH, RTE_MACVLAN_HASH_MATCH?
  => With testpmd, it is possible to add these filters manually. But when
  changing the vlan filter setting, all filters previously set manually are
  overridden.
---
 drivers/net/i40e/i40e_ethdev.c | 73 --
 drivers/net/i40e/i40e_ethdev.h |  1 +
 2 files changed, 72 insertions(+), 2 deletions(-)

diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index bf6220d..64d6ada 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -2332,6 +2332,13 @@ i40e_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
 	struct i40e_vsi *vsi = pf->main_vsi;
 
+	if (mask & ETH_VLAN_FILTER_MASK) {
+		if (dev->data->dev_conf.rxmode.hw_vlan_filter)
+			i40e_vsi_config_vlan_filter(vsi, TRUE);
+		else
+			i40e_vsi_config_vlan_filter(vsi, FALSE);
+	}
+
 	if (mask & ETH_VLAN_STRIP_MASK) {
 		/* Enable or disable VLAN stripping */
 		if (dev->data->dev_conf.rxmode.hw_vlan_strip)
@@ -2583,7 +2590,10 @@ i40e_macaddr_add(struct rte_eth_dev *dev,
 	}
 
 	(void)rte_memcpy(&mac_filter.mac_addr, mac_addr, ETHER_ADDR_LEN);
-	mac_filter.filter_type = RTE_MACVLAN_PERFECT_MATCH;
+	if (dev->data->dev_conf.rxmode.hw_vlan_filter)
+		mac_filter.filter_type = RTE_MACVLAN_PERFECT_MATCH;
+	else
+		mac_filter.filter_type = RTE_MAC_PERFECT_MATCH;
 
 	if (pool == 0)
 		vsi = pf->main_vsi;
@@ -4156,6 +4166,63 @@ fail_mem:
 	return NULL;
 }
 
+/* Configure vlan filter on or off */
+int
+i40e_vsi_config_vlan_filter(struct i40e_vsi *vsi, bool on)
+{
+	int i, num;
+	struct i40e_mac_filter *f;
+	struct i40e_mac_filter_info *mac_filter;
+	enum rte_mac_filter_type desired_filter;
+	int ret = I40E_SUCCESS;
+
+	if (on) {
+		/* Filter to match MAC and VLAN */
+		desired_filter = RTE_MACVLAN_PERFECT_MATCH;
+	} else {
+		/* Filter to match only MAC */
+		desired_filter = RTE_MAC_PERFECT_MATCH;
+	}
+
+	num = vsi->mac_num;
+
+	mac_filter = rte_zmalloc("mac_filter_info_data",
+				 num * sizeof(*mac_filter), 0);
+	if (mac_filter == NULL) {
+		PMD_DRV_LOG(ERR, "failed to allocate memory");
+		return I40E_ERR_NO_MEMORY;
+	}
+
+	i = 0;
+
+	/* Remove all existing mac */
+	TAILQ_FOREACH(f, &vsi->mac_list, next) {
+		mac_filter[i] = f->mac_info;
+		ret = i40e_vsi_delete_mac(vsi, &f->mac_info.mac_addr);
+		if (ret) {
+			PMD_DRV_LOG(INFO, "Update VSI failed to %s vlan filter",
+				    on ? "enable" : "disable");
+			goto DONE;
+		}
+		i++;
+	}
+
+	/* Override with new filter */
+	for (i = 0; i < num; i++) {
+		mac_filter[i].filter_type = desired_filter;
+		ret = i40e_vsi_add_mac(vsi, &mac_filter[i]);
+		if (ret) {
+			PMD_DRV_LOG(INFO, "Update VSI failed to %s vlan filter",
+				    on ? "enable" : "disable");
+			goto DONE;
+		}
+	}
+
+DONE:
+	rte_free(mac_filter);
+	return ret;
+}
+
 /* Configure vlan stripping on or off */
 int
 i40e_vsi_config_vlan_stripping(struct i40e_vsi *vsi, bool on)
@@ -4203,9 +4270,11 @@ i40e_dev_init_vlan(struct rte_eth_dev *dev)
 {
 	struct rte_eth_dev_data *data = dev->data;
 	int ret;
+	int mask = 0
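The patch's core maneuver — snapshot every existing MAC filter, remove it, then re-add it with the desired filter type — can be modeled without the driver. The sketch below is a toy under assumed names (`filter`, `reprogram`, `ftype` are all invented); it only demonstrates the snapshot / re-add pattern, not the real i40e admin-queue calls or their failure handling.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical filter types mirroring MAC-only vs MAC+VLAN matching. */
enum ftype { MAC_ONLY, MAC_VLAN };

struct filter {
	enum ftype type;
	int mac;	/* stand-in for a MAC address */
};

/* Re-program every filter in 'tbl' with the desired type: take a
 * snapshot, then overwrite each slot with the snapshotted entry whose
 * type has been switched -- the same remove-then-re-add idea as
 * i40e_vsi_config_vlan_filter() in the patch. */
static int reprogram(struct filter *tbl, int num, enum ftype desired)
{
	struct filter *snap = calloc(num, sizeof(*snap));

	if (snap == NULL)
		return -1;
	memcpy(snap, tbl, num * sizeof(*snap));	/* "delete" = snapshot */

	for (int i = 0; i < num; i++) {
		snap[i].type = desired;	/* override with new filter type */
		tbl[i] = snap[i];	/* "re-add" into the table */
	}

	free(snap);
	return 0;
}
```

The MAC addresses survive the round trip; only the filter type changes, which is exactly what toggling hw_vlan_filter requires.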
[dpdk-dev] [PATCH v2] i40e: fix vlan filtering
Hello,

The INFO log level is used in order to keep code homogeneity: i40e_vsi_config_vlan_stripping and i40e_dev_init_vlan use this log level on failure, for example. Tell me if the ERR log level must be used instead for VLAN filtering failures.

On 02/03/2016 02:15 AM, Zhang, Helin wrote:
>
>> -----Original Message-----
>> From: Julien Meunier [mailto:julien.meunier at 6wind.com]
>> Sent: Tuesday, February 2, 2016 9:51 PM
>> To: Zhang, Helin
>> Cc: dev at dpdk.org
>> Subject: [PATCH v2] i40e: fix vlan filtering
>>
>> VLAN filtering was always performed, even if hw_vlan_filter was disabled.
>> During device initialization, default filter RTE_MACVLAN_PERFECT_MATCH
>> was applied. In this situation, all incoming VLAN frames were dropped by the
>> card (increase of the register RUPP - Rx Unsupported Protocol).
>>
>> In order to restore default behavior, if HW VLAN filtering is activated, set a
>> filter to match MAC and VLAN. If not, set a filter to only match MAC.
>>
>> Signed-off-by: Julien Meunier
>> [...]
[dpdk-dev] [PATCH v3] i40e: fix vlan filtering
VLAN filtering was always performed, even if hw_vlan_filter was disabled. During device initialization, the default filter RTE_MACVLAN_PERFECT_MATCH was applied. In this situation, all incoming VLAN frames were dropped by the card (increase of the RUPP register - Rx Unsupported Protocol).

In order to restore the default behavior: if HW VLAN filtering is activated, set a filter to match MAC and VLAN; if not, set a filter to only match MAC.

Signed-off-by: Julien Meunier
---
Changes since v2:
- switch log level from INFO to ERR in case of failure

Changes since v1:
- use ether_addr_copy() for mac copy
- add more debug messages in case of failure
- update all existing filters when multiple mac addresses have been configured
- when adding a new mac address, use the correct filter

TODO:
- i40e_update_default_filter_setting always forces
  RTE_MACVLAN_PERFECT_MATCH.
  => The type of filter should be changed according to the vlan filter setting.
- What happens if the vlan filter setting changes when various filters are
  already set, like RTE_MAC_PERFECT_MATCH, RTE_MACVLAN_PERFECT_MATCH,
  RTE_MAC_HASH_MATCH, RTE_MACVLAN_HASH_MATCH?
  => With testpmd, it is possible to add these filters manually. But when
  changing the vlan filter setting, all filters previously set manually are
  overridden.
---
 drivers/net/i40e/i40e_ethdev.c | 73 --
 drivers/net/i40e/i40e_ethdev.h |  1 +
 2 files changed, 72 insertions(+), 2 deletions(-)

diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index bf6220d..750206b 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -2332,6 +2332,13 @@ i40e_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
 	struct i40e_vsi *vsi = pf->main_vsi;
 
+	if (mask & ETH_VLAN_FILTER_MASK) {
+		if (dev->data->dev_conf.rxmode.hw_vlan_filter)
+			i40e_vsi_config_vlan_filter(vsi, TRUE);
+		else
+			i40e_vsi_config_vlan_filter(vsi, FALSE);
+	}
+
 	if (mask & ETH_VLAN_STRIP_MASK) {
 		/* Enable or disable VLAN stripping */
 		if (dev->data->dev_conf.rxmode.hw_vlan_strip)
@@ -2583,7 +2590,10 @@ i40e_macaddr_add(struct rte_eth_dev *dev,
 	}
 
 	(void)rte_memcpy(&mac_filter.mac_addr, mac_addr, ETHER_ADDR_LEN);
-	mac_filter.filter_type = RTE_MACVLAN_PERFECT_MATCH;
+	if (dev->data->dev_conf.rxmode.hw_vlan_filter)
+		mac_filter.filter_type = RTE_MACVLAN_PERFECT_MATCH;
+	else
+		mac_filter.filter_type = RTE_MAC_PERFECT_MATCH;
 
 	if (pool == 0)
 		vsi = pf->main_vsi;
@@ -4156,6 +4166,63 @@ fail_mem:
 	return NULL;
 }
 
+/* Configure vlan filter on or off */
+int
+i40e_vsi_config_vlan_filter(struct i40e_vsi *vsi, bool on)
+{
+	int i, num;
+	struct i40e_mac_filter *f;
+	struct i40e_mac_filter_info *mac_filter;
+	enum rte_mac_filter_type desired_filter;
+	int ret = I40E_SUCCESS;
+
+	if (on) {
+		/* Filter to match MAC and VLAN */
+		desired_filter = RTE_MACVLAN_PERFECT_MATCH;
+	} else {
+		/* Filter to match only MAC */
+		desired_filter = RTE_MAC_PERFECT_MATCH;
+	}
+
+	num = vsi->mac_num;
+
+	mac_filter = rte_zmalloc("mac_filter_info_data",
+				 num * sizeof(*mac_filter), 0);
+	if (mac_filter == NULL) {
+		PMD_DRV_LOG(ERR, "failed to allocate memory");
+		return I40E_ERR_NO_MEMORY;
+	}
+
+	i = 0;
+
+	/* Remove all existing mac */
+	TAILQ_FOREACH(f, &vsi->mac_list, next) {
+		mac_filter[i] = f->mac_info;
+		ret = i40e_vsi_delete_mac(vsi, &f->mac_info.mac_addr);
+		if (ret) {
+			PMD_DRV_LOG(ERR, "Update VSI failed to %s vlan filter",
+				    on ? "enable" : "disable");
+			goto DONE;
+		}
+		i++;
+	}
+
+	/* Override with new filter */
+	for (i = 0; i < num; i++) {
+		mac_filter[i].filter_type = desired_filter;
+		ret = i40e_vsi_add_mac(vsi, &mac_filter[i]);
+		if (ret) {
+			PMD_DRV_LOG(ERR, "Update VSI failed to %s vlan filter",
+				    on ? "enable" : "disable");
+			goto DONE;
+		}
+	}
+
+DONE:
+	rte_free(mac_filter);
+	return ret;
+}
+
 /* Configure vlan stripping on or off */
 int
 i40e_vsi_config_vlan_stripping(struct i40e_vsi *vsi, bool on)
@@ -4203,9 +4270,11 @@ i40e_dev_init_vlan(struct rte_eth_dev *dev)
 {
 	struct rte_eth_dev_data
[dpdk-dev] i40e: cannot change mtu to enable jumbo frame
--
+ Accumulated forward statistics for all ports +
RX-packets: 1  RX-dropped: 0  RX-total: 1
TX-packets: 0  TX-dropped: 0  TX-total: 0

Done.

=> Frame correctly received on port 0, but never forwarded or transmitted on port 1.

Will an mtu_set function be developed soon in order to support jumbo frames?

Regards,

--
Julien MEUNIER
6WIND
[dpdk-dev] i40e: cannot change mtu to enable jumbo frame
On 02/09/2016 08:05 PM, Zhu, Heqing wrote:
> Helin is still in Chinese New Year Vacation. Will the below command option help?
>
> 4.5.9. port config - max-pkt-len
> Set the maximum packet length:
>
> testpmd> port config all max-pkt-len (value)
>
> This is equivalent to the --max-pkt-len command-line option.
>
> -----Original Message-----
> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Julien Meunier
> Sent: Tuesday, February 9, 2016 9:36 AM
> To: Zhang, Helin ; dev at dpdk.org
> Subject: [dpdk-dev] i40e: cannot change mtu to enable jumbo frame
>
> Hello Helin,
>
> I tried to send jumbo frames to an i40e card. However, I observed that all
> frames are dropped. Moreover, the set_mtu function is not implemented in the
> i40e PMD.
>
> testpmd --log-level 8 --huge-dir=/mnt/huge -n 4 -l 2,18 --socket-mem
> 1024,1024 -w :02:00.0 -w :02:00.2 -- -i --nb-cores=1
> --nb-ports=2 --total-num-mbufs=65536
>
> =============
> Configuration
> =============
>
> +---+              +-+
> |       |          |        |
> | tgen  |          |        |
> |       +--+ port 0         |
> |       |          |        |
> |       |          |        |
> |       |          |        |
> |       +--+ port 1         |
> |       |          |        |
> +---+              +-+
>
> DPDK: DPDK-v2.2
>
> ==========
> MTU = 1500
> ==========
> Packet sent from a tgen
>
> p = Ether / IP / UDP / Raw(MTU + HDR(Ethernet) - HDR(IP) - HDR(UDP))
> len(p) = 1514
>
> testpmd> start
> PMD: i40e_rxd_to_vlan_tci(): Mbuf vlan_tci: 0, vlan_tci_outer: 0
> testpmd> stop
> Telling cores to stop...
> Waiting for lcores to finish...
>
> PMD: i40e_update_vsi_stats(): *** VSI[13] stats start ***
> PMD: i40e_update_vsi_stats(): rx_bytes: 1518
> PMD: i40e_update_vsi_stats(): rx_unicast: 1
> PMD: i40e_update_vsi_stats(): *** VSI[13] stats end ***
> PMD: i40e_dev_stats_get(): *** PF stats start ***
> PMD: i40e_dev_stats_get(): rx_bytes: 1514
> PMD: i40e_dev_stats_get(): rx_unicast: 1
> PMD: i40e_dev_stats_get(): rx_unknown_protocol: 1
> PMD: i40e_dev_stats_get(): rx_size_1522: 1
> PMD: i40e_dev_stats_get(): *** PF stats end ***
>
> -- Forward statistics for port 0 --
> RX-packets: 1  RX-dropped: 0  RX-total: 1
> TX-packets: 0  TX-dropped: 0  TX-total: 0
>
> PMD: i40e_update_vsi_stats(): *** VSI[14] stats start ***
> PMD: i40e_update_vsi_stats(): tx_bytes: 1514
> PMD: i40e_update_vsi_stats(): tx_unicast: 1
> PMD: i40e_update_vsi_stats(): *** VSI[14] stats end ***
> PMD: i40e_dev_stats_get(): *** PF stats start ***
> PMD: i40e_dev_stats_get(): tx_bytes: 1514
> PMD: i40e_dev_stats_get(): tx_unicast: 1
> PMD: i40e_dev_stats_get(): tx_size_1522: 1
> PMD: i40e_dev_stats_get(): *** PF stats end ***
>
> -- Forward statistics for port 1 --
> RX-packets: 0  RX-dropped: 0  RX-total: 0
> TX-packets: 1  TX-dropped: 0  TX-total: 1
>
> + Accumulated forward statistics for all ports +
> RX-packets: 1  RX-dropped: 0  RX-total: 1
> TX-packets: 1  TX-dropped: 0  TX-total: 1
>
> => OK
>
> ==========
> MTU = 1600
> ==========
> Packet sent
>
> p = Ether / IP / UDP / Raw(MTU + HDR(Ethernet) - HDR(IP) - HDR(UDP))
> len(p) = 1614
>
> testpmd> port config mtu 0 1600
> rte_eth_dev_set_mtu: Function not supported
> Set MTU failed. diag=-95
> testpmd> port config mtu 1 1600
> rte_eth_dev_set_mtu: Function not supported
> Set MTU failed. diag=-95
> testpmd> start
> testpmd> stop
> Telling cores to stop...
> Waiting for lcores to finish...
>
> PMD: i40e_update_vsi_stats(): *** VSI[13] stats start ***
> PMD: i40e_update_vsi_stats(): rx_bytes: 1618
> PMD: i40e_update_vsi_stats(): rx_unicast: 1
> PMD: i40e_update_vsi_stats(): *** VSI[13] stats end ***
> PMD: i40e_dev_stats_get(): *** PF stats start ***
> PMD: i40e_dev_stats_get(): rx_bytes: 1614
> PMD: i40e_dev_stats_get(): rx_unicast: 1
> PMD: i40e_dev_stats_get(): rx_unknown_protocol: 1
> PMD: i
[dpdk-dev] i40e: cannot change mtu to enable jumbo frame
On 02/10/2016 04:20 PM, Zhang, Helin wrote:
>
>> -----Original Message-----
>> From: Julien Meunier [mailto:julien.meunier at 6wind.com]
>> Sent: Wednesday, February 10, 2016 12:36 AM
>> To: Zhang, Helin ; dev at dpdk.org
>> Subject: i40e: cannot change mtu to enable jumbo frame
>> [...]
>> Will an mtu_set function be developed soon in order to support jumbo frames?
> Yes, we will implement it soon.
> Could you help to try with max_pkt_len in port config for now?
>
> Regards,
> Helin

Hi,

When I stop the ports, change max_pkt_len and restart the ports, jumbo frames are accepted. I was able to forward 9k frames on an i40e card.

I wrote a quick and dirty patch in order to add minimal MTU support for my test. I did not carefully study the impacts... Please advise.

---
i40e: add support of mtu configuration

Add support of MTU configuration. Ports are stopped and then started in order to force re-initialization of RX queues.

NOTE: This patch is still experimental.

Signed-off-by: Julien Meunier
---
 drivers/net/i40e/i40e_ethdev.c | 33 +
 1 file changed, 33 insertions(+)

diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 750206b..b4d6912 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -296,6 +296,7 @@ static int i40e_dev_queue_stats_mapping_set(struct rte_eth_dev *dev,
 					    uint8_t is_rx);
 static void i40e_dev_info_get(struct rte_eth_dev *dev,
 			      struct rte_eth_dev_info *dev_info);
+static int i40e_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu);
 static int i40e_vlan_filter_set(struct rte_eth_dev *dev,
 				uint16_t vlan_id,
 				int on);
@@ -439,6 +440,7 @@ static const struct eth_dev_ops i40e_eth_dev_ops = {
 	.xstats_reset                 = i40e_dev_stats_reset,
 	.queue_stats_mapping_set      = i40e_dev_queue_stats_mapping_set,
 	.dev_infos_get                = i40e_dev_info_get,
+	.mtu_set                      = i40e_dev_mtu_set,
 	.vlan_filter_set              = i40e_vlan_filter_set,
 	.vlan_tpid_set                = i40e_vlan_tpid_set,
 	.vlan_offload_set             = i40e_vlan_offload_set,
@@ -4681,6 +4683,37 @@ i40e_dev_rxtx_init(struct i40e_pf *pf)
 }
 
 static int
+i40e_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
+{
+	struct rte_eth_dev_info dev_info;
+	struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	uint32_t frame_size = mtu + ETHER_HDR_LEN + ETHER_CRC_LEN;
+
+	i40e_dev_info_get(dev, &dev_info);
+
+	if ((frame_size < ETHER_MIN_MTU) || (frame_size > dev_info.max_rx_pktlen)) {
+		PMD_DRV_LOG(ERR, "Invalid MTU\n");
+		return I40E_ERR_PARAM;
+	}
+
+	i40e_dev_stop(dev);
+	hw->adapter_stopped = 1;
+
+	/* switch to jumbo mode if needed */
+	if (frame_size > ETHER_MAX_LEN)
+		dev->data->dev_conf.rxmode.jumbo_frame = 1;
+	else
+		dev->data->dev_conf.rxmode.jumbo_frame = 0;
+
+	dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
+
+	i40e_dev_start(dev);
+	hw->adapter_stopped = 0;
+
+	return 0;
+}
+
+static int
 i40e_vmdq_setup(struct rte_eth_dev *dev)
 {
 	struct rte_eth_conf *conf = &dev->data->dev_conf;
--

Regards,

--
Julien MEUNIER
6WIND
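The MTU-to-frame-size arithmetic the experimental patch relies on can be checked in isolation. The constants below are the conventional Ethernet values (14-byte header, 4-byte CRC, 1518-byte standard maximum frame), stated here as assumptions rather than taken from the DPDK headers; the helper names are invented for the sketch.

```c
#include <assert.h>
#include <stdint.h>

/* Conventional Ethernet sizes, assumed here for illustration. */
#define ETHER_HDR_LEN  14
#define ETHER_CRC_LEN   4
#define ETHER_MAX_LEN  1518
#define ETHER_MIN_MTU  68

/* On-wire frame size for a given MTU, as computed in the quoted patch:
 * MTU covers the L3 payload, so add L2 header and CRC. */
static uint32_t frame_size_for_mtu(uint16_t mtu)
{
	return (uint32_t)mtu + ETHER_HDR_LEN + ETHER_CRC_LEN;
}

/* Jumbo mode is needed once the frame exceeds the standard maximum --
 * the same threshold the patch uses to toggle rxmode.jumbo_frame. */
static int needs_jumbo(uint16_t mtu)
{
	return frame_size_for_mtu(mtu) > ETHER_MAX_LEN;
}
```

This matches the testpmd session above: an MTU of 1500 yields a 1514-byte packet (1518 with CRC), which fits without jumbo mode, while an MTU of 1600 yields a 1618-byte frame on the wire and requires it.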