Re: [dpdk-dev] [dpdk-users] DPDK16.11.2 LTS: i40e PMD fails to start X710 port
Hi all,

I'm reposting the following bug report on the dev mailing list. I was wondering whether this is a known issue or not.

In our application code, the only way to overcome the problem is to introduce a delay after each rte_eth_dev_stop/start call when using X710 cards. However, this seems like a hacky temporary fix as it increases the "start-up" time unnecessarily.

Thanks,
Dumitru

On Wed, Jul 19, 2017 at 3:49 PM, Dumitru Ceara wrote:
> Hi all,
>
> With DPDK 16.11.2 LTS we see "intermittent" errors when trying to
> start an X710 interface (latest firmware - FW 5.05).
>
> Due to the fact that the i40e PMD requires the port to be stopped when
> setting MTU we use the following sequence in our code:
> 1. rte_eth_dev_stop
> 2. rte_eth_dev_set_mtu
> 3. rte_eth_dev_start
>
> If rte_eth_dev_start is called shortly after stop (as is in our case),
> i40e_phy_conf_link sometimes fails and returns -ENOTSUP. This
> happens because i40e_aq_get_phy_capabilities returns status
> I40E_ERR_UNKNOWN_PHY. I double-checked and the transceivers we use are
> Intel FTLX8571D3BCV-IT.
>
> Moreover, if we introduce a delay (e.g., 10 seconds) after
> rte_eth_dev_stop then rte_eth_dev_start works fine.
>
> In order to eliminate potential issues with our code I tried to
> replicate with test-pmd with "--disable-link-check" in order to avoid
> the delay introduced by link state checking:
>
> # $RTE_SDK/tools/dpdk-devbind.py -s
>
> Network devices using DPDK-compatible driver
> 0000:82:00.0 'Ethernet Controller X710 for 10GbE SFP+' drv=igb_uio unused=
> 0000:82:00.1 'Ethernet Controller X710 for 10GbE SFP+' drv=igb_uio unused=
>
> # $RTE_SDK/x86_64-native-linuxapp-gcc/build/app/test-pmd/testpmd -w
> 0000:82:00.0 -w 0000:82:00.1 -- --disable-link-check -i
> EAL: Detected 32 lcore(s)
> EAL: Probing VFIO support...
> EAL: PCI device 0000:82:00.0 on NUMA socket 1
> EAL:   probe driver: 8086:1572 net_i40e
> PMD: eth_i40e_dev_init(): FW 5.0 API 1.5 NVM 05.00.05 eetrack 8000289d
> EAL: PCI device 0000:82:00.1 on NUMA socket 1
> EAL:   probe driver: 8086:1572 net_i40e
> PMD: eth_i40e_dev_init(): FW 5.0 API 1.5 NVM 05.00.05 eetrack 8000289d
> Interactive-mode selected
> USER1: create a new mbuf pool : n=395456, size=2176, socket=0
> Configuring Port 0 (socket 0)
> Port 0: 3C:FD:FE:9C:79:F0
> Configuring Port 1 (socket 0)
> Port 1: 3C:FD:FE:9C:79:F1
> Done
> testpmd> port stop 1
> Stopping ports...
> Done
> testpmd> port start 1
> Port 1: 3C:FD:FE:9C:79:F1
> Done
> testpmd> port stop 1
> Stopping ports...
> Done
> testpmd> port start 1
> Fail to start port 1
> Please stop the ports first     <<<<<<< Here we fail to stop port 1
> Done
>
> I saw there were some more reports of similar I40E_ERR_UNKNOWN_PHY
> issues but I'm not sure if they were related to the device stop/start
> sequence. Is this a known limitation of the driver/firmware? Is the
> application supposed to deal with this specific behavior of the X710?
> If so, how would an application detect that the rte_eth_dev_stop
> operation has completed? Or are we missing some hardware-specific
> initialization?
>
> Thanks,
> Dumitru

--
Dumitru Ceara
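For reference, the stop / set-MTU / start sequence described in the report looks roughly like the sketch below. It only illustrates the workaround being discussed (retrying the start for a bounded time instead of sleeping a fixed 10 seconds); the helper name, the retry count and the per-attempt delay are assumptions, not values from the report, and the port id type differs across DPDK versions (uint8_t in 16.11, uint16_t later).

#include <stdint.h>

#include <rte_ethdev.h>
#include <rte_cycles.h>

/*
 * Hypothetical helper sketching the sequence from the report:
 * stop -> set_mtu -> start, retrying the start for a bounded time instead
 * of sleeping a fixed 10 seconds after rte_eth_dev_stop().
 * The retry count and per-attempt delay are made-up illustration values.
 */
static int
set_mtu_and_restart(uint16_t port_id, uint16_t mtu)
{
	int attempts = 20;	/* assumption: up to ~10s total wait */
	int ret;

	/* rte_eth_dev_stop() returns void up to and including DPDK 20.08. */
	rte_eth_dev_stop(port_id);

	ret = rte_eth_dev_set_mtu(port_id, mtu);
	if (ret != 0)
		return ret;

	/*
	 * On X710 (i40e), starting right after stop sometimes fails because
	 * i40e_aq_get_phy_capabilities() reports I40E_ERR_UNKNOWN_PHY, so
	 * retry with a short delay between attempts.
	 */
	for (;;) {
		ret = rte_eth_dev_start(port_id);
		if (ret == 0 || --attempts == 0)
			return ret;
		rte_delay_ms(500);
	}
}

A bounded retry keeps the common case fast (the first attempt usually succeeds) while still capping the worst-case start-up time, which is the trade-off the report is asking about.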
Re: [dpdk-dev] [PATCH] net/ring: advertise multi segment support.
On 9/22/20 4:21 PM, Ferruh Yigit wrote:
> On 9/18/2020 11:36 AM, Dumitru Ceara wrote:
>> Even though ring interfaces don't support any other TX/RX offloads they
>> do support sending multi segment packets and this should be advertised
>> in order to not break applications that use ring interfaces.
>>
>
> Does ring PMD support sending multi segmented packets?
>

Yes, sending multi segmented packets works fine with ring PMD.

> As far as I can see ring PMD doesn't know about the mbuf segments.
>

Right, the PMD doesn't care about the mbuf segments but it implicitly
supports sending multi segmented packets. From what I see this is actually
the case for most of the PMDs, in the sense that most don't even check the
DEV_TX_OFFLOAD_MULTI_SEGS flag, and if the application sends multi segment
packets they are just accepted.

However, the fact that the ring PMD doesn't advertise this implicit support
forces applications that use the ring PMD to special-case ring interfaces.
If the ring PMD advertised DEV_TX_OFFLOAD_MULTI_SEGS, upper layers could be
oblivious to the type of the underlying interface.

Thanks,
Dumitru
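As an illustration of the point about upper layers relying on capability flags rather than on the port type, a minimal sketch (the helper name is made up, and rte_eth_dev_info_get() returning int assumes DPDK 19.11 or newer):

#include <stdbool.h>
#include <stdint.h>

#include <rte_ethdev.h>

/*
 * Sketch of a capability-based check: instead of special-casing ring ports
 * by driver name, an upper layer only looks at the advertised TX offloads.
 */
static bool
port_supports_multi_seg_tx(uint16_t port_id)
{
	struct rte_eth_dev_info dev_info;

	if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
		return false;

	return (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MULTI_SEGS) != 0;
}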
Re: [dpdk-dev] [PATCH] net/ring: advertise multi segment support.
On 9/28/20 3:26 PM, Ferruh Yigit wrote:
> On 9/28/2020 2:10 PM, Ananyev, Konstantin wrote:
>>
>>
>>> -----Original Message-----
>>> From: Ferruh Yigit
>>> Sent: Monday, September 28, 2020 1:43 PM
>>> To: Ananyev, Konstantin ; Dumitru Ceara ; dev@dpdk.org
>>> Cc: Richardson, Bruce
>>> Subject: Re: [dpdk-dev] [PATCH] net/ring: advertise multi segment support.
>>>
>>> On 9/28/2020 12:00 PM, Ananyev, Konstantin wrote:
>>>>> On 9/28/2020 8:31 AM, Dumitru Ceara wrote:
>>>>>> On 9/22/20 4:21 PM, Ferruh Yigit wrote:
>>>>>>> On 9/18/2020 11:36 AM, Dumitru Ceara wrote:
>>>>>>>> Even though ring interfaces don't support any other TX/RX offloads
>>>>>>>> they do support sending multi segment packets and this should be
>>>>>>>> advertised in order to not break applications that use ring
>>>>>>>> interfaces.
>>>>>>>>
>>>>>>>
>>>>>>> Does ring PMD support sending multi segmented packets?
>>>>>>>
>>>>>>
>>>>>> Yes, sending multi segmented packets works fine with ring PMD.
>>>>>>
>>>>>
>>>>> Define "works fine" :)
>>>>>
>>>>> All PMDs can put the first mbuf of the chained mbuf to the ring, in
>>>>> that case what is the difference between the ones supports
>>>>> 'DEV_TX_OFFLOAD_MULTI_SEGS' and the ones doesn't support?
>>>>>
>>>>> If the traffic is only from ring PMD to ring PMD, you won't recognize
>>>>> the difference between segmented or not-segmented mbufs, and it will
>>>>> look like segmented packets works fine.
>>>>> But if there is other PMDs involved in the forwarding, or if need to
>>>>> process the packets, will it still work fine?
>>>>>
>>>>>>> As far as I can see ring PMD doesn't know about the mbuf segments.
>>>>>>>
>>>>>>
>>>>>> Right, the PMD doesn't care about the mbuf segments but it implicitly
>>>>>> supports sending multi segmented packets. From what I see it's
>>>>>> actually the case for most of the PMDs, in the sense that most don't
>>>>>> even check the DEV_TX_OFFLOAD_MULTI_SEGS flag and if the application
>>>>>> sends multi segment packets they are just accepted.
>>>>>
>>>>> As far as I can see, if the segmented packets sent, the ring PMD will
>>>>> put the first mbuf into the ring without doing anything specific to
>>>>> the next segments.
>>>>>
>>>>> If the 'DEV_TX_OFFLOAD_MULTI_SEGS' is supported I expect it should
>>>>> detect the segmented packets and put each chained mbuf into the
>>>>> separate field in the ring.
>>>>
>>>> Hmm, wonder why do you think this is necessary?
>>>> From my perspective current behaviour is sufficient for TX-ing
>>>> multi-seg packets over the ring.
>>>>
>>>
>>> I was thinking based on what some PMDs already doing, but right ring may
>>> not need to do it.
>>>
>>> Also for the case, one application is sending multi segmented packets to
>>> the ring, and other application pulling packets from the ring and
>>> sending to a PMD that does NOT support the multi-seg TX. I thought ring
>>> PMD claiming the multi-seg Tx support should serialize packets to
>>> support this case, but instead ring claiming 'DEV_RX_OFFLOAD_SCATTER'
>>> capability can work by pushing the responsibility to the application.
>>>
>>> So in this case ring should support both 'DEV_TX_OFFLOAD_MULTI_SEGS' &
>>> 'DEV_RX_OFFLOAD_SCATTER', what do you think?
>>
>> Seems so...
>> Another question - should we allow DEV_TX_OFFLOAD_MULTI_SEGS here,
>> if DEV_RX_OFFLOAD_SCATTER was not specified?
>>
>
> I think better to have a new version of the patch to claim both
> capabilities together.
>

OK, I can do that and send a v2 to claim both caps together.

Just so that it's clear to me though, these capabilities will only be
advertised and the current behavior of the ring PMD at tx/rx will remain
unchanged, right?

Thanks,
Dumitru

>>
>>>
>>>>>
>>>>>> However, the fact that the ring PMD doesn't advertise this implicit
>>>>>> support forces applications that use ring PMD to have a special case
>>>>>> for handling ring interfaces. If the ring PMD would advertise
>>>>>> DEV_TX_OFFLOAD_MULTI_SEGS this would allow upper layers to be
>>>>>> oblivious to the type of underlying interface.
>>>>>>
>>>>>
>>>>> This is not handling the special case for the ring PMD, this is why we
>>>>> have the offload capability flag. Application should behave according
>>>>> to capability flags, not per specific PMD.
>>>>>
>>>>> Is there any specific usecase you are trying to cover?
>>
>
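As a side note on the "push the responsibility to the application" idea above, here is a rough sketch of what the application in the middle could do when the egress port does not advertise multi segment TX support. The function and variable names and the burst size are illustrative, and rte_pktmbuf_linearize() is used only as one possible way to collapse a chain; it is not something the ring PMD itself would do.

#include <stdbool.h>
#include <stdint.h>

#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST 32	/* assumption: illustrative burst size */

/*
 * Illustrative forwarding step: packets pulled from a ring port may be
 * multi-segment; if the egress port did not advertise multi-seg TX support,
 * collapse each chain into a single segment before transmitting.
 */
static void
forward_from_ring(uint16_t ring_port, uint16_t out_port, bool out_multi_seg_ok)
{
	struct rte_mbuf *pkts[BURST];
	uint16_t nb_rx, nb_keep = 0, nb_tx, i;

	nb_rx = rte_eth_rx_burst(ring_port, 0, pkts, BURST);

	for (i = 0; i < nb_rx; i++) {
		if (!out_multi_seg_ok && pkts[i]->nb_segs > 1 &&
		    rte_pktmbuf_linearize(pkts[i]) < 0) {
			/* Not enough tailroom to collapse the chain; drop. */
			rte_pktmbuf_free(pkts[i]);
			continue;
		}
		pkts[nb_keep++] = pkts[i];
	}

	nb_tx = rte_eth_tx_burst(out_port, 0, pkts, nb_keep);
	for (i = nb_tx; i < nb_keep; i++)
		rte_pktmbuf_free(pkts[i]);
}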
[dpdk-dev] [PATCH v2] net/ring: advertise multi segment TX and scatter RX.
Even though ring interfaces don't support any other TX/RX offloads they
do support sending multi segment packets and this should be advertised
in order to not break applications that use ring interfaces.

Also advertise scatter RX support.

Signed-off-by: Dumitru Ceara
---
 drivers/net/ring/rte_eth_ring.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/net/ring/rte_eth_ring.c b/drivers/net/ring/rte_eth_ring.c
index 40fe1ca..ac1ce1d 100644
--- a/drivers/net/ring/rte_eth_ring.c
+++ b/drivers/net/ring/rte_eth_ring.c
@@ -160,6 +160,8 @@ struct pmd_internals {
 	dev_info->max_mac_addrs = 1;
 	dev_info->max_rx_pktlen = (uint32_t)-1;
 	dev_info->max_rx_queues = (uint16_t)internals->max_rx_queues;
+	dev_info->rx_offload_capa = DEV_RX_OFFLOAD_SCATTER;
+	dev_info->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS;
 	dev_info->max_tx_queues = (uint16_t)internals->max_tx_queues;
 	dev_info->min_rx_bufsize = 0;
--
1.8.3.1
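For completeness, a sketch of how a generic application could consume the two capabilities advertised above when configuring a port. The helper name, queue counts and the assumption that the application wants both offloads are illustrative, and rte_eth_dev_info_get() returning int assumes DPDK 19.11 or newer; per the discussion, the ring PMD's actual TX/RX behavior is unchanged by the patch.

#include <stdint.h>

#include <rte_ethdev.h>

/*
 * Illustrative configuration step: request the scatter RX and multi-seg TX
 * offloads only if the port (ring or otherwise) advertises them, without
 * special-casing the driver.
 */
static int
configure_port(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
{
	struct rte_eth_dev_info dev_info;
	struct rte_eth_conf conf = { 0 };
	int ret;

	ret = rte_eth_dev_info_get(port_id, &dev_info);
	if (ret != 0)
		return ret;

	if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_SCATTER)
		conf.rxmode.offloads |= DEV_RX_OFFLOAD_SCATTER;
	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MULTI_SEGS)
		conf.txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;

	return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
}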
[dpdk-dev] [PATCH] net/ring: advertise multi segment support.
Even though ring interfaces don't support any other TX/RX offloads they
do support sending multi segment packets and this should be advertised
in order to not break applications that use ring interfaces.

Signed-off-by: Dumitru Ceara
---
 drivers/net/ring/rte_eth_ring.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/net/ring/rte_eth_ring.c b/drivers/net/ring/rte_eth_ring.c
index 733c898..59d1e67 100644
--- a/drivers/net/ring/rte_eth_ring.c
+++ b/drivers/net/ring/rte_eth_ring.c
@@ -160,6 +160,7 @@ struct pmd_internals {
 	dev_info->max_mac_addrs = 1;
 	dev_info->max_rx_pktlen = (uint32_t)-1;
 	dev_info->max_rx_queues = (uint16_t)internals->max_rx_queues;
+	dev_info->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS;
 	dev_info->max_tx_queues = (uint16_t)internals->max_tx_queues;
 	dev_info->min_rx_bufsize = 0;
--
1.8.3.1