[dpdk-dev] dpdk shared lib build
Hi,

Building a DPDK app as a shared library (.so) is, I believe, supported, but I can't find any documentation for it. I tried using the same Makefile format as for building an app, placing the following includes:

rte.vars.mk -> before everything
rte.extshared.mk ---> after everything

The first make spits the following error:

INSTALL-SHARED
cp: missing destination file operand after '/home/mydpdklib/src/build/lib'
Try 'cp --help' for more information.
make: *** [/home/linc/src/erlpmd.git/src/build/lib] Error 1

A second make does nothing, and lib does not contain any .so file. Additionally I added LIB = mylib.so.

Could somebody please tell me if I'm doing the right thing?

Thanks,
Pepe
-- 
To stop learning is like to stop loving.
[dpdk-dev] Surprisingly high TCP ACK packets drop counter
Hi Alexander,

Regarding your following statement -- "The only drop counter quickly increasing in the case of pure ACK flood is ierrors, while rx_nombuf remains zero."

Can you please explain the significance of the "ierrors" counter, since I am not familiar with it?

Further, you said you have 4 queues; how many cores are you using for polling the queues? Hopefully 4 cores, one per queue, without locks. [It is absolutely critical that all 4 queues be polled.]

Further, is it possible for your application itself to report the traffic received, in packets per second, on each queue? [Don't try to forward the traffic here; simply receive and drop in your app and sample the counters every second.]

Regards
-Prashant

-----Original Message-----
From: dev [mailto:dev-boun...@dpdk.org] On Behalf Of Alexander Belyakov
Sent: Friday, November 01, 2013 7:13 PM
To: dev at dpdk.org
Subject: [dpdk-dev] Surprisingly high TCP ACK packets drop counter

Hello,

we have a simple test application on top of DPDK whose sole purpose is to forward as many packets as possible. Generally we easily achieve 14.5Mpps with two 82599EB (one as input and one as output). The only surprising exception is forwarding pure TCP ACK flood, when performance always drops to approximately 7Mpps.

For simplicity consider two different types of traffic:
1) TCP SYN flood is forwarded at 14.5Mpps rate,
2) pure TCP ACK flood is forwarded only at 7Mpps rate.

Both SYN and ACK packets have exactly the same length.

It is worth mentioning that this forwarding application looks at Ethernet and IP headers, but never deals with L4 headers.

We tracked the issue down to the RX circuit. To be specific, there are 4 RX queues initialized on the input port, and rte_eth_stats_get() shows uniform packet distribution (q_ipackets) among them, while q_errors remain zero for all queues. The only drop counter quickly increasing in the case of pure ACK flood is ierrors, while rx_nombuf remains zero.
We tried different kinds of traffic generators, but always got the same result: 7Mpps (instead of the expected 14Mpps) for TCP packets with the ACK flag bit set while all other flag bits are cleared. Source IPs and ports are selected randomly.

Please let us know if anyone is aware of such strange behavior and where we should look to narrow down the problem.

Thanks in advance,
Alexander Belyakov

===
Please refer to http://www.aricent.com/legal/email_disclaimer.html for important disclosures regarding this electronic communication.
===
[dpdk-dev] Surprisingly high TCP ACK packets drop counter
Hi,

I have used DPDK 1.4 and DPDK 1.5, and the packets do fan out nicely on the RX queues in some use cases I have. Alexander, can you please try using DPDK 1.4 or 1.5 and share the results?

Regards
-Prashant

-----Original Message-----
From: dev [mailto:dev-boun...@dpdk.org] On Behalf Of Wang, Shawn
Sent: Friday, November 01, 2013 8:24 PM
To: Alexander Belyakov
Cc: dev at dpdk.org
Subject: Re: [dpdk-dev] Surprisingly high TCP ACK packets drop counter

Hi:

We had the same problem before. It turned out that RSC (receive side coalescing) is enabled by default in DPDK, so we wrote this naïve patch to disable it. This patch is based on DPDK 1.3; not sure whether 1.5 has changed it or not. After this patch, the ACK rate should go back to 14.5Mpps. For details, you can refer to the Intel® 82599 10 GbE Controller Datasheet (7.11 Receive Side Coalescing).

From: xingbow
Date: Wed, 21 Aug 2013 11:35:23 -0700
Subject: [PATCH] Disable RSC in ixgbe_dev_rx_init function in file ixgbe_rxtx.c

---
 DPDK/lib/librte_pmd_ixgbe/ixgbe/ixgbe_type.h | 2 +-
 DPDK/lib/librte_pmd_ixgbe/ixgbe_rxtx.c       | 7 +++++++
 2 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/DPDK/lib/librte_pmd_ixgbe/ixgbe/ixgbe_type.h b/DPDK/lib/librte_pmd_ixgbe/ixgbe/ixgbe_type.h
index 7fffd60..f03046f 100644
--- a/DPDK/lib/librte_pmd_ixgbe/ixgbe/ixgbe_type.h
+++ b/DPDK/lib/librte_pmd_ixgbe/ixgbe/ixgbe_type.h
@@ -1930,7 +1930,7 @@ enum {
 #define IXGBE_RFCTL_ISCSI_DIS		0x0001
 #define IXGBE_RFCTL_ISCSI_DWC_MASK	0x003E
 #define IXGBE_RFCTL_ISCSI_DWC_SHIFT	1
-#define IXGBE_RFCTL_RSC_DIS		0x0010
+#define IXGBE_RFCTL_RSC_DIS		0x0020
 #define IXGBE_RFCTL_NFSW_DIS		0x0040
 #define IXGBE_RFCTL_NFSR_DIS		0x0080
 #define IXGBE_RFCTL_NFS_VER_MASK	0x0300
diff --git a/DPDK/lib/librte_pmd_ixgbe/ixgbe_rxtx.c b/DPDK/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
index 07830b7..ba6e05d 100755
--- a/DPDK/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
+++ b/DPDK/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
@@ -3007,6 +3007,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
 	uint64_t bus_addr;
 	uint32_t rxctrl;
 	uint32_t fctrl;
+	uint32_t rfctl;
 	uint32_t hlreg0;
 	uint32_t maxfrs;
 	uint32_t srrctl;
@@ -3033,6 +3034,12 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
 	fctrl |= IXGBE_FCTRL_PMCF;
 	IXGBE_WRITE_REG(hw, IXGBE_FCTRL, fctrl);
 
+	/* Disable RSC */
+	RTE_LOG(INFO, PMD, "Disable RSC\n");
+	rfctl = IXGBE_READ_REG(hw, IXGBE_RFCTL);
+	rfctl |= IXGBE_RFCTL_RSC_DIS;
+	IXGBE_WRITE_REG(hw, IXGBE_RFCTL, rfctl);
+
 	/*
 	 * Configure CRC stripping, if any.
 	 */
--

Thanks.
Wang, Xingbo

On 11/1/13 6:43 AM, "Alexander Belyakov" wrote:

>Hello,
>
>we have a simple test application on top of DPDK whose sole purpose is to
>forward as many packets as possible. Generally we easily achieve
>14.5Mpps with two 82599EB (one as input and one as output). The only
>surprising exception is forwarding pure TCP ACK flood, when performance
>always drops to approximately 7Mpps.
>
>For simplicity consider two different types of traffic:
>1) TCP SYN flood is forwarded at 14.5Mpps rate,
>2) pure TCP ACK flood is forwarded only at 7Mpps rate.
>
>Both SYN and ACK packets have exactly the same length.
>
>It is worth mentioning that this forwarding application looks at Ethernet
>and IP headers, but never deals with L4 headers.
>
>We tracked the issue down to the RX circuit. To be specific, there are 4 RX
>queues initialized on the input port, and rte_eth_stats_get() shows uniform
>packet distribution (q_ipackets) among them, while q_errors remain zero
>for all queues. The only drop counter quickly increasing in the case of
>pure ACK flood is ierrors, while rx_nombuf remains zero.
>
>We tried different kinds of traffic generators, but always got the same
>result: 7Mpps (instead of the expected 14Mpps) for TCP packets with the ACK
>flag bit set while all other flag bits are cleared. Source IPs and ports
>are selected randomly.
>
>Please let us know if anyone is aware of such strange behavior and
>where we should look to narrow down the problem.
>
>Thanks in advance,
>Alexander Belyakov
[dpdk-dev] dpdk shared lib build
Hi,

02/11/2013 07:24, Jose Gavine Cueto:
> Building dpdk app. as a shared library (.so) I believe is supported but I
> can't find any documentation for it. I tried doing the same Makefile
> format as building an app, and that is placed the following:
>
> rte.vars.mk -> before everything
> rte.extshared.mk ---> after everything
>
> At first make, spits the following error:
>
> INSTALL-SHARED
> cp: missing destination file operand after '/home/mydpdklib/src/build/lib'
> Try 'cp --help' for more information.
> make: *** [/home/linc/src/erlpmd.git/src/build/lib] Error 1
>
> Second make, will do nothing, and lib does not contain any .so file.
> Additionally I added LIB = mylib.so

I think you should define the SHARED variable instead of LIB. Please confirm that it works.

By the way, this is the old way of building a shared library. It should be deprecated and replaced by the new option RTE_BUILD_SHARED_LIB. Could you also try the new method, using rte.extlib.mk with the option CONFIG_RTE_BUILD_SHARED_LIB=y?

Thank you
-- 
Thomas
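For reference, a minimal external shared-library Makefile along the lines of Thomas's first suggestion might look like the sketch below. This is not taken from the thread: the source file name mylib.c is hypothetical, and it assumes RTE_SDK and RTE_TARGET are set in the environment as for any external DPDK build with the old mk system.

```Makefile
# Sketch of an old-style external shared-library Makefile (assumptions:
# mylib.c is a placeholder source file; RTE_SDK/RTE_TARGET are exported).

include $(RTE_SDK)/mk/rte.vars.mk

# Name the output via SHARED, not LIB (LIB is for static libraries)
SHARED = mylib.so

SRCS-y := mylib.c

CFLAGS += -O3 -fPIC

include $(RTE_SDK)/mk/rte.extshared.mk
```

The symptom in the original mail (cp with a missing destination operand) is consistent with the install rule expanding an empty output name, which is what happens when SHARED is left undefined and only LIB is set.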
[dpdk-dev] Debugging igbvf_pmd
Hi,

We are developing an app over DPDK. In one scenario with SR-IOV, with one of the VFs mapped to a VM and DPDK running on the VM, we see that the packets are not coming out on the wire, but I get the following debug logs for every packet transmitted. We are getting the same format of packets on the wire in a different scenario, so IMO the Virtual Function ports are set up properly. Any idea how this can be debugged further? The NIC card we are using is

PMD: eth_igb_xmit_pkts(): port_id=3 queue_id=0 pktlen=60 tx_first=14 tx_last=14
PMD: eth_igb_xmit_pkts(): port_id=3 queue_id=0 tx_tail=15 nb_tx=1

Regards,
Sambath
[dpdk-dev] Debugging igbvf_pmd
Sorry, pressed the send button too soon. The NIC card we are using is:

Intel Corporation 82576 Virtual Function (rev 01)

Do we now need to do low-level NIC/CPU debugging, or is there some issue in the software that could cause the packet to be dropped after this log message?

PMD: eth_igb_xmit_pkts(): port_id=3 queue_id=0 pktlen=60 tx_first=14 tx_last=14
PMD: eth_igb_xmit_pkts(): port_id=3 queue_id=0 tx_tail=15 nb_tx=1

Thanks,
Sambath

On Sat, Nov 2, 2013 at 3:54 PM, Sambath Kumar Balasubramanian <sambath.balasubramanian at gmail.com> wrote:

> Hi,
>
> We are developing an App over DPDK and in one scenario with SR-IOV with
> one of the VFs mapped to a VM and DPDK running on the VM, we see that the
> packets are not coming on the wire but I get the following debug logs for
> every packet transmitted. We are getting the same format of packets on the
> wire in a different scenario so IMO the Virtual Function ports are set up
> properly. Any idea how this can be debugged further. The NIC card we are
> using is
>
> PMD: eth_igb_xmit_pkts(): port_id=3 queue_id=0 pktlen=60 tx_first=14
> tx_last=14
>
> PMD: eth_igb_xmit_pkts(): port_id=3 queue_id=0 tx_tail=15 nb_tx=1
>
> Regards,
> Sambath