Hello,

I have recently found a case of significant performance degradation in our application (built on top of DPDK, of course). Surprisingly, a similar issue is easily reproduced with the default testpmd.
To reproduce the case we need a simple IPv4 UDP flood with variable UDP payload size. By "packet length" below I mean: Eth header length (14 bytes) + IPv4 header length (20 bytes) + UDP header length (8 bytes) + UDP payload length (variable) + CRC (4 bytes). Source IP addresses and ports are selected randomly for each packet.

I have used DPDK revisions 1.6.0r2 and 1.7.1; both show the same issue. Follow the "Quick start" guide (http://dpdk.org/doc/quick-start) to build and run testpmd, then enable forwarding (the "start" command).

The table below shows the measured forwarding performance depending on packet length:

No. -- UDP payload length (bytes) -- Packet length (bytes) -- Forwarding performance (Mpps) -- Expected theoretical performance (Mpps)
1. 0 -- 64 -- 14.8 -- 14.88
2. 34 -- 80 -- 12.4 -- 12.5
3. 35 -- 81 -- 6.2 -- 12.38 (!)
4. 40 -- 86 -- 6.6 -- 11.79
5. 49 -- 95 -- 7.6 -- 10.87
6. 50 -- 96 -- 10.7 -- 10.78 (!)
7. 60 -- 106 -- 9.4 -- 9.92

At row 3 we have added just 1 byte of UDP payload (compared to the previous row) and forwarding performance is halved: 6.2 Mpps against the expected theoretical maximum of 12.38 Mpps for this packet size. That is the issue. The significant degradation persists up to 50 bytes of UDP payload (96 bytes packet length), where performance jumps back to the theoretical maximum (see the P.S. below for how the theoretical numbers are derived). What is happening between 80 and 96 bytes packet length?

The issue is stable and 100% reproducible. At this point I am not sure whether it is a DPDK or a NIC issue. These tests have been performed on an Intel(R) Eth Svr Bypass Adapter X520-LR2 (X520LR2BP).

Is anyone aware of such strange behavior?

Regards,
Alexander Belyakov
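P.S. For completeness, the "expected theoretical performance" column is just the 10GbE line rate divided by the on-wire frame size. Below is a minimal sketch of that calculation; the 8-byte preamble/SFD, the 12-byte inter-frame gap, and the 64-byte minimum frame size are standard Ethernet framing constants, not measurements from the table.

    #include <stdio.h>

    int main(void)
    {
        /* UDP payload sizes from the table above */
        unsigned payloads[] = { 0, 34, 35, 40, 49, 50, 60 };
        unsigned i;

        for (i = 0; i < sizeof(payloads) / sizeof(payloads[0]); i++) {
            /* Eth (14) + IPv4 (20) + UDP (8) + payload + CRC (4) */
            unsigned pkt_len = 14 + 20 + 8 + payloads[i] + 4;
            if (pkt_len < 64)
                pkt_len = 64; /* padded to the minimum Ethernet frame size */
            /* on the wire each frame also carries an 8-byte preamble/SFD
             * and a 12-byte inter-frame gap */
            double mpps = 10e9 / ((pkt_len + 8 + 12) * 8) / 1e6;
            printf("payload %3u -> packet %3u bytes -> %.2f Mpps\n",
                   payloads[i], pkt_len, mpps);
        }
        return 0;
    }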