Hi,

I've seen some odd behaviour in my test setup which affected my results, so I put together a much simpler scenario. I'm using netmap pkt-gen as the packet source; it generates a steady 14.2 Mpps of 64-byte UDP packets over one port of an 82599ES dual-port card. This traffic goes to another similar machine with the same dual-port NIC, where it gets forwarded out on the other port. The packet sink (also netmap pkt-gen) runs on the same machine as the generator, and it reports a large throughput fluctuation between 13 and 14 Mpps, averaging around 13.4 Mpps.

After stripping my test app down to nothing but calling the rx and tx burst functions in a loop (it doesn't even modify the MAC addresses as DPDK l2fwd does), I started checking what rte_eth_tx_burst() reports. I add its return value to a counter, which a separate thread prints out every second (sleep(1)), and it shows a steady 14.05 Mpps output (a rough sketch of this loop is at the end of this mail). I've also checked with rte_eth_stats_get(); it gives me the same numbers and no indication of any failure.

When I connected the generator to the sink directly, the sink received all the packets, so it's not that the sink can't keep up with counting them. I even swapped the two cables with each other to rule out the one towards the sink dropping packets, but nothing changed.

My impression was that once rte_eth_tx_burst() has placed the packets on the descriptor ring, they will go out in some finite time, or, if the card itself drops them, that will at least show up in the stats; however, the oerrors and q_errors values are always 0. Does anyone have an idea where those packets (avg 0.6 Mpps) could get dropped?
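
In case it helps, here is roughly what the stripped-down forwarding loop and the counting thread look like. This is a simplified sketch rather than the exact code: RX_PORT/TX_PORT, BURST_SIZE and the queue numbers are placeholders, and the EAL/port initialisation is omitted.

    #include <stdio.h>
    #include <stdint.h>
    #include <inttypes.h>
    #include <unistd.h>

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    #define BURST_SIZE 32
    /* Placeholder port numbers: rx on port 0, tx on port 1 in my setup. */
    #define RX_PORT 0
    #define TX_PORT 1

    /* Packets accepted by rte_eth_tx_burst(); not atomic, which only
     * affects the printed per-second rate, not the totals in question. */
    static volatile uint64_t tx_count;

    /* Forwarding lcore: rx burst in, tx burst out, no packet modification. */
    static int fwd_loop(void *arg)
    {
        struct rte_mbuf *bufs[BURST_SIZE];
        uint16_t nb_rx, nb_tx, i;

        (void)arg;
        for (;;) {
            nb_rx = rte_eth_rx_burst(RX_PORT, 0, bufs, BURST_SIZE);
            if (nb_rx == 0)
                continue;

            nb_tx = rte_eth_tx_burst(TX_PORT, 0, bufs, nb_rx);
            tx_count += nb_tx;

            /* Free whatever the tx descriptor ring didn't accept. */
            for (i = nb_tx; i < nb_rx; i++)
                rte_pktmbuf_free(bufs[i]);
        }
        return 0;
    }

    /* Stats lcore: once a second, print the tx_burst rate and NIC counters. */
    static int stats_loop(void *arg)
    {
        uint64_t prev = 0, cur;
        struct rte_eth_stats stats;

        (void)arg;
        for (;;) {
            sleep(1);
            cur = tx_count;
            rte_eth_stats_get(TX_PORT, &stats);
            printf("tx_burst: %"PRIu64" pps, opackets=%"PRIu64
                   ", oerrors=%"PRIu64"\n",
                   cur - prev, stats.opackets, stats.oerrors);
            prev = cur;
        }
        return 0;
    }

The two functions run on separate lcores (launched with rte_eal_remote_launch()). It's the steady ~14.05 Mpps from this counter versus the ~13.4 Mpps seen by the sink that I can't account for.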
Regards, Zoltan Kiss