Hi,
I've been using DPDK pktgen 2.8.0 (built against DPDK 1.8.0 libraries) to send
traffic on a server using an Intel 82599 (X520-2). Traffic gets sent out port 1
through another server which also has an Intel 82599 installed and is forwarded
back into port 0. When I send using a single source and destination IP address,
this works fine and packets arrive on port 0 at close to the maximum line rate.
If I change port 1 to range mode and send traffic from a range of source IP
addresses to a single destination IP address, for a second or two the display
indicates that some packets were received on port 0 but then the rate of
received packets on the display goes to 0 and all incoming packets on port 0
are registered as rx errors.
The server that traffic is being forwarded through is running the ip_pipeline
example app. I ruled this out as the source of the problem by sending directly
from port 1 to port 0 of the pktgen box. The issue still occurs when the
traffic is not being forwarded through the other box. Since ip_pipeline is able
to receive the packets and forward them without getting rx errors, and it's
running with the same model of NIC as pktgen is using, I checked to see if
there were any differences in initialization of the rx port between ip_pipeline
and pktgen. I noticed that pktgen has a setting that ip_pipeline doesn't:
const struct rte_eth_conf port_conf = {
    .rxmode = {
        .mq_mode = ETH_MQ_RX_RSS,
If I comment out the .mq_mode setting and rebuild pktgen, the problem no longer
occurs and I now receive packets on port 0 at near line rate when testing from
a range of source addresses.
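Roughly, the change I tested amounts to the following (not the exact pktgen
diff, just a sketch; ETH_MQ_RX_NONE is the default value when .mq_mode is left
unset, so it should be equivalent to commenting the line out):

    const struct rte_eth_conf port_conf = {
        .rxmode = {
            .mq_mode = ETH_MQ_RX_NONE, /* was ETH_MQ_RX_RSS */
        },
    };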
I recall reading in the past that if a receive queue fills up on an 82599,
that receiving stalls for all of the other queues and no more packets can be
received. Could that be happening with pktgen? Is there any debugging I can do
to help track it down?
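For my own debugging I was thinking of adding something like the following to
dump per-queue RX counters while the test runs, to see whether one RSS queue
stops receiving while the others keep counting (rough sketch only;
dump_rx_queue_stats is just a helper name I made up, and nb_rx_queues would be
whatever the port was actually configured with):

    #include <inttypes.h>
    #include <stdio.h>
    #include <rte_ethdev.h>

    /* Hypothetical helper, not part of pktgen: print per-port and per-queue
     * RX counters so a single stalled RSS queue would stand out. */
    static void
    dump_rx_queue_stats(uint8_t port_id, uint16_t nb_rx_queues)
    {
        struct rte_eth_stats stats;
        uint16_t q;

        rte_eth_stats_get(port_id, &stats);
        printf("port %u: ipackets=%" PRIu64 " ierrors=%" PRIu64
               " rx_nombuf=%" PRIu64 "\n",
               port_id, stats.ipackets, stats.ierrors, stats.rx_nombuf);

        /* Per-queue counters are only kept for the first
         * RTE_ETHDEV_QUEUE_STAT_CNTRS queues. */
        for (q = 0; q < nb_rx_queues && q < RTE_ETHDEV_QUEUE_STAT_CNTRS; q++)
            printf("  rxq %u: packets=%" PRIu64 " errors=%" PRIu64 "\n",
                   q, stats.q_ipackets[q], stats.q_errors[q]);
    }

Would something along those lines be useful, or is there a better way to see
what the individual queues are doing?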
The command line I have been launching pktgen with is:
pktgen -c f -n 3 -m 512 -- -p 0x3 -P -m 1.0,2.1
Thanks,
-Matt Smith