Hi,

I am running experiments on a packet classification algorithm, and I found
that I always get RX errors when the throughput is too high, so I ran the
following tests.

There are two PC servers (A and B), each with an Intel 82599ES 10G NIC that
has two ports (1 and 2), and they are connected to each other. I simply run
pktgen on both servers.

When I start only one server and let it generate 10Gb/s of traffic on each
port (i.e., start A1 and A2), I receive the full 10Gb/s on every port of the
other server.

When I start one port on each of the two servers and let them generate
10Gb/s toward each other (i.e., start A1 and B1), both ports show that they
can send and receive 10Gb/s.

But when I start both ports on both servers (i.e., start A1, A2, B1, and
B2), each port shows it can generate 10Gb/s but only receives 6.7Gb/s; the
remaining 3.3Gb/s is counted as RX errors.

When I stop one of the ports (i.e., only A1, B1, and B2 are running), server
A receives 8.1Gb/s on A1 and 8.4Gb/s on A2 while A1 is sending 10Gb/s; the
missing traffic is counted as RX errors. On server B, B1 receives 10Gb/s
while both B1 and B2 send 10Gb/s.

My parameters are -c 0xff -n 4 -- -P -m "[1:2].0, [3:4].1", and the result
does not change when I assign more lcores to the RX queues.
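To show what I mean by assigning more lcores to RX, the mapping I tried
looked roughly like the sketch below (the exact core numbers and the binary
path are just an illustration, assuming Pktgen-DPDK's lcore-range syntax for
the -m option):

  # lcores 1-2 on RX / lcore 3 on TX for port 0,
  # lcores 4-5 on RX / lcore 6 on TX for port 1,
  # lcore 0 left for the pktgen display and timers
  ./pktgen -c 0xff -n 4 -- -P -m "[1-2:3].0, [4-5:6].1"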

I'm thinking it may be a parameter problem, such as the hugepage
configuration or something else. Is there any solution or advice?
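For reference, this is roughly how I check and set up the hugepage side (the
page count and the --socket-mem sizes are just an assumption for a two-socket
machine, not what I claim is required):

  # check current hugepage allocation
  grep Huge /proc/meminfo
  # reserve 2MB hugepages (per-NUMA-node paths under
  # /sys/devices/system/node/ can be used instead)
  echo 2048 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
  # pass explicit per-socket memory to the EAL before the -- separator
  ./pktgen -c 0xff -n 4 --socket-mem 1024,1024 -- -P -m "[1:2].0, [3:4].1"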
