Hello Keith, and thank you for your answer,

The goal is indeed to generate as much traffic per machine as possible (we use pktgen-dpdk to benchmark datacenter routers before putting them into production).

For this, we use all available CPU power to send packets.

Following your suggestion, I modified my command to:

./app/app/x86_64-native-linuxapp-gcc/pktgen -l 0-7 -- -m 1.0 -s 0:pcap/8500Bpp.pcap

The issue is still reproduced, though with slightly lower performance (reaching line rate at 8500 bytes per packet does not require much processing power):

#sh int et 29/1 | i rate
  5 seconds input rate 36.2 Gbps (90.5% with framing overhead), 0 packets/sec
  5 seconds output rate 56 bps (0.0% with framing overhead), 0 packets/sec

Regards,

PS: Sorry for replying to you directly; I am sending this message a second time to the ML.


On 2017-09-25 09:46 PM, Wiles, Keith wrote:
On Sep 25, 2017, at 6:19 PM, Damien Clabaut <damien.clab...@corp.ovh.com> wrote:

Hello DPDK devs,

I am sending this message here as I did not find a bug tracker on the website.

If this is the wrong place, I apologize and kindly ask you to redirect me to the proper one.

Thank you.

Description of the issue:

I am using pktgen-dpdk to replay a pcap file containing exactly 1 packet.

The packet in question is generated using this Scapy command:

pkt=(Ether(src="ec:0d:9a:37:d1:ab",dst="7c:fe:90:31:0d:52")/Dot1Q(vlan=2)/IP(dst="192.168.0.254")/UDP(sport=1020,dport=1021)/Raw(RandBin(size=8500)))
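
For reference, here is a minimal sketch of how the pcap is written out (assuming Scapy's wrpcap and the output path used in the pktgen command below):

from scapy.all import Ether, Dot1Q, IP, UDP, Raw, RandBin, wrpcap

# Build the single test packet: VLAN-tagged UDP with an 8500-byte random payload
pkt = (Ether(src="ec:0d:9a:37:d1:ab", dst="7c:fe:90:31:0d:52")
       / Dot1Q(vlan=2)
       / IP(dst="192.168.0.254")
       / UDP(sport=1020, dport=1021)
       / Raw(RandBin(size=8500)))

# Write it to the pcap that pktgen-dpdk replays (path assumed from the command below)
wrpcap("pcap/8500Bpp.pcap", [pkt])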

The pcap is then replayed in pktgen-dpdk:

./app/app/x86_64-native-linuxapp-gcc/pktgen -l 0-7 -- -m [1-7].0 -s 0:pcap/8500Bpp.pcap

This could be the issue, though I cannot set up your system here to check (no cards). The pktgen command line uses cores 1-7 for TX/RX of packets, which tells pktgen to send the pcap from each of those cores, so the packet is transmitted from every core. If you set the TX/RX core mapping to 1.0, you should only see one. I assume you are using cores 1-7 to push the bit rate closer to the performance of the card.
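
To make the difference explicit, here are the two mappings side by side (commands taken from this thread):

# cores 1-7 all handle RX/TX for port 0 -> the pcap is replayed from each of the 7 cores
./app/app/x86_64-native-linuxapp-gcc/pktgen -l 0-7 -- -m [1-7].0 -s 0:pcap/8500Bpp.pcap

# core 1 alone handles RX/TX for port 0 -> a single transmit stream
./app/app/x86_64-native-linuxapp-gcc/pktgen -l 0-7 -- -m 1.0 -s 0:pcap/8500Bpp.pcap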

When I run this on a machine with a Mellanox ConnectX-4 NIC (MCX4), the switch towards which I generate traffic exhibits strange behaviour:

#sh int et29/1 | i rate
   5 seconds input rate 39.4 Gbps (98.4% with framing overhead), 0 packets/sec

A capture of this traffic (I used a monitor session to redirect all traffic to a different port, connected to a machine on which I ran tcpdump) gives me this:

19:04:50.210792 00:00:00:00:00:00 (oui Ethernet) > 00:00:00:00:00:00 (oui Ethernet) Null Unnumbered, ef, Flags [Poll], length 1500
19:04:50.210795 00:00:00:00:00:00 (oui Ethernet) > 00:00:00:00:00:00 (oui Ethernet) Null Unnumbered, ef, Flags [Poll], length 1500
19:04:50.210796 00:00:00:00:00:00 (oui Ethernet) > 00:00:00:00:00:00 (oui Ethernet) Null Unnumbered, ef, Flags [Poll], length 1500
19:04:50.210797 00:00:00:00:00:00 (oui Ethernet) > 00:00:00:00:00:00 (oui Ethernet) Null Unnumbered, ef, Flags [Poll], length 1500
19:04:50.210799 00:00:00:00:00:00 (oui Ethernet) > 00:00:00:00:00:00 (oui Ethernet) Null Unnumbered, ef, Flags [Poll], length 1500

The issue cannot be reproduced if any of the following conditions is met:

- The size in Raw(RandBin()) is set to a value lower than 1500.

- The packet is sent from a Mellanox ConnectX-3 (MCX3) NIC (both machines are identical in terms of software).

Is this a known problem?

I remain available for any questions you may have.

Regards,

--
Damien Clabaut
R&D vRouter
ovh.qc.ca

Regards,
Keith


--
Damien Clabaut
R&D vRouter
ovh.qc.ca
