Hello,
I opened a ticket with Mellanox in parallel.
We are trying to figure out why it does not work on the MCX4 but does work
on the MCX3, even though the MCX3 is not officially supported and neither
are jumbo frames.
In case you want to check, the case ID is 00392710.
Regards,
On 2017-10-11 02:59 PM, Yongseok Koh wrote:
On Thu, Sep 28, 2017 at 08:44:26PM +0000, Wiles, Keith wrote:
On Sep 26, 2017, at 8:09 AM, Damien Clabaut <damien.clab...@corp.ovh.com> wrote:
Hello Keith and thank you for your answer,
The goal is indeed to generate as much traffic per machine as possible (we use
pktgen-dpdk to benchmark datacenter routers before putting them into production).
For this we use all available CPU power to send packets.
Following your suggestion, I modified my command to:
./app/app/x86_64-native-linuxapp-gcc/pktgen -l 0-7 -- -m 1.0 -s 0:pcap/8500Bpp.pcap
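(For anyone reproducing this setup: a single-packet pcap like the one referenced
above could be produced with libpcap along these lines. This is only a sketch;
the broadcast MAC, EtherType, and zero payload are placeholders, not the
contents of Damien's actual file:

    #include <pcap/pcap.h>
    #include <string.h>

    int main(void)
    {
        static unsigned char pkt[8500];   /* full 8500-byte frame, zeroed */
        memset(pkt, 0xff, 6);             /* placeholder dst MAC: broadcast */
        pkt[12] = 0x08; pkt[13] = 0x00;   /* placeholder EtherType: IPv4 */

        pcap_t *p = pcap_open_dead(DLT_EN10MB, 65535);
        pcap_dumper_t *d = pcap_dump_open(p, "pcap/8500Bpp.pcap");
        if (d == NULL)
            return 1;

        struct pcap_pkthdr hdr = {0};
        hdr.caplen = hdr.len = sizeof(pkt);
        pcap_dump((u_char *)d, &hdr, pkt);

        pcap_dump_close(d);
        pcap_close(p);
        return 0;
    }

Compile with -lpcap.)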
I just noticed you are sending 8500-byte frames; you have to modify Pktgen
to increase the size of the mbufs in the mempool. I only configure the mbufs
as 1518-byte buffers (2048 bytes allocated, really), and Pktgen only deals
with a 1518-byte maximum frame size. The size can be changed, but I am not
next to a machine right now.
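(A minimal sketch of what that change could look like using the public DPDK
mempool API; the 9216-byte ceiling, cache size, and helper name here are
assumptions for illustration, not Pktgen's actual code:

    #include <rte_mbuf.h>
    #include <rte_mempool.h>

    #define JUMBO_FRAME_SIZE 9216  /* assumed ceiling; covers 8500-byte frames */

    /* Build a pool whose mbufs hold a whole jumbo frame in one segment,
     * instead of the default ~2KB buffers. */
    static struct rte_mempool *
    jumbo_pool_create(const char *name, unsigned int nb_mbufs, int socket_id)
    {
        uint16_t data_room = RTE_PKTMBUF_HEADROOM + JUMBO_FRAME_SIZE;

        return rte_pktmbuf_pool_create(name, nb_mbufs,
                                       256,  /* per-lcore cache */
                                       0,    /* private area size */
                                       data_room, socket_id);
    }

On the receive side the port's maximum frame length has to be raised as well;
the exact rxmode fields for that vary by DPDK release.)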
Hi Damien,
Did you manage to resolve this issue? Keith mentioned that pktgen doesn't support
jumbo frames without modifying the code. Do you still have an issue with the
Mellanox NIC and its PMD? Please let me know.
Thanks
Yongseok
--
Damien Clabaut
R&D vRouter
ovh.qc.ca