What arguments did you pass to l2fwd and l3fwd when you started them up?
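
(The useful details there are the EAL core/device arguments and the application
arguments after the "--" separator. Purely as an illustration, with made-up core
numbers and PCI addresses, I mean something like:

    ./dpdk-l2fwd -l 1-2 -n 4 -a 0000:4b:00.0 -a 0000:4b:00.1 -- -p 0x3
    ./dpdk-l3fwd -l 1-2 -n 4 -a 0000:4b:00.0 -a 0000:4b:00.1 -- -p 0x3 --config="(0,0,1),(1,0,2)"

That makes it possible to compare core placement and queue setup against your
VPP configuration.)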

Your NIC statistics show RX misses due to a lack of available buffers:

HundredGigabitEthernet4b/0/0       1     up   HundredGigabitEthernet4b/0/0
[...]
      rx_out_of_buffer                                 86941
HundredGigabitEthernet4b/0/1       2     up   HundredGigabitEthernet4b/0/1
[...]
      rx_out_of_buffer                                155395
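
(I assume those counters came from "show hardware-interfaces"; either way, you
can re-check them around a test run with something like:

    vppctl show hardware-interfaces HundredGigabitEthernet4b/0/0

and watch whether rx_out_of_buffer keeps climbing under load.)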

I don't know if those errors are enough to cause the difference in your
throughput measurements between l2fwd/l3fwd and VPP, but it would probably
be worthwhile to increase buffers-per-numa and repeat your tests.

I have heard advice in the past that performance with Mellanox DPDK PMDs
can be improved by setting the no-multi-seg option. I don't know whether
that is still true, and I never compared performance with that option set
versus without, so I'm not sure how much it would help. But it may be worth
trying.
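
For reference, no-multi-seg goes in the dpdk section of startup.conf, e.g.
(the PCI addresses here are guesses based on your interface names):

    dpdk {
      dev 0000:4b:00.0
      dev 0000:4b:00.1
      no-multi-seg
    }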

Thanks,
-Matt


On Tue, Jan 3, 2023 at 8:21 AM <r...@gmx.net> wrote:

> Hi @Benoit, yes, I can confirm the NIC and the VPP worker are on the same
> NUMA node (node 0). I am also using the same core ID for the benchmark
> comparison against plain DPDK l2fwd/l3fwd.
> 
>
>