Re: [vpp-dev] Using TrafficGen with rdma driver

2023-01-06 Thread rtox
Hi Ben, I do not intend to use SR-IOV VFs. Is it possible to use the rdma plugin directly with the PF NICs? With the config I shared, the NICs appear to come up in VPP (show interface) without any warnings etc. Btw. testing on vanilla VPP 22.10, Ubuntu 20.04 Server. Thanks
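For reference, a minimal sketch of creating an rdma interface directly on a PF from the VPP CLI (the host interface name enp75s0f0 is a placeholder, not taken from this thread):

```
vpp# create interface rdma host-if enp75s0f0 name rdma-0
vpp# set interface state rdma-0 up
vpp# show interface
```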

[vpp-dev] Using TrafficGen with rdma driver

2023-01-06 Thread rtox
Hey VPP community, anyone out there using a TRex setup with the rdma driver? The setup works just fine on the legacy DPDK drivers, as documented here: https://fd.io/docs/vpp/v2101/usecases/simpleperf/trex.html Once switching over to rdma (as advised for Mellanox cards) I get the TRex warning that "Failed
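For context, the documented DPDK-based setup binds the NICs to VPP in the dpdk section of startup.conf; a minimal sketch along those lines (the PCI addresses are placeholders and may not match the poster's VPP-side NICs):

```
dpdk {
  dev 0000:4b:00.0
  dev 0000:4b:00.1
}
```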

Re: [vpp-dev] Slow VPP performance vs. DPDK l2fwd / l3fwd

2023-01-03 Thread rtox
Hi Matt, thanks. The *no-multi-seg* option actually drops the performance even more: once enabled, throughput falls from 5 Mpps (out of the expected 10 Mpps) to less than 1 Mpps, so I disabled the option again. The DPDK applications forward the full 10 Mpps without any dev-args: > > ./dpdk-l
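In case it helps anyone reproducing this, no-multi-seg is a flag in the dpdk section of VPP's startup.conf; a minimal sketch (PCI addresses are placeholders):

```
dpdk {
  dev 0000:4b:00.0
  dev 0000:4b:00.1
  # disable multi-segment buffers; this is the option being toggled above
  no-multi-seg
}
```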

Re: [vpp-dev] Slow VPP performance vs. DPDK l2fwd / l3fwd

2023-01-03 Thread rtox
Hi @Benoit, yes I can confirm the NIC and the VPP worker are on the same NUMA node 0. I am also using the same core-id for the benchmark comparison against plain DPDK l2fwd/l3fwd.
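A quick sketch of how the NIC/worker NUMA alignment can be checked (the PCI addresses are placeholders; the worker itself would be pinned via the cpu { main-core ... corelist-workers ... } section of startup.conf):

```
# NUMA node of each NIC (should match the worker core's node)
cat /sys/bus/pci/devices/0000:4b:00.0/numa_node
cat /sys/bus/pci/devices/0000:4b:00.1/numa_node
# which CPU cores belong to which NUMA node
lscpu | grep NUMA
```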

Re: [vpp-dev] Slow VPP performance vs. DPDK l2fwd / l3fwd

2022-12-30 Thread rtox
```
### L2fwd config based on MAC ###
- version: 2
  interfaces: ['4b:00.0', '4b:00.1']
  port_info:
    - dest_mac: b8:ce:f6:dc:xx:xx
      src_mac:  b8:ce:f6:dc:xx:xx
    - dest_mac: b8:ce:f6:dc:xx:xx
      src_mac:  b8:ce:f6:dc:xx:xx
  platform:
    master_thread_id: 0
    latency_thread_id: 1
    dual_if:
      - socket: 0
        threads: [2
```
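If it is useful, a config like this is typically consumed by TRex roughly as follows (a sketch; the config path and core count are assumptions, not from this thread):

```
./t-rex-64 -i -c 2 --cfg /etc/trex_cfg.yaml
```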

Re: [vpp-dev] Slow VPP performance vs. DPDK l2fwd / l3fwd

2022-12-30 Thread rtox
Adding also the TRex config:
```
### Config file generated by dpdk_setup_ports.py ###
- version: 2
  interfaces: ['4b:00.0', '4b:00.1']
  port_info:
    - dest_mac: b8:ce:f6:dc:e1:f0
      src_mac:  b8:ce:f6:dc:e1:e8
    - dest_mac: b8:ce:f6:dc:e1:f1
      src_mac:  b8:ce:f6:dc:e1:e9
  platform:
    master_thread_id: 0
    latency_thr
```
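For completeness, such a file is usually produced with TRex's setup script, roughly like this (a sketch; exact flags can differ between TRex versions):

```
# interactive config creation
./dpdk_setup_ports.py -i
# or non-interactively from the two PCI addresses
./dpdk_setup_ports.py -c 4b:00.0 4b:00.1 -o /etc/trex_cfg.yaml
```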

[vpp-dev] Slow VPP performance vs. DPDK l2fwd / l3fwd

2022-12-30 Thread rtox
Hi VPP team, I need to wrap my head around why VPP is not even able to process 10 Mpps in a single-core setup. Afaik VPP xconnect (comparable to l2fwd) or L3 routing (l3fwd) should yield 10 Mpps even back in 2017 (slide 15, https://wiki.fd.io/images/3/31/Benchmarking-sw-data-planes-Dec5_
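For reference, the xconnect (l2fwd-equivalent) case mentioned above is typically set up from the VPP CLI like this (a minimal sketch; the interface names are placeholders):

```
vpp# set interface state eth0 up
vpp# set interface state eth1 up
vpp# set interface l2 xconnect eth0 eth1
vpp# set interface l2 xconnect eth1 eth0
```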