Which VPP version are you using in your testing? As of VPP 22.06, the linux-cp and linux-nl plugins are supported upstream, and binary builds are available from the FD.io repository (https://s3-docs.fd.io/vpp/22.10/gettingstarted/installing/ubuntu.html).
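For reference, installing from that repository on Ubuntu is roughly the following (along the lines of the linked guide; exact package names can vary per release, and I believe the linux-cp/linux-nl plugins ship in the plugin packages):

  # Add the FD.io release repository (see the linked install guide)
  curl -s https://packagecloud.io/install/repositories/fdio/release/script.deb.sh | sudo bash
  # Install VPP plus the core and DPDK plugin packages
  sudo apt-get update
  sudo apt-get install vpp vpp-plugin-core vpp-plugin-dpdk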
Can you install VPP from the FD.io repo and try again? (BTW, you might want to disable the ping plugin if linux-cp is used.) I would also suggest adding static routes to rule out any issue with FRR (in which case you don't actually need the linux-cp plugin at all). In the meantime, I wonder which uio driver you are using on your VPP machine (igb_uio, uio_pci_generic, or vfio-pci). I assume you are running the virtio-net driver in the guests and connecting M1 to R1 and R1 to M2 with Linux kernel bridges. If you still run into issues, check the neighbor table and routing table on the VPP side first, and maybe the interface counters as well; I have put a short sketch of the startup.conf change, a static-route example, and the relevant vppctl commands below the quoted message.

Regards,
Xiaodong

On Sat, Oct 1, 2022 at 3:55 AM Bu Wentian <buwent...@outlook.com> wrote:
> Hi everyone,
> I am a beginner with VPP, and I'm trying to use VPP + FRR on KVM VMs as
> routers. I have installed VPP and FRR on Ubuntu 20.04.5 VMs and made them
> run in a separate network namespace. I use the VPP linux-cp plugin to
> synchronize routes from the kernel stack into VPP. VPP and FRR seem to
> work, but when I use iperf3 to test the throughput, I find the performance
> of VPP is not good.
>
> I created a very simple topology to test the throughput:
> M1 ----- R1(with VPP) ----- M2
> M1 and M2 are also Ubuntu VMs (without VPP), in different subnets. I ran the
> iperf3 server on M1 and the client on M2, but only got about 2.1 Gbps of
> throughput, which is significantly worse than using the Linux kernel as a
> router (about 26.1 Gbps).
>
> I made another experiment with the topology:
> M1 ------ R1(with VPP) ---- R2(with VPP) ------ M2
> The iperf3 result is even worse (only 1.6 Gbps).
>
> I also noticed that many retransmissions happened during the iperf3 test.
> If I use the Linux kernel as the router rather than VPP, no retransmissions
> happen.
> Part of the iperf3 output:
> [ ID] Interval           Transfer     Bitrate         Retr  Cwnd
> [  5]   0.00-1.00   sec   166 MBytes  1.39 Gbits/sec   23    344 KBytes
> [  5]   1.00-2.00   sec   179 MBytes  1.50 Gbits/sec   49    328 KBytes
> [  5]   2.00-3.00   sec   203 MBytes  1.70 Gbits/sec   47    352 KBytes
> [  5]   3.00-4.00   sec   203 MBytes  1.70 Gbits/sec   54    339 KBytes
> [  5]   4.00-5.00   sec   211 MBytes  1.77 Gbits/sec   59    325 KBytes
>
> Another phenomenon I found is that when I ran iperf3 directly on R1
> and R2, I got no throughput at all. The output of iperf3 looks like this:
> [ ID] Interval           Transfer     Bitrate         Retr  Cwnd
> [  5]   0.00-1.00   sec   324 KBytes  2.65 Mbits/sec    4   8.74 KBytes
> [  5]   1.00-2.00   sec  0.00 Bytes   0.00 bits/sec     1   8.74 KBytes
> [  5]   2.00-3.00   sec  0.00 Bytes   0.00 bits/sec     0   8.74 KBytes
> [  5]   3.00-4.00   sec  0.00 Bytes   0.00 bits/sec     1   8.74 KBytes
> [  5]   4.00-5.00   sec  0.00 Bytes   0.00 bits/sec     0   8.74 KBytes
> [  5]   5.00-6.00   sec  0.00 Bytes   0.00 bits/sec     0   8.74 KBytes
> [  5]   6.00-7.00   sec  0.00 Bytes   0.00 bits/sec     1   8.74 KBytes
> [  5]   7.00-8.00   sec  0.00 Bytes   0.00 bits/sec     0   8.74 KBytes
> [  5]   8.00-9.00   sec  0.00 Bytes   0.00 bits/sec     0   8.74 KBytes
> [  5]   9.00-10.00  sec  0.00 Bytes   0.00 bits/sec     0   8.74 KBytes
> - - - - - - - - - - - - - - - - - - - - - - - - -
> [ ID] Interval           Transfer     Bitrate         Retr
> [  5]   0.00-10.00  sec   324 KBytes   266 Kbits/sec    7             sender
> [  5]   0.00-10.00  sec  0.00 Bytes    0.00 bits/sec                  receiver
>
> All my VMs use 4 vCPUs and 8 GB of RAM. The host machine has 16 cores
> (32 threads) and 32 GB of RAM.
> The VMs are connected by libvirt networks.
> I installed VPP + FRR following this tutorial:
> https://ipng.ch/s/articles/2021/12/23/vpp-playground.html
> The VPP startup.conf is in the attachment.
>
> I want to know why the VPP throughput is worse than the Linux kernel, and
> what I can do to improve it (I hope to make it better than Linux kernel
> forwarding). I have searched Google for a solution but found nothing
> helpful. It would be appreciated if anyone could give me some help. Please
> contact me if more information or logs are needed.
>
> Sincerely,
> Wentian Bu
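P.S. To make the suggestions above concrete, here is a rough sketch of what I had in mind (the interface name, prefix, and next-hop below are placeholders; substitute your own):

  # startup.conf: disable the ping plugin while linux-cp is in use
  plugins {
    plugin ping_plugin.so { disable }
  }

  # Static route test without FRR (example prefix/next-hop/interface only)
  vppctl ip route add 192.0.2.0/24 via 10.0.0.2 GigabitEthernet0/0/0

  # Things to look at on R1 while iperf3 is running
  vppctl show ip neighbors
  vppctl show ip fib
  vppctl show interface
  vppctl show errors

  # One way to check which kernel driver the NICs are bound to
  lspci -nnk | grep -A3 Ethernet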