Hi Sebastiano,

You're running a debug image, so that assert is expected. Try running a release image instead.
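If you're building vpp from source, a minimal way to switch images is sketched below (this assumes a stock checkout and the standard top-level Makefile targets; if you installed from packages, use the non-debug packages instead):

    # debug image: ASSERTs and extra checks compiled in, noticeably slower
    make build && make run
    # release image: ASSERTs compiled out, use this for performance measurements
    make build-release && make run-release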
Regarding the proxy issue, it looks like the proxy did not close/reuse the fifos accordingly. Will try to look into it.

Regards,
Florin

> On Jul 22, 2020, at 2:23 AM, Sebastiano Miano <mianosebasti...@gmail.com> wrote:
>
> Hi Florin,
> thanks for your reply.
> Unfortunately, changing the "fifo size" to "4m" has not changed the performance that much; I only got 2 Gbps instead of the previous 1.5 Gbps.
> Moreover, I have checked the "show errors" output and it looks like no errors are shown [1].
> The "show run" output looks fine, which makes me think that VPP is actually loaded and running.
>
> What is weird is that, sometimes, after a couple of runs, the application crashes with the following error:
> "vpp/src/svm/svm_fifo.c:410 (svm_fifo_init_ooo_lookup) assertion `!rb_tree_is_init (&f->ooo_deq_lookup)' fails"
>
> What is causing this error?
>
> Thanks again,
> Sebastiano
>
> [1] https://gist.github.com/sebymiano/8bc582bc6491cc88f5a608d4a83b25e9
>
> On Tue, Jul 21, 2020 at 19:22, Florin Coras <fcoras.li...@gmail.com> wrote:
> Hi Sebastiano,
>
> The test proxy application is just an example, so it's far from optimized. Nonetheless, the last time I tested it, it was capable of saturating a 10 Gbps nic. So some things to consider while debugging:
> - fifo size configuration. The wiki page does not set a fifo size, and as a result a small default is used. Try adding "fifo size 4096" to the "test proxy" cli to force fifos of 4 MB.
> - check error counters with "show error" to see if tcp or the interfaces exhibit other errors.
> - see if vpp is actually loaded with "show run" (do a "clear run" first to check only recent data).
>
> Regards,
> Florin
>
>> On Jul 21, 2020, at 8:44 AM, Sebastiano Miano <mianosebasti...@gmail.com> wrote:
>>
>> Dear all,
>> I was trying to test the performance of the VPP host stack compared to that of the Linux kernel TCP/IP stack.
>> In particular, I was testing the TestProxy application [1] and comparing it with the simpleproxy application available at this URL [2].
>>
>> My setup is composed of a server1, which runs the VPP test proxy application attached to two physical interfaces (dual-port Intel XL710 40GbE NIC). The application listens for TCP traffic on a given IP1:port1 (interface 1) and forwards it to a "backend" server listening on another IP2:port2 (interface 2).
>> Another server2 (same NIC) runs both an iperf3 client, which sends traffic to the proxy at IP1:port1, and an iperf3 server, which receives traffic on IP2:port2. The iperf3 client and server run in two different netns on server2 [3].
>> The VPP startup configuration that I used is the following [4].
>>
>> The configuration works fine; however, these are the results that I got:
>> VPP Test Proxy (VPP) - 1.50 Gbits/sec
>> Simpleproxy (Linux kernel) - 9.39 Gbits/sec
>>
>> I am wondering if there is something wrong with my setup/configuration; I am quite new to VPP.
>>
>> Thank you in advance.
>> Sebastiano
>>
>> [1] https://wiki.fd.io/view/VPP/HostStack/TestProxy
>> [2] https://github.com/vzaliva/simpleproxy
>> [3] https://gist.github.com/sebymiano/0ff74ed0ec2805591fca7cd688b805d9
>> [4] https://gist.github.com/sebymiano/f75c0f4d506fbf722cb2358b1deaa250
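P.S. For completeness, the checks suggested earlier would look roughly like the sequence below at the vpp prompt. The "test proxy server" option names (server-uri, client-uri, fifo-size) follow the wiki page [1], so double-check them against your build; IP1/port1 and IP2/port2 are the placeholders from your setup, and fifo-size is in KB here (4096 -> 4 MB fifos), though newer builds may also accept a size such as 4m:

    vpp# test proxy server server-uri tcp://IP1/port1 client-uri tcp://IP2/port2 fifo-size 4096
    vpp# clear errors
    vpp# clear runtime
    (run one iperf3 pass from server2)
    vpp# show errors
    vpp# show runtime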