By no means, happens all the time! Glad it was solved!
Regards,
Florin
> On Jul 22, 2020, at 11:09 AM, Sebastiano Miano wrote:
>
> Hi Florin,
> what a fool I am, you are right ;)
>
> Just for reference, with the release image, the throughput increases to
> 11.4Gbps.
>
> Thanks again for your support.
Hi Florin,
what a fool I am, you are right ;)
Just for reference, with the release image, the throughput increases to
11.4Gbps.
Thanks again for your support.
Regards,
Sebastiano
On Wed, Jul 22, 2020 at 6:27 PM Florin Coras <fcoras.li...@gmail.com> wrote:
> Hi Sebastiano,
>
> You’re running a debug image, so that is expected. …
Hi Sebastiano,
You’re running a debug image, so that is expected. Try to run a release image.
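For reference, a minimal sketch of building and running a release image,
assuming the standard targets in the VPP repo Makefile (exact targets and
paths may differ per setup):

    $ make build-release    # optimized build instead of the default debug image
    $ make run-release      # start VPP from the freshly built release image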
Regarding the proxy issue, it looks like the proxy did not close/reuse the
fifos properly. Will try to look into it.
Regards,
Florin
> On Jul 22, 2020, at 2:23 AM, Sebastiano Miano wrote:
>
Most likely, this particular error (!rb_tree_is_init) stems from the fact
that the proxy's active_open_connected_callback() is invoked multiple
times for the same connection. I'm not sure it's supposed to happen this
way. Also, there seem to be other SVM FIFO issues, too.
On Wed, Jul 22, 2020 at 4:2…
Hi,
this SVM FIFO error looks like the crash mentioned in the ticket about a
TCP timer bug [1].
I do sometimes get this exact error too; it just happens less frequently
than the other kinds of crash.
It can probably be reproduced using my test repo [2] that I have mentioned
in another …
Hi Florin,
thanks for your reply.
Unfortunately, changing the "fifo size" to "4m" has not changed the
performance that much: I only get 2 Gbps instead of 1.5 Gbps.
Moreover, I have checked the "show errors" output and it looks like no
errors are shown [1].
The "show run" output looks fine, wh…
Hi,
It looks like your Linux VM iperf3 client could be saturated, so I'm not
sure there's more that could be done. You can control the fifo sizes from
vcl.conf (rx-fifo-size and tx-fifo-size).
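A minimal vcl.conf sketch, assuming the vcl { } section format shown on the
wiki iperf page; the 4000000-byte values simply mirror the 4m size discussed
elsewhere in this thread:

    vcl {
      rx-fifo-size 4000000
      tx-fifo-size 4000000
    }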
Regards,
Florin
> On Jul 21, 2020, at 10:44 AM, sadhanakesa...@gmail.com wrote:
>
> hi Team,
> i tried to run this for evaluating …
Hi Team,
I tried to run this for evaluating
https://wiki.fd.io/view/VPP/HostStack/LDP/iperf
with a Linux client (CentOS 7 VM) and Ubuntu kernel 4.15 (VPP server where
iperf3 runs), with the uio_pci_generic driver.
I am also getting close to 900 Mbits/sec, vs 800 Mbits/sec for a Linux
server/client iperf …
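For reference, a sketch of how the LDP iperf3 server from that wiki page is
typically launched; the preload library path and the vcl.conf location are
assumptions that depend on where VPP is installed:

    $ VCL_CONFIG=/etc/vpp/vcl.conf \
      LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libvcl_ldpreload.so \
      iperf3 -s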
Hi Sebastiano,
The test proxy application is just an example, so it’s far from optimized.
Nonetheless, the last time I tested it, it was capable of saturating a 10Gbps
NIC. So some things to consider while debugging:
- fifo size configuration. The wiki page does not set a fifo size, and as a
result a small default is used …
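Illustrative only: a sketch of starting the built-in test proxy with a larger
fifo size, assuming the test proxy CLI takes server-uri/client-uri/fifo-size
arguments as on the wiki page; the URIs, the size value, and its units are
placeholders to verify against your build:

    vpp# test proxy server server-uri tcp://10.0.0.1/5555 client-uri tcp://10.0.1.1/5555 fifo-size 4096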
Dear all,
I was trying to test the performance of the VPP Host Stack compared to that
of the Linux kernel TCP/IP stack.
In particular, I was testing the TestProxy application [1] and comparing it
with the simpleproxy application available at this URL [2].
My setup is composed of a server1, which …