Hello Ping & VSAP folks,
Quite an impressive speed-up for Nginx!
Jerome
From: on behalf of "Yu, Ping"
Date: Monday, June 1, 2020 at 11:10
To: "vpp-dev@lists.fd.io" , "csit-...@lists.fd.io"
Cc: "Yu, Ping" , "Yu, Ping"
Subject: [vpp-dev] Welcome to VSAP project
Hello, all
Glad to announce that the proj
Hi Filip,
Thanks for your answers; I'm glad to hear them.
I understand the difficulties with the ikev2_initiate_sa_init return value, and I
don't think there is a feasible solution for it because of dependencies on
sources outside VPP. Maybe events are the best choice.
Regards,
Mahdi
Hi vpp-dev,
I've been contemplating trying to use native drivers in place of DPDK, with the
understanding that I may be paying a ~20% penalty by using DPDK. So I went to
try things out, but had some trouble. The systems I'm interested in, in
particular, have 10GE Intel NICs in them which I believe
Hi Chris,
About mlx5, we are using mlx5 cards with the VPP rdma plugin and it is
working fine for us, for VPP 19.08 and newer.
(I think there may be a problem with the rdma plugin for larger MTU
values but for MTU < 2000 or so, everything works fine.)
/ Elias
On Tue, 2020-06-02 at 03:40 -0400,
Hi Paul,
> Yes. The build assumes that vppapigen runs in the global python environment
> such as if in a fresh container.
>
> It is the same issue we have with generating vpp papi packages. It installs
> into the global environment and people have no option to pip install it
> without down
Hi Chris, Elias,
>> From: Christian Hopps:
>> I also have systems that have mlx5 (and eventually will have
>> connectx-6 cards). These cards appear to be supported by the rdma
>> native driver. I was able to create the interfaces and saw TX packets
>> but no RX. Is this driver considered st
Manoj, can you try VPP-20.05 instead of VPP-19.04? I suspect the DPDK-19.02
used with VPP-19.04 is too old for your platform and skips 2MB hugepages. DPDK
19.02 seems to support up to 3 different hugepage sizes, and you appear to have
4.
ben
> -----Original Message-----
> From: Manoj Iyer
> Sen
Hi vpp/csit experts,
I have submitted a patch that tries to optimize the l2 input node.
https://gerrit.fd.io/r/c/vpp/+/27191
I want to see whether it helps performance, but I don't know how to trigger
the L2 BD perf test case on CSIT.
Who can help me with this? Thanks a lot in advance.
BR
Z
Hi Ben,
> > (I think there may be a problem with the rdma plugin for larger MTU
> > values but for MTU < 2000 or so, everything works fine.)
>
> It should work; jumbo support was added in recent months. Or do you
> refer to something else?
I think I mean something else, a problem that I notic
Hi,
We are seeing a crash while doing add_trace for a vlib_buffer in our graph node.
#0 0x74ee0feb in raise () from /lib64/libc.so.6
#1 0x74ecb5c1 in abort () from /lib64/libc.so.6
#2 0x0040831c in os_panic () at
/fdio/src/fdio.1810/src/vpp/vnet/main.c:368
#3 0x7
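For reference (not a diagnosis of the crash above, just the common idiom): the
usual pattern guards the vlib_add_trace() call on the buffer's trace flag, since
calling it for buffers the framework never marked for tracing can trip asserts
or corrupt trace state. A minimal sketch, with my_trace_t and my_node_trace as
hypothetical names:

#include <vlib/vlib.h>
#include <vnet/vnet.h>

/* Hypothetical per-packet trace record. */
typedef struct
{
  u32 sw_if_index;
  u32 next_index;
} my_trace_t;

static_always_inline void
my_node_trace (vlib_main_t *vm, vlib_node_runtime_t *node,
               vlib_buffer_t *b0, u32 next0)
{
  /* Only add a trace record when tracing is on for this node AND the
     framework marked this particular buffer as traced. */
  if (PREDICT_FALSE ((node->flags & VLIB_NODE_FLAG_TRACE)
                     && (b0->flags & VLIB_BUFFER_IS_TRACED)))
    {
      my_trace_t *t = vlib_add_trace (vm, node, b0, sizeof (*t));
      t->sw_if_index = vnet_buffer (b0)->sw_if_index[VLIB_RX];
      t->next_index = next0;
    }
}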
Hi Christian,
The ixgbe driver is deprecated; that code is very old, from the days when VPP
was not open source.
That driver was used for the Intel Niantic family (82599, x5x0) of NICs, which
have these days been replaced by the Intel Fortville family (x710, xl710,
xxv710, x722). For Fortville, and soon for Columbiaville, NI
Unless you fully communicate your configuration, you’ll have to debug the issue
yourself. Are you using the standard handoff mechanism, or a mechanism of your
own design?
The handoff demo plugin seems to work fine... See
../src/examples/handoffdemo/{README.md, node.c} etc.
DBGvpp# sh trace
--
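For anyone comparing against the demo, the standard handoff mechanism it
illustrates boils down to: pick a destination thread per buffer, then hand the
whole vector to the per-thread frame queues via vlib_buffer_enqueue_to_thread().
A rough sketch, assuming the 19.08/20.05-era signature; the my_* names and the
trivial hash are illustrative assumptions, not VPP symbols:

#include <vlib/vlib.h>
#include <vlib/buffer_node.h>
#include <vnet/vnet.h>

static u32 my_frame_queue_index; /* from vlib_frame_queue_main_init() */

static uword
my_handoff_node_fn (vlib_main_t *vm, vlib_node_runtime_t *node,
                    vlib_frame_t *frame)
{
  u32 *from = vlib_frame_vector_args (frame);
  u32 n_packets = frame->n_vectors;
  u16 thread_indices[VLIB_FRAME_SIZE];
  u32 i;

  (void) node; /* unused in this sketch */

  for (i = 0; i < n_packets; i++)
    {
      vlib_buffer_t *b = vlib_get_buffer (vm, from[i]);
      /* Choose a worker (threads 1..N); trivial example hash on the RX
         interface. Assumes at least one worker is configured. */
      thread_indices[i] =
        1 + (vnet_buffer (b)->sw_if_index[VLIB_RX] % vlib_num_workers ());
    }

  /* Enqueue the vector to the chosen workers' frame queues;
     drop_on_congestion = 1 drops rather than blocking on a full queue. */
  return vlib_buffer_enqueue_to_thread (vm, my_frame_queue_index, from,
                                        thread_indices, n_packets,
                                        1 /* drop_on_congestion */);
}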
Ok. Thanks for the info.
FWIW, apparently the x520-2 is still somewhat common, b/c that's what I've got
in new Dell servers from mid-last year. Will try and get the newer cards in the
future.
Thanks,
Chris.
> On Jun 2, 2020, at 7:59 AM, Damjan Marion via lists.fd.io
> wrote:
>
>
> Hi Christia
Hello All,
In VPP version 19.08 we are seeing a crash while accessing the
load_balance_pool in the load_balance_get() function. This is happening
after enabling worker threads.
As such, the FIB programming is happening in the main thread, and in one of
the worker threads we see this crash.
Als
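Not a diagnosis, but for context: load_balance_get() is essentially a bare pool
lookup, so it is only safe on workers while the pool cannot be reallocated and
the entry cannot be freed underneath them. Main-thread FIB updates are expected
to run with the workers parked behind the barrier. A minimal sketch of that
pattern (my_fib_update is a hypothetical name):

#include <vlib/vlib.h>

/* Main-thread update path: park the workers while FIB/load-balance
   state (including the pools the workers index into) may change. */
static void
my_fib_update (vlib_main_t *vm)
{
  vlib_worker_thread_barrier_sync (vm);
  /* ... add/remove routes, which may grow or shrink load_balance_pool ... */
  vlib_worker_thread_barrier_release (vm);
}

Binary-API handlers normally take this barrier already, so a crash like this
often points at a stale load-balance index held across an update, or at an
update path that bypasses the barrier.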
+ csit-...@lists.fd.io
On 6/2/2020 6:10 AM, Zhiyong Yang wrote:
Hi vpp/csit experts,
I have submitted one patch that tries to optimize l2 input node.
https://gerrit.fd.io/r/c/vpp/+/27191
I want to see whether it helps performance, but I don't know how to
trigger the L2 BD perf test case o
The code manages to access a poisoned adjacency – 0x131313 fill pattern –
copying Neale for an opinion.
D.
From: vpp-dev@lists.fd.io On Behalf Of Rajith PR via
lists.fd.io
Sent: Tuesday, June 2, 2020 10:00 AM
To: vpp-dev
Subject: [vpp-dev] SEGMENTATION FAULT in load_balance_get()
Hello All,
csit-2n-clx-perftest
mrrAND1cAND64bANDnic_intel-xxv710ANDeth-l2bdbasemaclrnNOTdrv_avf
From: vpp-dev@lists.fd.io On Behalf Of Dave Wallace
Sent: Tuesday, June 2, 2020 10:07 PM
To: Yang, Zhiyong ; vpp-dev@lists.fd.io;
csit-...@lists.fd.io
Subject: Re: [vpp-dev] How to trigger the perf test?
+ csi
Ben,
I built the 20.05 VPP and the latest stable DPDK, and I don't have the issue I
reported.
$ sudo systemctl status vpp.service
● vpp.service - vector packet processing engine
Loaded: loaded (/lib/systemd/system/vpp.service; enabled; vendor preset:
enabled)
Active: active (running) since Tue 2
Hi Klement,
Really appreciate the detailed explanation! That makes sense and I could
see that behavior from my tests.
Last question: does "max translations per user" matter anymore, given that
the concept of a user doesn't exist with the new NAT?
max translations: 400
max translations per user: 500
>
Hi Carlito,
For ED NAT it doesn’t, as ED NAT no longer has any “user” concept. The code for
the different flavours of NAT needs to be split and polished anyway. The idea is
to have data/code/APIs separate where appropriate.
Thanks,
Klement
> On 2 Jun 2020, at 20:31, Carlito Nueno wrote:
>
> Hi Kleme
Hi,
We are using a Linux bridge to connect different interfaces owned by different
VPP instances.
When the bridge has no MAC-to-port binding info, it floods packets to all
interfaces.
Hence VPP receives some packets whose MAC address is owned by some other VPP
instance.
We want to
Hi Klement,
Got it.
Sorry one more question :)
I did another test and I noticed that tcp transitory sessions increase
rapidly when I create new sessions from new internal IP addresses really fast
(without delay). For example:
tcp sessions are never stopped, so tcp transitory sessions should be 0 a
Hi Dave/Neal,
The adj_poison seems to be a fill pattern – 0xfefe. Am I looking at
the right code, or have I interpreted it incorrectly?
Thanks,
Rajith
On Tue, Jun 2, 2020 at 7:44 PM Dave Barach (dbarach)
wrote:
> The code manages to access a poisoned adjacency – 0x131313 fill pattern –
>
Testing with 30 ip addresses (users) opening around 300 sessions each.
When using vpp-20.01 + fixes by you and Filip (before the port overloading
patches), total sessions and total transitory sessions were much smaller
(around 15062).
On vpp-20.05 with port overloading
NAT44 pool addresses:
130.