[vpp-dev] LACP bonding with VPP and SR-IOV

2020-02-21 Thread Greg O'Rawe
Hi, Has anyone any experience of using LACP bonding with VPP in a bifurcated setup via SR-IOV? I'm using VPP 19.08. My environment uses Mellanox ConnectX-4 25Gbps NICs in a single bond which carries data and control plane traffic via separate VLANs. It is not possible to have control plane tra
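
For reference, creating a LACP bond with a VLAN sub-interface from the VPP CLI generally looks like the sketch below. This is only an illustration of the configuration shape, not the original poster's setup: the interface names (TwentyFiveGigabitEthernet3b/0/0 and .../0/1) and the l34 load-balance mode are hypothetical stand-ins, and the SR-IOV / bifurcation side (creating VFs on the Mellanox PF and handing them to VPP) is not shown.

    vpp# create bond mode lacp load-balance l34
    vpp# bond add BondEthernet0 TwentyFiveGigabitEthernet3b/0/0
    vpp# bond add BondEthernet0 TwentyFiveGigabitEthernet3b/0/1
    vpp# set interface state TwentyFiveGigabitEthernet3b/0/0 up
    vpp# set interface state TwentyFiveGigabitEthernet3b/0/1 up
    vpp# set interface state BondEthernet0 up
    vpp# create sub-interfaces BondEthernet0 100
    vpp# set interface state BondEthernet0.100 up
    vpp# show bond details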

Re: [vpp-dev] CLIB_VEC64

2020-02-21 Thread Ray Kinsella
I wasn't even aware it existed ... :-) On 21/02/2020 15:39, Dave Barach via Lists.Fd.Io wrote: > Folks, > Is anyone actually using 64-bit length vectors, controlled by #ifdef CLIB_VEC64 in vppinfra? I tend to doubt it... > The original reason for this feature was to allow th

Re: [vpp-dev] Regarding SCTP support in VPP host stack

2020-02-21 Thread Florin Coras
Hi Guruprasad, the SCTP plugin has not been maintained in a long time, and in vpp 20.05 we’ve actually removed the code. Regards, Florin > On Feb 16, 2020, at 10:25 PM, Guru Prasad wrote: > Hi, > Could anyone please help me with the queries below: > i) Does the VPP_ECHO client and server application sup
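
Since the code is referred to as a plugin, testing it on 19.08 at all would require the plugin to be loaded; a minimal startup.conf sketch is shown below. The shared-object name sctp_plugin.so is an assumption based on the usual <name>_plugin.so convention and should be checked against the vpp_plugins directory of the installed build.

    plugins {
      ## load the (since-removed) SCTP plugin; file name assumed, verify
      ## against the vpp_plugins directory shipped with the 19.08 packages
      plugin sctp_plugin.so { enable }
    }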

[vpp-dev] CLIB_VEC64

2020-02-21 Thread Dave Barach via Lists.Fd.Io
Folks, Is anyone actually using 64-bit length vectors, controlled by #ifdef CLIB_VEC64 in vppinfra? I tend to doubt it... The original reason for this feature was to allow the vpp main heap to exceed 4gb. Since we deprecated the memory allocator responsible for the 4gb heap size limitation, I
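
For context, a rough C sketch of what the switch controls, paraphrasing the idea rather than copying the literal vppinfra definitions: with CLIB_VEC64 defined, the length bookkeeping stored just ahead of a vector's data becomes 64-bit instead of 32-bit, which is what tied the feature to the original goal of letting the main heap exceed 4 GB.

    /* Paraphrased sketch, not the actual vppinfra header layout. */
    #include <stdint.h>

    #ifdef CLIB_VEC64
    typedef uint64_t clib_vec_len_t;   /* lengths may exceed 2^32 */
    #else
    typedef uint32_t clib_vec_len_t;   /* default: 32-bit length field */
    #endif

    typedef struct
    {
      clib_vec_len_t len;              /* number of elements currently in use */
      uint8_t vector_data[0];          /* user data follows the header */
    } vec_header_sketch_t;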

[vpp-dev] 128k sequential read / write (fio) performance with spdk+vpp is not as good as the one with "kernel TCP"

2020-02-21 Thread 권세준 via Lists.Fd.Io
Hello, I'm working on the SPDK library + VPP, because some reports said that VPP reduces networking overhead. When I test with VPP (with the mlx5 poll mode driver) and a null device with SPDK, 4k performance with VPP is much better than the default (kTCP). But 128k write performance with VP
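
For anyone trying to reproduce the comparison, a minimal fio job sketch along these lines could be used; the ioengine and filename are placeholders (the original test presumably drove an SPDK null bdev through SPDK's fio plugin over the VPP-backed transport, whose engine-specific options are not reproduced here), and only the block size changes between the 4k and 128k runs.

    [global]
    ioengine=libaio        ; placeholder - substitute the SPDK fio plugin used in the real setup
    direct=1
    rw=write               ; sequential write; use rw=read for the sequential-read case
    iodepth=32
    numjobs=1
    runtime=60
    time_based=1

    [seqwrite]
    bs=128k                ; switch to bs=4k to reproduce the small-block case
    filename=/dev/nullb0   ; placeholder target device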

[vpp-dev] Regarding SCTP support in VPP host stack

2020-02-21 Thread Guru Prasad
Hi, Could anyone please help me with the queries below: i) Does the VPP_ECHO client and server application support testing of the SCTP stack? ii) How stable is the SCTP stack in vpp1908? iii) Is the VPP SCTP stack RFC compliant? iv) What are the current performance numbers with the VPP SCTP stack? Thanks in advance, Guruprasad T

Re: [vpp-dev] [csit-dev] FDIO Maintenance - 2020-02-20 1900 UTC to 2400 UTC

2020-02-21 Thread Nana Adjei
Hello, Sorry for the mix-up. I believe that was for slot 9. I will proceed to the site for the replacement. *Best Regards,* *Nana Adjei* T: 514-360-1131 E: nad...@vexxhost.com | www.vexxhost.com 650

[vpp-dev] RFC: FD.io Summit (Userspace), September, Bordeaux France

2020-02-21 Thread Ray Kinsella
Hi folks, A 2020 FD.io event is something that has been discussed a number of times recently at the FD.io TSC, with the possibility of co-locating such an event with DPDK Userspace in Bordeaux in September. Clearly, we are incredibly eager to make sure that such an event would be a success.

Re: [vpp-dev] Supported kernel versions

2020-02-21 Thread Dave Barach via Lists.Fd.Io
https://bugzilla.kernel.org/show_bug.cgi?id=206133 From: vpp-dev@lists.fd.io On Behalf Of Damjan Marion via Lists.Fd.Io Sent: Friday, February 21, 2020 4:29 AM To: Kevin Meyer Cc: vpp-dev@lists.fd.io Subject: Re: [vpp-dev] Supported kernel versions > On 21 Feb 2020, at 01:45, Kevin Meyer > m

Re: [EXT] [vpp-dev] Regarding buffers-per-numa parameter

2020-02-21 Thread Damjan Marion via Lists.Fd.Io
> On 21 Feb 2020, at 11:48, chetan bhasin wrote: > Thanks a lot, Damjan, for the quick response! > We will try the latest stable/1908 that has the given patch. > With Mellanox Technologies MT27710 Family [ConnectX-4 Lx]: > 1) stable/vpp1908: If we configure buffers (250k) and have 2048 huge p

Re: [EXT] [vpp-dev] Regarding buffers-per-numa parameter

2020-02-21 Thread chetan bhasin
Thanks a lot, Damjan, for the quick response! We will try the latest stable/1908 that has the given patch. *With Mellanox Technologies MT27710 Family [ConnectX-4 Lx]:* 1) stable/vpp1908: If we configure buffers (250k) and have 2048 huge pages of 2 MB (total 4 GB), we see an issue with traffic: "l3 mac misma
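
For reference, the parameter under discussion lives in the buffers stanza of startup.conf; a minimal sketch with the 250k figure mentioned above is shown below (the count applies per NUMA node, so the hugepage allocation on each node has to be large enough to back the resulting buffer pool).

    buffers {
      ## pre-allocate this many buffers on each NUMA node
      buffers-per-numa 250000
      ## per-buffer data size in bytes (default shown)
      default data-size 2048
    }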

Re: [EXT] [vpp-dev] Regarding buffers-per-numa parameter

2020-02-21 Thread Damjan Marion via Lists.Fd.Io
> On 21 Feb 2020, at 10:31, chetan bhasin wrote: > Hi Nitin, Damjan, > For 40G XL710, buffers: 537600 (500K+). > 1) vpp 19.08 (Sept 2019): it worked with vpp 19.08 (Sept release) after removing intel_iommu=on from the GRUB params. > 2) stable/vpp2001 (latest): it worked even when we have "intel

Re: [EXT] [vpp-dev] Regarding buffers-per-numa parameter

2020-02-21 Thread chetan bhasin
Hi Nitin, Damjan, For 40G *XL710*, buffers: 537600 (500K+). 1) vpp 19.08 (Sept 2019): it worked with vpp 19.08 (Sept release) after removing intel_iommu=on from the GRUB params. 2) stable/vpp2001 (latest): it worked even when we have "intel_iommu=on" in the GRUB params. On stable/vpp2001, I found a check-in

Re: [vpp-dev] Supported kernel versions

2020-02-21 Thread Damjan Marion via Lists.Fd.Io
> On 21 Feb 2020, at 01:45, Kevin Meyer wrote: > Hi, > The compatibility table here says VPP 20.02 should work with Ubuntu 18.04: > https://wiki.fd.io/view/VPP_-_Working_Environments. > We are attempting to compile VPP 20.02 using Ubuntu 18.04.4 with Linux kernel 5.3.0-40. When we compile