Hi All,
Gentle reminder. If possible, kindly suggest whether there is
some way to include the ODP package files with VPP the same way as DPDK. I am
using vpp_lite as the platform to build VPP with ODP.
I thought I could have an external script to build the ODP package and copy
the library files
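For illustration, an external build step along these lines might work (the repository URL, version, and destination paths are assumptions, not an established VPP mechanism):

    # sketch: build ODP out of tree and stage its libraries for vpp_lite
    ODP_SRC=$HOME/src/odp
    ODP_PREFIX=$HOME/odp-install
    git clone https://git.linaro.org/lng/odp.git "$ODP_SRC"
    cd "$ODP_SRC"
    ./bootstrap && ./configure --prefix="$ODP_PREFIX"
    make -j"$(nproc)" && make install
    # copy the resulting libraries into the VPP build tree
    # (exact destination under build-root/ depends on the platform setup)
    cp "$ODP_PREFIX"/lib/libodp*.so* /path/to/vpp/build-root/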
Karl, Thanks for all answers and clarifications!
~23 Mpps - that’s indeed your NIC; we’re observing the same in CSIT.
One more point re host setup - I read it’s CPU power management disabled,
CPU TurboBoost disabled, and all memory channels populated?
It looks so, but wanted to recheck as not listed on yo
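For reference, checks along these lines would confirm that setup (assuming the intel_pstate driver; paths differ on other platforms):

    # 1 means TurboBoost is disabled
    cat /sys/devices/system/cpu/intel_pstate/no_turbo
    # data plane cores should report "performance"
    cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
    # list populated DIMM slots to verify all memory channels are in use
    sudo dmidecode -t memory | grep -E 'Locator|Size'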
Dear Sreejith,
I don't think anybody here is familiar enough with ODP to make suggestions.
Can you come up with some proposal? Possibly a reasonably detailed writeup on
the mailing list?
Thanks,
Damjan
> On 16 Feb 2017, at 11:16, Sreejith Surendran Nair
> wrote:
>
> Hi All,
>
> Gentle
Hi Damjan,
Thanks a lot for the kind reply. Sure, I will try to share a more detailed
note on ODP and discuss it.
Thanks & Regards,
Sreejith
On 16 February 2017 at 18:29, Damjan Marion wrote:
>
> Dear Sreejith,
>
> I don't think anybody here is familiar enough with ODP to make
> suggestions.
>
> C
On 02/15/2017 03:28 PM, Maciek Konstantynowicz (mkonstan) wrote:
> Thomas, many thanks for sending this.
>
> Few comments and questions after reading the slides:
>
> 1. s3 clarification - host and data plane thread setup - vswitch pmd
> (data plane) thread placement
> a. “1PMD/core (4 core)”
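If the vswitch under test is OVS-DPDK (an assumption on my part), the PMD placement described above is typically configured like this:

    # pin 4 PMD threads, one per core, on cores 1-4 (mask 0x1e)
    ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x1e
    # verify the resulting rx-queue-to-PMD placement
    ovs-appctl dpif-netdev/pmd-rxq-show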
From: Jan Scheurich
Sent: Thursday, 16 February, 2017 14:41
To: therb...@redhat.com
Cc: vpp-...@fd.io
Subject: Re: [vpp-dev] Interesting perf test results from Red Hat's test team
Hi Thomas,
Thanks for these interesting measurements. I am not quite su
-----Original Message-----
From: discuss-boun...@lists.fd.io [mailto:discuss-boun...@lists.fd.io] On
Behalf Of Vanessa Valderrama
Sent: Thursday, February 16, 2017 10:08 AM
To: disc...@lists.fd.io; t...@lists.fd.io; infra-steer...@lists.fd.io
Subject: [discuss] NOTIFICATION: Jenkins queue backed up
-----Original Message-----
From: tsc-boun...@lists.fd.io [mailto:tsc-boun...@lists.fd.io] On Behalf Of
Vanessa Valderrama
Sent: Thursday, February 16, 2017 10:17 AM
To: disc...@lists.fd.io; t...@lists.fd.io; infra-steer...@lists.fd.io
Subject: Re: [tsc] NOTIFICATION: Jenkins queue backed up
I apo
Dear Vanessa,
Have we an estimate for how long the Jenkins queue will be effectively unusable
due to what appear to be 300 jobs in the queue?
How about spinning up extra slaves to clear the backlog?
Thanks... Dave
Dave,
The backlog was due to the 300+ TREX jobs. After clearing the TREX
jobs, the queue is down to 47.
Thank you,
Vanessa
On 02/16/2017 09:23 AM, Dave Barach (dbarach) wrote:
>
> Dear Vanessa,
>
>
>
> Have we an estimate for how long the Jenkins queue will be effectively
> unusable due to w
FYI,
I have updated the VPP wiki to reflect the new default based on Damjan's
commit (16384).
I also removed the "(for example, 131072)" from the comment about
expanding the value for large numbers of interfaces, because there is no
context specified for the example -- i.e., how many interfaces
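Assuming the value in question is the dpdk section's num-mbufs (my guess from the context), the corresponding startup.conf stanza would look like:

    # /etc/vpp/startup.conf (excerpt)
    dpdk {
      # new default; increase for large numbers of interfaces
      num-mbufs 16384
    }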
On 02/16/2017 04:25 AM, Maciek Konstantynowicz (mkonstan) wrote:
> Karl, Thanks for all answers and clarifications!
> ~23 Mpps - that’s indeed your NIC; we’re observing the same in CSIT.
> One more point re host setup - I read it’s CPU power management disabled,
> CPU TurboBoost disabled, and all memory
Jan,
I have answered below but am forwarding this to Karl who performed the
testing to get the exact answers.
--TFH
On 02/16/2017 08:59 AM, Jan Scheurich wrote:
*From:* Jan Scheurich
*Sent:* Thursday, 16 February, 2017 14:41
*To:* therb...@redhat.
Hi, John/all,
I’ve been able to make RSS work and got some sets of data. But the data
doesn’t look promising compared with the 1 RX queue case, i.e. it is even
slower. My deployment is running on bonding, and even in the case of 1 RX
queue, I already saw three interfaces in total in “show dpdk
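For context, this is roughly how extra RX queues are requested per device in startup.conf (the PCI address and core list are placeholders; worker threads must exist to poll the extra queues):

    # /etc/vpp/startup.conf (excerpt)
    dpdk {
      dev 0000:04:00.0 {
        num-rx-queues 2
      }
    }
    cpu {
      corelist-workers 2-3
    }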
On 02/15/2017 08:58 PM, Alec Hothan (ahothan) wrote:
>
>
> Great summary slides Karl, I have a few more questions on the slides.
>
>
>
> - Did you use OSP10/OSPD/ML2 to deploy your testpmd VM/configure
> the vswitch or is it direct launch using libvirt and direct config of
> the vswi
Looks like I was too optimistic when it comes to the syscalls I was planning to use.
I was not able to get more than 3 Mpps, so I switched to standard shared memory.
After a bit of tuning, I’m getting the following results:
Broadwell 3.2 GHz, TurboBoost disabled:
IXIA - XL710-40G - VPP1 - MEMIF - VPP2 -
Excellent!
Thanks,
-daw-
On 02/16/2017 02:43 PM, Damjan Marion (damarion) wrote:
Looks like I was too optimistic when it comes to the syscalls I was planning to use.
I was not able to get more than 3 Mpps, so I switched to standard shared memory.
After a bit of tuning, I’m getting the following results
Very Interesting...
Damjan,
Do you think it makes sense to use virtio_user/vhost_user pairs to connect
two VPP instances running inside two containers?
Essentially, the memif and virtio_user/vhost_user pairs both leverage shared
memory for fast inter-process
communication, within similar p
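For what it's worth, a minimal sketch of such a pair using DPDK's testpmd (socket path and core lists are placeholders; both containers need access to the same hugepage mount):

    # container A: vhost-user side, creates the socket
    testpmd -l 1-2 --no-pci --vdev=eth_vhost0,iface=/var/run/usvhost0 -- -i
    # container B: virtio-user side, attaches to the same socket
    testpmd -l 3-4 --no-pci --vdev=virtio_user0,path=/var/run/usvhost0 -- -i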
Hey,
I met the same issue when using the latest VPP code.
In VPP 17.01, the CLI command “classify table mask l3 ip4 proto” works well.
Below is the error and bt log:
Could someone take a look at it? Thanks a lot.
DBGvpp# classify table match 1 mask l3 ip4 proto
Thread 1 "vpp_main" received signal SI
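In case it helps with reproduction, a full backtrace can be captured roughly like this (assuming a debug image; the binary name and paths may differ on your setup):

    # attach to the running instance, then reproduce the crash at the CLI
    sudo gdb -p "$(pidof vpp)"
    (gdb) continue
    # after the signal hits:
    (gdb) bt full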