Re: [vpp-dev] VPP build with external package ODP

2017-02-16 Thread Sreejith Surendran Nair
Hi All ,

Gentle reminder. If possible, could you please suggest whether there is
some way to include the ODP package files with VPP in the same way as DPDK? I am
using vpp_lite as the platform to build VPP with ODP.

I thought I could have an external script build the ODP package and copy
the library files to the VPP install path.
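
For what it's worth, a rough sketch of such a script (assuming ODP's usual
autotools flow; the paths are placeholders and the install prefix is the one
mentioned below):

  #!/bin/sh
  # Hypothetical helper: build ODP from source and stage it into the VPP
  # vpp_lite install tree so the VPP build can find its headers and libraries.
  set -e
  ODP_SRC=$HOME/odp          # assumed ODP source checkout
  VPP_ROOT=$HOME/vpp         # assumed VPP source checkout
  ODP_PREFIX=$VPP_ROOT/build-root/install-vpp_lite-native/odp

  cd "$ODP_SRC"
  ./bootstrap                # autotools: generate ./configure
  ./configure --prefix="$ODP_PREFIX"
  make -j"$(nproc)"
  make install               # installs bin/, lib/, include/ under the prefix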

Thanks & Regards,
Sreejith

On 15 February 2017 at 12:10, Sreejith Surendran Nair <
sreejith.surendrann...@linaro.org> wrote:

> Hi All,
>
> I am working on the VPP/ODP Integration project.
>
> As part of the build procedure I am building the ODP package (bin, lib, include)
> before the VPP build and using the ODP prefix option with the VPP install path
> (vpp/build-root/install-vpp_lite-native/odp) to copy the ODP package
> files.
>
> Recently I observed that there is also a packaged version of ODP available (
> http://deb.opendataplane.org/pool/main/o/odp-dpdk/)
>
> I was trying to check if there is a build script available in VPP which
> can be used to install the packaged version of ODP automatically into the VPP
> install path so the manual copy can be avoided.
>
> I have noticed the VPP makefile target "dpdk-install-dev" but could not
> get a complete understanding of how it works internally.
>
> I would appreciate it if anyone could suggest whether there is any option available.
>
> Thanks & Regards,
> Sreejith
>
>
>
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] Interesting perf test results from Red Hat's test team

2017-02-16 Thread Maciek Konstantynowicz (mkonstan)
Karl, Thanks for all answers and clarifications!
~23 Mpps - that’s indeed your NIC; we’re observing the same in CSIT.
One more point re host setup - I read it’s CPU power management disabled,
CPU TurboBoost disabled, and all memory channels populated?
It looks so, but I wanted to recheck as it's not listed on your opening slide :)

-Maciek

> On 15 Feb 2017, at 21:55, Karl Rister  wrote:
> 
> On 02/15/2017 03:28 PM, Maciek Konstantynowicz (mkonstan) wrote:
>> Thomas, many thanks for sending this.
>> 
>> Few comments and questions after reading the slides:
>> 
>> 1. s3 clarification - host and data plane thread setup - vswitch pmd
>> (data plane) thread placement
>>a. "1PMD/core (4 core)” - HT (SMT) disabled, 4 phy cores used for
>> vswitch, each with data plane thread.
>>b. “2PMD/core (2 core)” - HT (SMT) enabled, 2 phy cores, 4 logical
>> cores used for vswitch, each with data plane thread.
>>c. in both cases each data plane thread handling a single interface
>> - 2* physical, 2* vhost => 4 threads, all busy.
> 
> Correct.
> 
>>d. in both cases frames are dropped by vswitch or in vring due to
>> vswitch not keeping up - IOW testpmd in kvm guest is not DUT.
> 
> That is the intent.
> 
>> 2. s3 question - vswitch setup - it is unclear what is the forwarding
>> mode of each vswitch, as only srcIp changed in flows
>>a. flow or MAC learning mode?
> 
> In OVS we program flow rules that pass bidirectional traffic between a
> physical and vhost port pair.
> 
>>b. port to port crossconnect?
> 
> In VPP we are using xconnect.
> 
>> 3. s3 comment - host and data plane thread setup
>>a. “2PMD/core (2 core)” case - thread placement may yield different
>> results 
>>- physical interface threads as siblings vs. 
>>- physical and virtual interface threads as siblings.
> 
> In both OVS and VPP a physical interface thread is paired with a virtual
> interface thread on the same core.
> 
>>b. "1PMD/core (4 core)” - one would expect these to be much higher
>> than “2PMD/core (2 core)”
>>- speculation: possibly due to "instruction load" imbalance
>> between threads.
>>- two types of thread with different "instruction load":
>> phy->vhost vs. vhost->phy
>>- "instruction load" = instr/pkt, instr/cycle (IPC efficiency).
>> 4. s4 comment - results look as expected for vpp
>> 5. s5 question - unclear why throughput doubled 
>>a. e.g. for vpp from "11.16 Mpps" to "22.03 Mpps"
>>b. if only queues increased, and cpu resources did not, or have they?
>> 6. s6 question - similar to point 5. - unclear cpu and thread resources.
> 
> Queues and cores increase together.  In the host, single queue uses 4 PMD
> threads on 2 cores, two queues use 8 PMD threads on 4 cores, and three
> queues use 12 PMD threads on 6 cores.  In the guest we used 2, 4, and 6
> cores in testpmd without using sibling hyperthreads in order to avoid
> bottlenecks in the guest.
> 
>> 7. s7 comment - anomaly for 3q (virtio multi-queue) for (srcMac,dstMac)
>>a. could be due to flow hashing inefficiency.
> 
> That was our thinking and where we were going to look first.
> 
> I think I have tracked the three queue issue to using too many mbufs for
> multi-queue as addressed in this commit:
> 
> https://github.com/vpp-dev/vpp/commit/a06dfb39c6bee3fbfd702c10e1e1416b98e65455
> 
> I originally used the suggestion of 131072 from this page:
> 
> https://wiki.fd.io/view/VPP/Command-line_Arguments#.22dpdk.22_parameters
> 
> I'm now testing with 3 queue and 32768 mbufs and getting in excess of 23
> Mpps across all the flow configurations except the one with the hashing
> issue.  For our hardware configuration we believe this is hardware
> limited and could potentially go faster (as mentioned on slide 6).
> 
>> 
>> -Maciek
>> 
>>> On 15 Feb 2017, at 17:34, Thomas F Herbert wrote:
>>> 
>>> Here are test results on VPP 17.01 compared with OVS/DPDK 2.6/1611
>>> performed by Karl Rister of Red Hat.
>>> 
>>> This is PVP testing with 1, 2 and 3 queues. It is an interesting
>>> comparison with the CSIT results. Of particular interest is the drop
>>> off on the 3 queue results.
>>> 
>>> --TFH
>>> 
>>> 
>>> -- 
>>> *Thomas F Herbert*
>>> SDN Group
>>> Office of Technology
>>> *Red Hat*
>>> ___
>>> vpp-dev mailing list
>>> vpp-dev@lists.fd.io 
>>> https://lists.fd.io/mailman/listinfo/vpp-dev
>> 
> 
> 
> -- 
> Karl Rister 

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev


Re: [vpp-dev] VPP build with external package ODP

2017-02-16 Thread Damjan Marion

Dear Sreejith,

I don't think anybody here is familiar enough with ODP to make suggestions.

Can you come up with some proposal? Possibly a reasonably detailed writeup on
the mailing list?

Thanks,

Damjan


> On 16 Feb 2017, at 11:16, Sreejith Surendran Nair wrote:
> 
> Hi All ,
> 
> Gentle reminder. If possible, could you please suggest whether there is 
> some way to include the ODP package files with VPP in the same way as DPDK? I am 
> using vpp_lite as the platform to build VPP with ODP.
> 
> I thought I could have an external script build the ODP package and copy the 
> library files to the VPP install path.
> 
> Thanks & Regards,
> Sreejith
> 
> On 15 February 2017 at 12:10, Sreejith Surendran Nair 
>  > wrote:
> Hi All,
> 
> I am working on the VPP/ODP Integration project. 
> 
> As part of build procedure I am building the ODP package(bin,lib,include) 
> before the VPP build and using ODP prefix option with VPP install path 
> (vpp/build-root/install-vpp_lite-native/odp) to copy the ODP package files.
> 
> Recently I observed there is also ODP packaged version available 
> (http://deb.opendataplane.org/pool/main/o/odp-dpdk/ 
> )
> 
> I was trying to check if there is a build script available in VPP which can 
> be used to install the packaged version of ODP automatically to the VPP 
> install path so the manual copy can be avoided.
> 
> I have noticed the VPP makefile target "dpdk-install-dev" but could not 
> get a complete understanding of how it works internally.
> 
> I would appreciate it if anyone could suggest whether there is any option available.
> 
> Thanks & Regards,
> Sreejith
> 
>  
> 
> ___
> vpp-dev mailing list
> vpp-dev@lists.fd.io
> https://lists.fd.io/mailman/listinfo/vpp-dev

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] VPP build with external package ODP

2017-02-16 Thread Sreejith Surendran Nair
Hi Damjan,

Thanks a lot for the kind reply. Sure, I will try to share a more detailed
note on ODP and discuss it.

Thanks & Regards,
Sreejith


On 16 February 2017 at 18:29, Damjan Marion  wrote:

>
> Dear Sreejith,
>
> I don't think anybody here is familiar enough with ODP to make
>  suggestions.
>
> Can you come up with some proposal? Possibly a reasonably detailed writeup
> on the mailing list?
>
> Thanks,
>
> Damjan
>
>
> On 16 Feb 2017, at 11:16, Sreejith Surendran Nair wrote:
>
> Hi All ,
>
> Gentle reminder. If possible, could you please suggest whether there
> is some way to include the ODP package files with VPP in the same way as DPDK? I am
> using vpp_lite as the platform to build VPP with ODP.
>
> I thought I could have an external script build the ODP package and copy
> the library files to the VPP install path.
>
> Thanks & Regards,
> Sreejith
>
> On 15 February 2017 at 12:10, Sreejith Surendran Nair <
> sreejith.surendrann...@linaro.org> wrote:
>
>> Hi All,
>>
>> I am working on the VPP/ODP Integration project.
>>
>> As part of build procedure I am building the ODP package(bin,lib,include)
>> before the VPP build and using ODP prefix option with VPP install path
>> (vpp/build-root/install-vpp_lite-native/odp) to copy the ODP package
>> files.
>>
>> Recently I observed there is also ODP packaged version available (
>> http://deb.opendataplane.org/pool/main/o/odp-dpdk/)
>>
>> I was trying to check if there is a build script available in VPP which
>> can be used to install the packaged version of ODP automatically to the VPP
>> install path so the manual copy can be avoided.
>>
>> I have noticed the VPP makefile target "dpdk-install-dev" but could
>> not get a complete understanding of how it works internally.
>>
>> I would appreciate it if anyone could suggest whether there is any option available.
>>
>> Thanks & Regards,
>> Sreejith
>>
>>
>>
>
> ___
> vpp-dev mailing list
> vpp-dev@lists.fd.io
> https://lists.fd.io/mailman/listinfo/vpp-dev
>
>
>
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] Interesting perf test results from Red Hat's test team

2017-02-16 Thread Karl Rister
On 02/15/2017 03:28 PM, Maciek Konstantynowicz (mkonstan) wrote:
> Thomas, many thanks for sending this.
> 
> Few comments and questions after reading the slides:
> 
> 1. s3 clarification - host and data plane thread setup - vswitch pmd
> (data plane) thread placement
> a. "1PMD/core (4 core)” - HT (SMT) disabled, 4 phy cores used for
> vswitch, each with data plane thread.
> b. “2PMD/core (2 core)” - HT (SMT) enabled, 2 phy cores, 4 logical
> cores used for vswitch, each with data plane thread.
> c. in both cases each data plane thread handling a single interface
> - 2* physical, 2* vhost => 4 threads, all busy.

Correct.

> d. in both cases frames are dropped by vswitch or in vring due to
> vswitch not keeping up - IOW testpmd in kvm guest is not DUT.

That is the intent.

> 2. s3 question - vswitch setup - it is unclear what is the forwarding
> mode of each vswitch, as only srcIp changed in flows
> a. flow or MAC learning mode?

In OVS we program flow rules that pass bidirectional traffic between a
physical and vhost port pair.

> b. port to port crossconnect?

In VPP we are using xconnect.
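
(For illustration only - a minimal sketch of that kind of wiring, with made-up
bridge names, port numbers and interface names rather than the exact commands
used in these tests:

  # OVS: one bidirectional rule pair per physical/vhost port couple
  ovs-ofctl add-flow ovsbr0 in_port=1,actions=output:3
  ovs-ofctl add-flow ovsbr0 in_port=3,actions=output:1

  # VPP: L2 cross-connect between a physical and a vhost-user interface
  set interface l2 xconnect TenGigabitEthernet5/0/0 VirtualEthernet0/0/0
  set interface l2 xconnect VirtualEthernet0/0/0 TenGigabitEthernet5/0/0
)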

> 3. s3 comment - host and data plane thread setup
> a. “2PMD/core (2 core)” case - thread placement may yield different
> results 
> - physical interface threads as siblings vs. 
> - physical and virtual interface threads as siblings.

In both OVS and VPP a physical interface thread is paired with a virtual
interface thread on the same core.

> b. "1PMD/core (4 core)” - one would expect these to be much higher
> than “2PMD/core (2 core)”
> - speculation: possibly due to "instruction load" imbalance
> between threads.
> - two types of thread with different "instruction load":
> phy->vhost vs. vhost->phy
> - "instruction load" = instr/pkt, instr/cycle (IPC efficiency).
> 4. s4 comment - results look as expected for vpp
> 5. s5 question - unclear why throughput doubled 
> a. e.g. for vpp from "11.16 Mpps" to "22.03 Mpps"
> b. if only queues increased, and cpu resources did not, or have they?
> 6. s6 question - similar to point 5. - unclear cpu and thread resources.

Queues and cores increase together.  In the host, single queue uses 4 PMD
threads on 2 cores, two queues use 8 PMD threads on 4 cores, and three
queues use 12 PMD threads on 6 cores.  In the guest we used 2, 4, and 6
cores in testpmd without using sibling hyperthreads in order to avoid
bottlenecks in the guest.
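
(Illustratively, for the three-queue case the VPP host side corresponds to a
startup.conf roughly like the sketch below - the core numbers are made up and
the queue/worker mapping is simplified, so treat it as a shape rather than the
exact configuration used in these tests:

cpu {
    main-core 1
    ## 6 physical cores plus their HT siblings = 12 worker threads
    corelist-workers 2-7,30-35
}
dpdk {
    dev default {
        ## 3 RX queues per port for the three-queue test case
        num-rx-queues 3
    }
})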

> 7. s7 comment - anomaly for 3q (virtio multi-queue) for (srcMac,dstMac)
> a. could be due to flow hashing inefficiency.

That was our thinking and where we were going to look first.

I think I have tracked the three queue issue to using too many mbufs for
multi-queue as addressed in this commit:

https://github.com/vpp-dev/vpp/commit/a06dfb39c6bee3fbfd702c10e1e1416b98e65455

I originally used the suggestion of 131072 from this page:

https://wiki.fd.io/view/VPP/Command-line_Arguments#.22dpdk.22_parameters

I'm now testing with 3 queues and 32768 mbufs and getting in excess of 23
Mpps across all the flow configurations except the one with the hashing
issue.  For our hardware configuration we believe this is hardware
limited and could potentially go faster (as mentioned on slide 6).
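
(In startup.conf terms that is just the dpdk num-mbufs knob - a sketch, not the
full config:

dpdk {
    ## 32768 mbufs instead of the 131072 suggested on the wiki page above
    num-mbufs 32768
})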

> 
> -Maciek
> 
>> On 15 Feb 2017, at 17:34, Thomas F Herbert wrote:
>>
>> Here are test results on VPP 17.01 compared with OVS/DPDK 2.6/1611
>> performed by Karl Rister of Red Hat.
>>
>> This is PVP testing with 1, 2 and 3 queues. It is an interesting
>> comparison with the CSIT results. Of particular interest is the drop
>> off on the 3 queue results.
>>
>> --TFH
>>
>>
>> -- 
>> *Thomas F Herbert*
>> SDN Group
>> Office of Technology
>> *Red Hat*
>> ___
>> vpp-dev mailing list
>> vpp-dev@lists.fd.io 
>> https://lists.fd.io/mailman/listinfo/vpp-dev
> 


-- 
Karl Rister 
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev


[vpp-dev] FW: Re: Interesting perf test results from Red Hat's test team

2017-02-16 Thread Jan Scheurich

From: Jan Scheurich
Sent: Thursday, 16 February, 2017 14:41
To: therb...@redhat.com
Cc: vpp-...@fd.io
Subject: Re: [vpp-dev] Interesting perf test results from Red Hat's test team


Hi Thomas,

Thanks for these interesting measurements. I am not quite sure I fully 
understand the different configurations and traffic cases you have been testing:

*   Do you vary the number of vhost-user queues to the guest and/or the 
number of RX queues for the phy port?
*   Did you add cores at the same time you added queues?
*   When you say flows, do you mean L3/L4 packet flows (5-tuples) or 
forwarding rules/flow rules?
*   When you e.g. say N flows (srcip, dstip) do you mean matching on these 
fields, modifying these fields or both?

Would it be possible to provide the exact VPP and OVS configurations that were 
used (ports, queues, cores, forwarding rules/flows)?

Thanks, Jan


Here are test results on VPP 17.01 compared with OVS/DPDK 2.6/1611
performed by Karl Rister of Red Hat.

This is PVP testing with 1, 2 and 3 queues. It is an interesting
comparison with the CSIT results. Of particular interest is the drop off
on the 3 queue results.

--TFH


--
*Thomas F Herbert*
SDN Group
Office of Technology
*Red Hat*
-- next part --
An HTML attachment was scrubbed...
URL: 

-- next part --
A non-text attachment was scrubbed...
Name: vpp-17.01_vs_ovs-2.6.pdf
Type: application/pdf
Size: 243918 bytes
Desc: not available
URL: 




___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

[vpp-dev] FW: [discuss] NOTIFICATION: Jenkins queue backed up

2017-02-16 Thread Dave Barach (dbarach)

-Original Message-
From: discuss-boun...@lists.fd.io [mailto:discuss-boun...@lists.fd.io] On 
Behalf Of Vanessa Valderrama
Sent: Thursday, February 16, 2017 10:08 AM
To: disc...@lists.fd.io; t...@lists.fd.io; infra-steer...@lists.fd.io
Subject: [discuss] NOTIFICATION: Jenkins queue backed up

The Jenkins queue is backing up due to instances not being instantiated.  We 
have a high priority ticket with the vendor. I will provide an update as soon 
as I
have more information.  I apologize for the inconvenience.

Thank you,
Vanessa


___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev


[vpp-dev] FW: [tsc] NOTIFICATION: Jenkins queue backed up

2017-02-16 Thread Dave Barach (dbarach)
-Original Message-
From: tsc-boun...@lists.fd.io [mailto:tsc-boun...@lists.fd.io] On Behalf Of 
Vanessa Valderrama
Sent: Thursday, February 16, 2017 10:17 AM
To: disc...@lists.fd.io; t...@lists.fd.io; infra-steer...@lists.fd.io
Subject: Re: [tsc] NOTIFICATION: Jenkins queue backed up

I apologize for the miscommunication.  There was a vendor issue with
another project, not FD.io.  It appears the issue with the FD.io Jenkins
queue is due to the number of TREX jobs in the queue.  Instances are
instantiating as expected. 

Thank you,
Vanessa

On 02/16/2017 09:07 AM, Vanessa Valderrama wrote:
> The Jenkins queue is backing up due to instances not being instantiated.  We 
> have a high priority ticket with the vendor. I will provide an update as soon 
> as I
> have more information.  I apologize for the inconvenience.
>
> Thank you,
> Vanessa
>
>


___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev


[vpp-dev] Jenkins queue backup (executor starvation)

2017-02-16 Thread Dave Barach (dbarach)
Dear Vanessa,

Have we an estimate for how long the Jenkins queue will be effectively unusable 
due to what appear to be 300 jobs in the queue?

How about spinning up extra slaves to clear the backlog?

Thanks... Dave

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] Jenkins queue backup (executor starvation)

2017-02-16 Thread Vanessa Valderrama
Dave,

The backlog was due to the 300+ TREX jobs.  After clearing the TREX
jobs, the queue is down to 47. 

Thank you,
Vanessa

On 02/16/2017 09:23 AM, Dave Barach (dbarach) wrote:
>
> Dear Vanessa,
>
>  
>
> Have we an estimate for how long the Jenkins queue will be effectively
> unusable due to what appear to be 300 jobs in the queue?
>
>  
>
> How about spinning up extra slaves to clear the backlog?
>
>  
>
> Thanks… Dave
>
>  
>



___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] Interesting perf test results from Red Hat's test team

2017-02-16 Thread Dave Wallace

FYI,

I have updated the VPP wiki to reflect the new default based on Damjan's 
commit (16384).


I also removed the "(for example, 131072)" from the comment about 
expanding the value for large numbers of interfaces because there is no 
context specified for the example -- i.e. how many interfaces this value 
was tested with.


I will push a patch which fixes the comment in 
.../vpp/src/vpp/conf/startup.conf to reflect the current default.
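
(For illustration, the updated stanza would presumably read something like the
sketch below - the exact wording will be whatever lands in the patch:

    ## Increase number of buffers allocated, needed only in scenarios with
    ## large number of interfaces and worker threads. Value is per CPU socket.
    ## Default is 16384
    # num-mbufs 128000
)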


Thanks,
-daw-

On 02/15/2017 04:55 PM, Karl Rister wrote:

I think I have tracked the three queue issue to using too many mbufs for
multi-queue as addressed in this commit:

https://github.com/vpp-dev/vpp/commit/a06dfb39c6bee3fbfd702c10e1e1416b98e65455

I originally used the suggestion of 131072 from this page:

https://wiki.fd.io/view/VPP/Command-line_Arguments#.22dpdk.22_parameters

I'm now testing with 3 queue and 32768 mbufs and getting in excess of 23
Mpps across all the flow configurations except the one with the hashing
issue.  For our hardware configuration we believe this is hardware
limited and could potentially go faster (as mentioned on slide 6).


___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] Interesting perf test results from Red Hat's test team

2017-02-16 Thread Karl Rister
On 02/16/2017 04:25 AM, Maciek Konstantynowicz (mkonstan) wrote:
> Karl, Thanks for all answers and clarifications!
> ~23 Mpps - that’s indeed your NIC; we’re observing the same in CSIT.
> One more point re host setup - I read it’s CPU power management disabled,
> CPU TurboBoost disabled, and all memory channels populated?
> It looks so, but I wanted to recheck as it's not listed on your opening slide :)

Correct on all counts.

> 
> -Maciek
> 
>> On 15 Feb 2017, at 21:55, Karl Rister  wrote:
>>
>> On 02/15/2017 03:28 PM, Maciek Konstantynowicz (mkonstan) wrote:
>>> Thomas, many thanks for sending this.
>>>
>>> Few comments and questions after reading the slides:
>>>
>>> 1. s3 clarification - host and data plane thread setup - vswitch pmd
>>> (data plane) thread placement
>>>a. "1PMD/core (4 core)” - HT (SMT) disabled, 4 phy cores used for
>>> vswitch, each with data plane thread.
>>>b. “2PMD/core (2 core)” - HT (SMT) enabled, 2 phy cores, 4 logical
>>> cores used for vswitch, each with data plane thread.
>>>c. in both cases each data plane thread handling a single interface
>>> - 2* physical, 2* vhost => 4 threads, all busy.
>>
>> Correct.
>>
>>>d. in both cases frames are dropped by vswitch or in vring due to
>>> vswitch not keeping up - IOW testpmd in kvm guest is not DUT.
>>
>> That is the intent.
>>
>>> 2. s3 question - vswitch setup - it is unclear what is the forwarding
>>> mode of each vswitch, as only srcIp changed in flows
>>>a. flow or MAC learning mode?
>>
>> In OVS we program flow rules that pass bidirectional traffic between a
>> physical and vhost port pair.
>>
>>>b. port to port crossconnect?
>>
>> In VPP we are using xconnect.
>>
>>> 3. s3 comment - host and data plane thread setup
>>>a. “2PMD/core (2 core)” case - thread placement may yield different
>>> results 
>>>- physical interface threads as siblings vs. 
>>>- physical and virtual interface threads as siblings.
>>
>> In both OVS and VPP a physical interface thread is paired with a virtual
>> interface thread on the same core.
>>
>>>b. "1PMD/core (4 core)” - one would expect these to be much higher
>>> than “2PMD/core (2 core)”
>>>- speculation: possibly due to "instruction load" imbalance
>>> between threads.
>>>- two types of thread with different "instruction load":
>>> phy->vhost vs. vhost->phy
>>>- "instruction load" = instr/pkt, instr/cycle (IPC efficiency).
>>> 4. s4 comment - results look as expected for vpp
>>> 5. s5 question - unclear why throughput doubled 
>>>a. e.g. for vpp from "11.16 Mpps" to "22.03 Mpps"
>>>b. if only queues increased, and cpu resources did not, or have they?
>>> 6. s6 question - similar to point 5. - unclear cpu and thread resources.
>>
>> Queues and cores increase together.  In the host, single queue uses 4 PMD
>> threads on 2 cores, two queues use 8 PMD threads on 4 cores, and three
>> queues use 12 PMD threads on 6 cores.  In the guest we used 2, 4, and 6
>> cores in testpmd without using sibling hyperthreads in order to avoid
>> bottlenecks in the guest.
>>
>>> 7. s7 comment - anomaly for 3q (virtio multi-queue) for (srcMac,dstMac)
>>>a. could be due to flow hashing inefficiency.
>>
>> That was our thinking and where we were going to look first.
>>
>> I think I have tracked the three queue issue to using too many mbufs for
>> multi-queue as addressed in this commit:
>>
>> https://github.com/vpp-dev/vpp/commit/a06dfb39c6bee3fbfd702c10e1e1416b98e65455
>>
>> I originally used the suggestion of 131072 from this page:
>>
>> https://wiki.fd.io/view/VPP/Command-line_Arguments#.22dpdk.22_parameters
>>
>> I'm now testing with 3 queue and 32768 mbufs and getting in excess of 23
>> Mpps across all the flow configurations except the one with the hashing
>> issue.  For our hardware configuration we believe this is hardware
>> limited and could potentially go faster (as mentioned on slide 6).
>>
>>>
>>> -Maciek
>>>
 On 15 Feb 2017, at 17:34, Thomas F Herbert wrote:

 Here are test results on VPP 17.01 compared with OVS/DPDK 2.6/1611
 performed by Karl Rister of Red Hat.

 This is PVP testing with 1, 2 and 3 queues. It is an interesting
 comparison with the CSIT results. Of particular interest is the drop
 off on the 3 queue results.

 --TFH


 -- 
 *Thomas F Herbert*
 SDN Group
 Office of Technology
 *Red Hat*
 ___
 vpp-dev mailing list
 vpp-dev@lists.fd.io 
 https://lists.fd.io/mailman/listinfo/vpp-dev
>>>
>>
>>
>> -- 
>> Karl Rister 
> 


-- 
Karl Rister 
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev


Re: [vpp-dev] FW: Re: Interesting perf test results from Red Hat's test team

2017-02-16 Thread Thomas F Herbert


Jan,
I have answered below but am forwarding this to Karl who performed the 
testing to get the exact answers.


--TFH

On 02/16/2017 08:59 AM, Jan Scheurich wrote:


*From:* Jan Scheurich
*Sent:* Thursday, 16 February, 2017 14:41
*To:* therb...@redhat.com
*Cc:* vpp-...@fd.io
*Subject:* Re: [vpp-dev] Interesting perf test results from Red Hat's 
test team

Hi Thomas,
Thanks for these interesting measurements. I am not quite sure I fully 
understand the different configurations and traffic cases you have 
been testing:


  * Do you vary the number of vhost-user queues to the guest and/or
the number of RX queues for the phy port?


These are vhost-user queues, because OVS and/or VPP is running in the host.


  * Did you add cores at the same time you added queues?


Yes


  * When you say flows, do you mean L3/L4 packet flows (5-tuples) or
forwarding rules/flow rules?


These are L2 matches.


  * When you e.g. say N flows (srcip, dstip) do you mean matching on
these fields, modifying these fields, or both?

Would it be possible to provide the exact VPP and OVS configurations 
that were used (ports, queues, cores, forwarding rules/flows)?

Thanks, Jan
Here are test results on VPP 17.01 compared with OVS/DPDK 2.6/1611
performed by Karl Rister of Red Hat.
This is PVP testing with 1, 2 and 3 queues. It is an interesting
comparison with the CSIT results. Of particular interest is the drop off
on the 3 queue results.
--TFH
--
*Thomas F Herbert*
SDN Group
Office of Technology
*Red Hat*
-- next part --
An HTML attachment was scrubbed...
URL: 
<_http://lists.fd.io/pipermail/vpp-dev/attachments/20170215/83249b21/attachment-0001.html_>

-- next part --
A non-text attachment was scrubbed...
Name: vpp-17.01_vs_ovs-2.6.pdf
Type: application/pdf
Size: 243918 bytes
Desc: not available
URL: 
<_http://lists.fd.io/pipermail/vpp-dev/attachments/20170215/83249b21/attachment-0001.pdf_>



___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev


--
*Thomas F Herbert*
SDN Group
Office of Technology
*Red Hat*
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] Use RSS in VPP 17.01

2017-02-16 Thread Yichen Wang (yicwang)
Hi, John/all,

I’ve been able to make RSS work and got some sets of data, but the data 
doesn’t look promising compared with the 1 RX queue case; it is actually slower. My 
deployment is running on bonding, and even in the case of 1 RX queue, I saw 
three interfaces in total already in “show dpdk interface placement”, 2 
physical + 1 bonding. So if I want to set up 2 RX queues, do I have to:

(1) Set num-rx-queues to 2 for all my 3 interfaces including the bonding one, 
i.e. am I expected to see 6 interfaces in total in “show dpdk interface 
placement”? How does VPP work internally when a bonding interface is created?

(2) In the loopback VM, is it a must to have 2 RX queues as well, or 
does the loopback VM not really matter?

Thanks very much!

Regards,
Yichen

From: "Yichen Wang (yicwang)" 
Date: Monday, February 13, 2017 at 23:39
To: "John Lo (loj)" , "vpp-dev@lists.fd.io" 

Cc: "Ian Wells (iawells)" 
Subject: Re: [vpp-dev] Use RSS in VPP 17.01

Ok, figured it out. I need a larger num-mbufs when RSS is enabled…

Regards,
Yichen

From:  on behalf of "Yichen Wang (yicwang)" 

Date: Monday, February 13, 2017 at 17:23
To: "John Lo (loj)" , "vpp-dev@lists.fd.io" 

Cc: "Ian Wells (iawells)" 
Subject: Re: [vpp-dev] Use RSS in VPP 17.01

Hi, John/all,

Thanks for your pointer, and I am able to bring up my VPP with multiple queues!

I am doing a PVP test, which basically I am expecting Traffic Generator -> VPP 
on Host -> Loopback VM (testpmd) -> VPP on Host -> Traffic Generator. I can see 
the packets are delivered to the Loopback VM with no problem, but:

(1)   VPP shows all packets are dropped:

VirtualEthernet0/0/2   8     up    tx packets       3237064
                                   tx bytes       194223840
                                   drops            3237051

But I did check testpmd and it got all packets, and does its job correctly by 
forwarding the packets to the other interfaces;

(2)   VPP show err:

   Count         Node                 Reason
692521171     vhost-user-input     no available buffer

Why does it say “no available buffer”? It works pretty well without RSS. Did I 
miss anything?

Thanks very much for your helps!

Regards,
Yichen

From: "John Lo (loj)" 
Date: Thursday, February 9, 2017 at 20:09
To: "Yichen Wang (yicwang)" , "vpp-dev@lists.fd.io" 

Cc: "Ian Wells (iawells)" 
Subject: RE: Use RSS in VPP 17.01

For VPP, the number of queues on a device can be specified in the DPDK portion 
of the startup config, which defaults to 1. This is usually documented as 
comments in the startup.conf template file when installing VPP rpm/deb on the 
target Linux OS. Following is the dpdk portion from the startup.conf in 
/etc/vpp/ directory after installing the vpp deb packages on my Ubuntu server:

dpdk {
## Change default settings for all interfaces
# dev default {
   ## Number of receive queues, enables RSS
   ## Default is 1
   # num-rx-queues 3

   ## Number of transmit queues, Default is equal
   ## to number of worker threads or 1 if no worker threads
   # num-tx-queues 3

   ## Number of descriptors in transmit and receive rings
   ## increasing or reducing number can impact performance
   ## Default is 1024 for both rx and tx
   # num-rx-desc 512
   # num-tx-desc 512

   ## VLAN strip offload mode for interface
   ## Default is off
   # vlan-strip-offload on
# }

## Whitelist specific interface by specifying PCI address
# dev :02:00.0

## Whitelist specific interface by specifying PCI address and in
## addition specify custom parameters for this interface
# dev :02:00.1 {
#   num-rx-queues 2
# }

## Change UIO driver used by VPP, Options are: uio_pci_generic, vfio-pci
## and igb_uio (default)
# uio-driver uio_pci_generic

## Disable multi-segment buffers, improves performance but
## disables Jumbo MTU support
# no-multi-seg

## Increase number of buffers allocated, needed only in scenarios with
## large number of interfaces and worker threads. Value is per CPU 
socket.
## Default is 32768
# num-mbufs 128000

## Change hugepages allocation per-socket, needed only if there is need 
for
## larger number of mbufs. Default is 256M on each detected CPU socket
# socket-mem 2048,2048
}

Regards,
John

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Yichen Wang (yicwang)
Sent: Thursday, February 09, 2017 10:38 PM
To: vpp-dev@lists.fd.io
Cc: Ian Wells (iawells) 
Subject: [vpp-dev] Use RSS in VPP 17.01

Hi, VPP folks,

From what I saw in the VPP docs, some places do mention that VPP 
supports RSS. Like the example 

Re: [vpp-dev] Interesting perf test results from Red Hat's test team

2017-02-16 Thread Karl Rister
On 02/15/2017 08:58 PM, Alec Hothan (ahothan) wrote:
>  
> 
> Great summary slides Karl, I have a few more questions on the slides.
> 
>  
> 
> · Did you use OSP10/OSPD/ML2 to deploy your testpmd VM/configure
> the vswitch or is it direct launch using libvirt and direct config of
> the vswitches? (this is a bit related to Maciek’s question on the exact
> interface configs in the vswitch)

There was no use of OSP in these tests, the guest is launched via
libvirt and the vswitches are manually launched and configured with
shell scripts.

> 
> · Unclear if all the charts results were measured using 4 phys
> cores (no HT) or 2 phys cores (4 threads with HT)

Only slide 3 has any 4 core (no HT) data; all other data is captured
using HT on the appropriate number of cores: 2 for single queue, 4 for
two queues, and 6 for three queues.

> 
> · How do you report your pps? ;-) Are those
> 
> o   vswitch centric (how many packets the vswitch forwards per second
> coming from traffic gen and from VMs)
> 
> o   or traffic gen centric aggregated TX (how many pps are sent by the
> traffic gen on both interfaces)
> 
> o   or traffic gen centric aggregated TX+RX (how many pps are sent and
> received by the traffic gen on both interfaces)

The pps is the bi-directional sum of the packets received back at the
traffic generator.
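(So, as an illustrative example: roughly 11 Mpps received back on each of the
two generator ports would be reported as roughly 22 Mpps.)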

> 
> · From the numbers shown, it looks like it is the first or the last
> 
> · Unidirectional or symmetric bi-directional traffic?

symmetric bi-directional

> 
> · BIOS Turbo boost enabled or disabled?

disabled

> 
> · How many vcpus running the testpmd VM?

3, 5, or 7.  1 VCPU for housekeeping and then 2 VCPUs for each queue
configuration.  Only the required VCPUs are active for any
configuration, so the VCPU count varies depending on the configuration
being tested.

> 
> · How do you range the combinations in your 1M flows src/dest
> MAC? I’m not aware of any real NFV cloud deployment/VNF that handles
> that type of flow pattern, do you?

We increment all the fields being modified by one for each packet until
we hit a million and then we restart at the base value and repeat.  So
all IPs and/or MACs get modified in unison.

We actually arrived at the srcMac,dstMac configuration in a backwards
manner.  On one of our systems where we develop the traffic generator we
were getting an error when doing srcMac,dstMac,srcIp,dstIp that we
couldn't figure out in the time needed for this work so we were going to
just go with srcMac,dstMac due to time constraints.  However, on the
system where we actually did the testing both worked so I just collected
both out of curiosity.

> 
>  
> 
> Thanks
> 
>  
> 
>   Alec
> 
>  
> 
>  
> 
> *From: * on behalf of "Maciek
> Konstantynowicz (mkonstan)" 
> *Date: *Wednesday, February 15, 2017 at 1:28 PM
> *To: *Thomas F Herbert 
> *Cc: *Andrew Theurer , Douglas Shakshober
> , "csit-...@lists.fd.io" ,
> vpp-dev , Karl Rister 
> *Subject: *Re: [vpp-dev] Interesting perf test results from Red
> Hat's test team
> 
>  
> 
> Thomas, many thanks for sending this.
> 
>  
> 
> Few comments and questions after reading the slides:
> 
>  
> 
> 1. s3 clarification - host and data plane thread setup - vswitch pmd
> (data plane) thread placement
> 
> a. "1PMD/core (4 core)” - HT (SMT) disabled, 4 phy cores used
> for vswitch, each with data plane thread.
> 
> b. “2PMD/core (2 core)” - HT (SMT) enabled, 2 phy cores, 4
> logical cores used for vswitch, each with data plane thread.
> 
> c. in both cases each data plane thread handling a single
> interface - 2* physical, 2* vhost => 4 threads, all busy.
> 
> d. in both cases frames are dropped by vswitch or in vring due
> to vswitch not keeping up - IOW testpmd in kvm guest is not DUT.
> 
> 2. s3 question - vswitch setup - it is unclear what is the
> forwarding mode of each vswitch, as only srcIp changed in flows
> 
> a. flow or MAC learning mode?
> 
> b. port to port crossconnect?
> 
> 3. s3 comment - host and data plane thread setup
> 
> a. “2PMD/core (2 core)” case - thread placement may yield
> different results 
> 
> - physical interface threads as siblings vs. 
> 
> - physical and virtual interface threads as siblings.
> 
> b. "1PMD/core (4 core)” - one would expect these to be much
> higher than “2PMD/core (2 core)”
> 
> - speculation: possibly due to "instruction load" imbalance
> between threads.
> 
> - two types of thread with different "instruction load":
> phy->vhost vs. vhost->phy
> 
> - "instruction load" = instr/pkt, instr/cycle (IPC efficiency).
> 
> 4. s4 comment - results look as expected for vpp
> 
> 5. s5 question - unclear why throughput doubled 
> 
> a. e.g. for vpp from "11.16 Mpps" to "22.03 Mpps"
> 
>   

Re: [vpp-dev] memif - packet memory interface

2017-02-16 Thread Damjan Marion (damarion)

Looks like I was too optimistic when it comes to the syscalls I was planning to use.
I was not able to get more than 3 Mpps so I switched to standard shared memory.

After a bit of tuning, I’m getting following results:

broadwell 3.2GHz, TurboBoost disabled:

IXIA - XL710-40G - VPP1 - MEMIF - VPP2 - XL710-40G - IXIA

Both VPP instances are running single-core.
So it is a symmetrical setup where each VPP is forwarding between the physical NIC and 
MEMIF.
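
(If anyone wants to reproduce the wiring: one way to do it - not necessarily the
exact forwarding setup used in this test - is an L2 cross-connect between the
physical port and the memif on each VPP instance; the interface names below are
made up, and the second instance uses "slave" instead of "master":

create memif socket /run/vpp/memif.sock master
set int state memif0 up
set interface l2 xconnect FortyGigabitEthernet86/0/0 memif0
set interface l2 xconnect memif0 FortyGigabitEthernet86/0/0
)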

With 64B packets, I’m getting 13.6 Mpps aggregate throughput.
With 1500B packets, I’m getting around 29Gbps.

A good thing with this new setup is that both VPPs can be inside un-privileged 
containers.

New code is in gerrit...


> On 14 Feb 2017, at 14:21, Damjan Marion (damarion)  wrote:
> 
> 
> I got first pings running over new shared memory interface driver.
> Code [1] is still very fragile, but basic packet forwarding works ...
> 
> This interface defines master/slave relationship.
> 
> Some characteristics:
> - slave can run inside un-privileged containers
> - master can run inside container, but it requires global PID namespace and 
> PTRACE capability
> - initial connection is done over the unix socket, so for container 
> networking socket file needs to be mapped into container
> - slave allocates shared memory for descriptor rings and passes FD to master
> - slave is ring producer for both tx and rx, it fills rings with either full 
> or empty buffers
> - master is ring consumer, it reads descriptors and executes memcpy from/to 
> buffer
> - process_vm_readv, process_vm_writev linux system calls are used for copy of 
> data directly between master and slave VM (it avoids 2nd memcpy)
> - process_vm_* system calls are executed once per vector of packets
> - from security perspective, slave doesn’t have access to master memory
> - currently polling-only
> - reconnection should just work - slave runs reconnect process in case when 
> master disappears
> 
> TODO:
> - multi-queue
> - interrupt mode (likely simple byte read/write to file descriptor)
> - lightweight library to be used for non-VPP clients
> - L3 mode ???
> - perf tuning
> - user-mode memcpy - master maps slave buffer memory directly…
> - docs / specification
> 
> At this point I would really like to hear feedback from people,
> specially from the usability side.
> 
> config is basically:
> 
> create memif socket /path/to/unix_socket.file [master|slave]
> set int state memif0 up
> 
> DBGvpp# show interfaces
>  Name   Idx   State  Counter  
> Count
> local00down
> memif01 up
> DBGvpp# show interfaces address
> local0 (dn):
> memif0 (up):
>  172.16.0.2/24
> DBGvpp# ping 172.16.0.1
> 64 bytes from 172.16.0.1: icmp_seq=1 ttl=64 time=18.4961 ms
> 64 bytes from 172.16.0.1: icmp_seq=2 ttl=64 time=18.4282 ms
> 64 bytes from 172.16.0.1: icmp_seq=3 ttl=64 time=26.4333 ms
> 64 bytes from 172.16.0.1: icmp_seq=4 ttl=64 time=18.4255 ms
> 64 bytes from 172.16.0.1: icmp_seq=5 ttl=64 time=14.4133 ms
> 
> Statistics: 5 sent, 5 received, 0% packet loss
> DBGvpp# show interfaces
>  Name   Idx   State  Counter  
> Count
> local00down
> memif01 up   rx packets   
>   5
> rx bytes  
>490
> tx packets
>  5
> tx bytes  
>490
> drops 
>  5
> ip4   
>  5
> 
> 
> 
> 
> [1] https://gerrit.fd.io/r/#/c/5004/
> 
> 
> ___
> vpp-dev mailing list
> vpp-dev@lists.fd.io
> https://lists.fd.io/mailman/listinfo/vpp-dev

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] memif - packet memory interface

2017-02-16 Thread Dave Wallace

Excellent!

Thanks,
-daw-

On 02/16/2017 02:43 PM, Damjan Marion (damarion) wrote:

Looks like I was too optimistic when it comes to the syscalls I was planning to use.
I was not able to get more than 3 Mpps so I switched to standard shared memory.

After a bit of tuning, I’m getting following results:

broadwell 3.2GHz, TurboBoost disabled:

IXIA - XL710-40G - VPP1 - MEMIF - VPP2 - XL710-40G - IXIA

Both VPP instances are running single-core.
So it is a symmetrical setup where each VPP is forwarding between the physical NIC and 
MEMIF.

With 64B packets, I’m getting 13.6 Mpps aggregate throughput.
With 1500B packets, I’m getting around 29Gbps.

A good thing with this new setup is that both VPPs can be inside un-privileged 
containers.

New code is in gerrit...



On 14 Feb 2017, at 14:21, Damjan Marion (damarion)  wrote:


I got first pings running over new shared memory interface driver.
Code [1] is still very fragile, but basic packet forwarding works ...

This interface defines master/slave relationship.

Some characteristics:
- slave can run inside un-privileged containers
- master can run inside container, but it requires global PID namespace and 
PTRACE capability
- initial connection is done over the unix socket, so for container networking 
socket file needs to be mapped into container
- slave allocates shared memory for descriptor rings and passes FD to master
- slave is ring producer for both tx and rx, it fills rings with either full or 
empty buffers
- master is ring consumer, it reads descriptors and executes memcpy from/to 
buffer
- process_vm_readv, process_vm_writev linux system calls are used for copy of 
data directly between master and slave VM (it avoids 2nd memcpy)
- process_vm_* system calls are executed once per vector of packets
- from security perspective, slave doesn’t have access to master memory
- currently polling-only
- reconnection should just work - slave runs reconnect process in case when 
master disappears

TODO:
- multi-queue
- interrupt mode (likely simple byte read/write to file descriptor)
- lightweight library to be used for non-VPP clients
- L3 mode ???
- perf tuning
- user-mode memcpy - master maps slave buffer memory directly…
- docs / specification

At this point I would really like to hear feedback from people,
specially from the usability side.

config is basically:

create memif socket /path/to/unix_socket.file [master|slave]
set int state memif0 up

DBGvpp# show interfaces
  Name   Idx   State  Counter  Count
local00down
memif01 up
DBGvpp# show interfaces address
local0 (dn):
memif0 (up):
  172.16.0.2/24
DBGvpp# ping 172.16.0.1
64 bytes from 172.16.0.1: icmp_seq=1 ttl=64 time=18.4961 ms
64 bytes from 172.16.0.1: icmp_seq=2 ttl=64 time=18.4282 ms
64 bytes from 172.16.0.1: icmp_seq=3 ttl=64 time=26.4333 ms
64 bytes from 172.16.0.1: icmp_seq=4 ttl=64 time=18.4255 ms
64 bytes from 172.16.0.1: icmp_seq=5 ttl=64 time=14.4133 ms

Statistics: 5 sent, 5 received, 0% packet loss
DBGvpp# show interfaces
  Name   Idx   State  Counter  Count
local00down
memif01 up   rx packets 
5
 rx bytes   
  490
 tx packets 
5
 tx bytes   
  490
 drops  
5
 ip4
5




[1] https://gerrit.fd.io/r/#/c/5004/


___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev


___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] memif - packet memory interface

2017-02-16 Thread Zhou, Danny
Very Interesting...

Damjan,

Do you think it makes sense to use virtio_user/vhost_user pairs to connect 
two VPP instances running
inside two containers?

Essentially, memif and virtio_user/vhost_user pairs both leverage shared 
memory for fast inter-process
communication, with similar performance and the same isolation/security concerns, 
but the latter obviously 
is an established standard.

-Danny

-Original Message-
From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Damjan Marion (damarion)
Sent: Friday, February 17, 2017 3:44 AM
To: vpp-dev 
Subject: Re: [vpp-dev] memif - packet memory interface


Looks like I was too optimistic when it comes to the syscalls I was planning to use.
I was not able to get more than 3 Mpps so I switched to standard shared memory.

After a bit of tuning, I’m getting following results:

broadwell 3.2GHz, TurboBoost disabled:

IXIA - XL710-40G - VPP1 - MEMIF - VPP2 - XL710-40G - IXIA

Both VPP instances are running single-core.
So it is a symmetrical setup where each VPP is forwarding between the physical NIC and 
MEMIF.

With 64B packets, I’m getting 13.6 Mpps aggregate throughput.
With 1500B packets, I’m getting around 29Gbps.

A good thing with this new setup is that both VPPs can be inside un-privileged 
containers.

New code is in gerrit...


> On 14 Feb 2017, at 14:21, Damjan Marion (damarion)  wrote:
> 
> 
> I got first pings running over new shared memory interface driver.
> Code [1] is still very fragile, but basic packet forwarding works ...
> 
> This interface defines master/slave relationship.
> 
> Some characteristics:
> - slave can run inside un-privileged containers
> - master can run inside container, but it requires global PID 
> namespace and PTRACE capability
> - initial connection is done over the unix socket, so for container 
> networking socket file needs to be mapped into container
> - slave allocates shared memory for descriptor rings and passes FD to 
> master
> - slave is ring producer for both tx and rx, it fills rings with 
> either full or empty buffers
> - master is ring consumer, it reads descriptors and executes memcpy 
> from/to buffer
> - process_vm_readv, process_vm_writev linux system calls are used for 
> copy of data directly between master and slave VM (it avoids 2nd 
> memcpy)
> - process_vm_* system calls are executed once per vector of packets
> - from security perspective, slave doesn’t have access to master 
> memory
> - currently polling-only
> - reconnection should just work - slave runs reconnect process in case 
> when master disappears
> 
> TODO:
> - multi-queue
> - interrupt mode (likely simple byte read/write to file descriptor)
> - lightweight library to be used for non-VPP clients
> - L3 mode ???
> - perf tuning
> - user-mode memcpy - master maps slave buffer memory directly…
> - docs / specification
> 
> At this point I would really like to hear feedback from people, 
> specially from the usability side.
> 
> config is basically:
> 
> create memif socket /path/to/unix_socket.file [master|slave] set int 
> state memif0 up
> 
> DBGvpp# show interfaces
>  Name   Idx   State  Counter  
> Count
> local00down
> memif01 up
> DBGvpp# show interfaces address
> local0 (dn):
> memif0 (up):
>  172.16.0.2/24
> DBGvpp# ping 172.16.0.1
> 64 bytes from 172.16.0.1: icmp_seq=1 ttl=64 time=18.4961 ms
> 64 bytes from 172.16.0.1: icmp_seq=2 ttl=64 time=18.4282 ms
> 64 bytes from 172.16.0.1: icmp_seq=3 ttl=64 time=26.4333 ms
> 64 bytes from 172.16.0.1: icmp_seq=4 ttl=64 time=18.4255 ms
> 64 bytes from 172.16.0.1: icmp_seq=5 ttl=64 time=14.4133 ms
> 
> Statistics: 5 sent, 5 received, 0% packet loss DBGvpp# show interfaces
>  Name   Idx   State  Counter  
> Count
> local00down
> memif01 up   rx packets   
>   5
> rx bytes  
>490
> tx packets
>  5
> tx bytes  
>490
> drops 
>  5
> ip4   
>  5
> 
> 
> 
> 
> [1] https://gerrit.fd.io/r/#/c/5004/
> 
> 
> ___
> vpp-dev mailing list
> vpp-dev@lists.fd.io
> https://lists.fd.io/mailman/listinfo/vpp-dev

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] problem in classify command in vpp

2017-02-16 Thread Ni, Hongjun
Hey,

I met the same issue when I using latest VPP code.
In VPP 17.01, CLI command “classify table mask l3 ip4 proto” works well.

Below is the error and bt log:
Could someone take a look at it?  Thanks a lot.

DBGvpp# classify table match 1 mask l3 ip4 proto

Thread 1 "vpp_main" received signal SIGSEGV, Segmentation fault.
0x768e67b4 in vnet_classify_new_table (cm=0x77608320 
, mask=0x1 , nbuckets=2, memory_size=2097152, skip_n_vectors=1, 
match_n_vectors=1)
at /root/vpp/build-data/../src/vnet/classify/vnet_classify.c:118


(gdb) bt
#0  0x768e67b4 in vnet_classify_new_table (cm=0x77608320 
, mask=0x1 , nbuckets=2, memory_size=2097152, skip_n_vectors=1, 
match_n_vectors=1)
at /root/vpp/build-data/../src/vnet/classify/vnet_classify.c:118
#1  0x768e98ad in vnet_classify_add_del_table (cm=0x77608320 
, mask=0x1 , nbuckets=2, memory_size=2097152, skip=1, 
match=1, next_table_index=4294967295,
miss_next_index=4294967295, table_index=0x7fffb6089b70, current_data_flag=0 
'\000', current_data_offset=0, is_add\
=1, del_chain=0) at 
/root/vpp/build-data/../src/vnet/classify/vnet_classify.c:675
#2  0x768ec6cd in classify_table_command_fn (vm=0x77999340 
, input=0x7fffb6089ef0, cmd=\
0x7fffb6060e30) at 
/root/vpp/build-data/../src/vnet/classify/vnet_classify.c:1461
#3  0x776ccb08 in vlib_cli_dispatch_sub_commands (vm=0x77999340 
, cm=0x779995a8 , input=0x7fffb6089ef0, parent_command_index=393) at 
/root/vpp/build-data/../src/vlib/cli.c:484
#4  0x776cca16 in vlib_cli_dispatch_sub_commands (vm=0x77999340 
, cm=0x779995a8 , input=0x7fffb6089ef0, parent_command_index=0) at 
/root/vpp/build-data/../src/vlib/cli.c:462
#5  0x776ccded in vlib_cli_input (vm=0x77999340 , 
input=0x7fffb6089ef0, function=0x7fff\
f7755b97 , function_arg=0) at 
/root/vpp/build-data/../src/vlib/cli.c:558
#6  0x7775a3b7 in unix_cli_process_input (cm=0x779991a0 
, cli_file_index=0) at /root/vpp/b\
uild-data/../src/vlib/unix/cli.c:2033
#7  0x7775ae1e in unix_cli_process (vm=0x77999340 
, rt=0x7fffb6079000, f=0x0) at /root/\
vpp/build-data/../src/vlib/unix/cli.c:2130
#8  0x776f3aa3 in vlib_process_bootstrap (_a=140736230333104) at 
/root/vpp/build-data/../src/vlib/main.c:1218
#9  0x75fd2668 in clib_calljmp () at 
/root/vpp/build-data/../src/vppinfra/longjmp.S:110
#10 0x7fffb5041a80 in ?? ()
#11 0x776f3bce in vlib_process_startup (vm=0x7fffe260, p=0x406220 
<_start>, f=0x77999340 ) at /root/vpp/build-data/../src/vlib/main.c:1240
Backtrace stopped: previous frame inner to this frame (corrupt stack?)

Thanks,
Hongjun

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Siamak Abdollahzade via vpp-dev
Sent: Wednesday, February 15, 2017 2:57 PM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] problem in classify command in vpp

Hi all.

I am new to VPP and I'm trying to use the classify command to filter specific 
traffic. I've followed the earlier thread "[vpp-dev] Question: Classification with hex 
mask". At first, I 
entered this command:

vpp# classify table mask l3 ip4 src dst proto

the result was:
classify table: match count required

I've done some research around this error, but I couldn't find anything 
helpful. So, I was wondering if there is a manual for this command, or can you 
tell me what the problem is? I also tried to enter an integer number as match 
and I got this error:
exec error: Misc

thanks.




___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev