An update on the previous meeting regarding the 2-vCPU VM setup and 
configuration.
I have run the Phy-VPP-VM-VPP-Phy test with testpmd running in the VM, using 
a 2-vCPU configuration.
With the scheduler optimization, the 2-vCPU configuration achieved throughput 
comparable to 3 vCPUs, with a delta of about -5%.

The testpmd application ran on 2 cores: 1 core dedicated to the PMD and 1 core 
shared between the testpmd application and the Linux OS processes.

The VM boot parameters I used were:
… nomodeset hugepagesz=2M hugepages=1024 isolcpus=1 processor.max_cstate=0

The testpmd command I used was:
./testpmd -c 0x3 -n 4 -- --burst=64 -i --txd=2048 --rxd=2048 --txqflags=0xf00 
--disable-hw-vlan

The scheduler optimization I applied was setting the QEMU and VPP threads to 
the real-time round-robin scheduling policy (SCHED_RR) with chrt:
# chrt -r -p 1 <worker_pid for VPP>
# chrt -r -p 1 <VM qemu PMD pid>
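
For reference, a minimal sketch of how the thread IDs used above might be 
located (the thread name vpp_wk and the process name qemu-system-x86_64 are 
assumptions for a typical setup; adjust to match your environment):

# ps -eLo tid,comm | grep vpp_wk
  (lists the VPP worker thread IDs)
# ps -T -o tid,comm -p $(pidof qemu-system-x86_64)
  (lists the QEMU vCPU/PMD thread IDs for the VM)
# chrt -r -p 1 <tid>
  (applies SCHED_RR with priority 1 to each thread ID found above)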


From: Alec Hothan (ahothan) [mailto:ahot...@cisco.com]
Sent: Monday, December 12, 2016 12:33 PM
To: Damon Wang <damon.dev...@gmail.com>
Cc: Thomas F Herbert <therb...@redhat.com>; Maciek Konstantynowicz (mkonstan) 
<mkons...@cisco.com>; Andrew Theurer <atheu...@redhat.com>; Douglas Shakshober 
<dsh...@redhat.com>; csit-...@lists.fd.io; vpp-dev <vpp-dev@lists.fd.io>; 
Rashid Khan <rk...@redhat.com>; Liew, Irene <irene.l...@intel.com>; Karl Rister 
<kris...@redhat.com>
Subject: Re: [vpp-dev] [csit-dev] vHost user test scenarios for CSIT

Damon,

This is indeed an interesting use case, along with the container option.
We use VPP as an L2 xconnect in a VM as our loopback VM (instead of testpmd), 
and it performs as well as or better than testpmd. It would also be a good L3 
loopback option.
I’m not sure CSIT would want to handle this as it requires quite a bit of VM 
and system tuning. That is perhaps more appropriate for solution integration 
testing.

Regards,

  Alec



From: Damon Wang <damon.dev...@gmail.com>
Date: Monday, November 21, 2016 at 1:45 AM
To: "Alec Hothan (ahothan)" <ahot...@cisco.com>
Cc: Thomas F Herbert <therb...@redhat.com>, "Maciek Konstantynowicz (mkonstan)" 
<mkons...@cisco.com>, Andrew Theurer <atheu...@redhat.com>, Douglas Shakshober 
<dsh...@redhat.com>, "csit-...@lists.fd.io" <csit-...@lists.fd.io>, vpp-dev 
<vpp-dev@lists.fd.io>, Rashid Khan <rk...@redhat.com>, "Liew, Irene" 
<irene.l...@intel.com>, Karl Rister <kris...@redhat.com>
Subject: Re: [vpp-dev] [csit-dev] vHost user test scenarios for CSIT

About

VPP in guest with layer 2 and layer 3 vRouted traffic.

Does this mean VPP runs in the guest VM? There is a lot of need for running VPP 
in a VM as a VNF; the point is to test VPP routing performance in a VM with 
vhost-user.

+-------------------------------------+
|                                     |
|                                     |
|        +--------------------+       |
|        |   VPP in Guest VM  |       |
|        |                    |       |
|        |     Routing from   |       |
|        |     eth0 to eth1   |       |
|        |                    |       |
|        ++-------+--+-------++       |
|         |       |  |       |        |
|         | eth0  |  |  eth1 |        |
|         |       |  |       |        |
|         +---+---+  +---+---+        |
|             |          |            |
|             |          |            |
|             |          |            |
|             |          |            |
|     +-------+----------+-------+    |
|     |                          |    |
|     |                          |    |
|     |   vSwitch, eg. OVS DPDK  |    |
|     |                          |    |
|     |                          |    |
+--+--+-------+---------+--------+-+--+
   |          |         |          |
   |  eth0    |         |   eth1   |
   |          |         |          |
   +----+-----+         +-----+----+
        |                     |
        |                     |
        |                     |
        |                     |
   +----+-----+         +-----+----+
   |          |         |          |
   |  eth0    |         |   eth1   |
   |          |         |          |
+--+----------+---------+----------+---+
|                                      |
|          Traffic Generator           |
|                                      |
|                                      |
+--------------------------------------+
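
For illustration, a minimal VPP configuration sketch for the guest in this 
topology (the interface names and addresses below are assumptions; the actual 
names depend on how the virtio devices show up inside the VM):

set interface state GigabitEthernet0/4/0 up
set interface state GigabitEthernet0/5/0 up
set interface ip address GigabitEthernet0/4/0 10.10.1.1/24
set interface ip address GigabitEthernet0/5/0 10.10.2.1/24
ip route add 192.168.100.0/24 via 10.10.2.2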


2016-11-17 9:06 GMT+08:00 Alec Hothan (ahothan) <ahot...@cisco.com>:
Few comments inline…


On 11/16/16, 8:18 AM, "vpp-dev-boun...@lists.fd.io on behalf of Thomas F 
Herbert" <vpp-dev-boun...@lists.fd.io on behalf of therb...@redhat.com> wrote:

    +Irene Liew from Intel

    On 11/15/2016 02:06 PM, Maciek Konstantynowicz (mkonstan) wrote:

    On 11 Nov 2016, at 13:58, Thomas F Herbert <therb...@redhat.com> wrote:


    On 11/09/2016 07:39 AM, Maciek Konstantynowicz (mkonstan) wrote:


    Some inputs from my side, prefixed with MK.


    On 8 Nov 2016, at 21:25, Thomas F Herbert <therb...@redhat.com> wrote:

    All:

    Soliciting opinions from people as to vhost-user testing scenarios and 
guest modes in fd.io CSIT testing of VPP - vhost-user.
    I will forward to this mailing list as well as summarize any additional 
feedback.

    I asked some people that happen to be here at OVSCON as well as some Red 
Hat and Intel people. I am also including some people that are involved in 
upstream vhost-user work in DPDK.
    So far, I have the following feedback with an attempt to condense feedback 
and to keep the list small. If I left out anything, let me know.

    In addition to the PVP tests done now with small packets.

We should standardize on a basic, limited set of sizes: 64 bytes, IMIX, 1518 
bytes (this can be extended if needed to the frame-size list defined in RFC 2544)


    Testpmd in guest is OK for now.


I’d like to suggest defining/documenting the testpmd config used for testing: 
testpmd options and config, and VM sizing (vCPU, RAM).
Having a testpmd image capable of auto-configuring itself on the virtual 
interfaces at init time would also be good to have.



    MK: vhost should also be tested with IRQ drivers, not only PMD, e.g. a 
Linux guest with kernel IP routing. It’s done today in CSIT functional tests in 
VIRL (no testpmd there).



    Yes, as long as testpmd in the guest is in the suite to maximize perf testing.


    Agree. testpmd is already used in CSIT perf tests with vhost.



    1 Add multiple VMs (How many?)





    MK: For performance tests, we should aim for a box-full, so for 1-vCPU VMs 
fill up all cores :)


This will depend on the testpmd settings (mostly the number of vCPUs).
I’d suggest a minimum of 10 chains (10 x PVP) and 2 networks per chain.



    2 Both multi-queue and single-queue




    MK: vhost single-queue for sure. vhost multi-queue seems to matter only to 
huge VMs that generate lots of traffic and come close to overloading the worker 
thread dealing with them.

+1 for both single and multi-queue
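
For context, a sketch of how multi-queue is typically enabled on the 
vhost-user side in QEMU (the id names, socket path and queue count below are 
assumptions; vectors is normally 2*queues+2):

-chardev socket,id=char0,path=/tmp/vhost-sock0
-netdev type=vhost-user,id=net0,chardev=char0,queues=2
-device virtio-net-pci,netdev=net0,mq=on,vectors=6

In the guest, the extra queues then have to be enabled as well, e.g. with 
ethtool -L eth0 combined 2 for the kernel virtio driver, or --rxq=2 --txq=2 
for testpmd.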


    3 Tests that cause the equivalent of multiple flows in OVS, with a varied 
mix of traffic including layer 2 and layer 3 traffic.




    MK: Yes. Many flows is a must.

    4 Multiple IF's (Guest or Host or Both?)




    MK: What do you mean by multiple IFs (interfaces)? With multiple VMs we 
surely have multiple vhost interfaces, minimum 2 vhost interfaces per VM. What 
matters IMV is the ratio and speed between: i) physical interfaces, 10GE, 40GE; 
and ii) vhost interfaces with slow or fast VMs. I suggest we work out a few 
scenarios covering both i) and ii), and the number of VMs, based on the use 
cases folks have.


Most deployments will have a limited number of physical interfaces per compute 
node. One interface or 2 bonded interfaces per compute node.
The number of vhost interfaces is going to be an order of magnitude larger. 
With the example of 10 VMs and 2 networks per VM, that’s 20 vhost interfaces 
for 1 phys interface.
Of course there might be special configs with very different requirements 
(large oversubscription of VMs, or larger number of phys interfaces) but I 
think the 10 x PVP with 20 vhost interfaces and 1 phys interface use case looks 
like a good starting point.


    I am copying this to Franck. I am not sure whether he was asking for 
multiple PHY PMDs or more than 2 IFs per guest. I think that multiple guests 
with 2 IFs each should be a pretty good test to start with.


    OK. Any more feedback here from anybody?



    The following might not be doable by 17.01; if not, consider the following 
as a wish list for the future:

    1 VXLAN tunneled traffic




    MK: Do you mean VXLAN on the wire, VPP (running in host) does VXLAN tunnel 
termination (VTEP) into L2BD, and then L2 switching into VMs via vhost? If so, 
that’s the most common requirement I hear from folks e.g. OPNFV/FDS.



    I am not sure whether Franck was suggesting VTEP, whether he wanted encap 
and decap of L3 VXLAN, or whether he was asking for forwarding rules in the 
guest rather than just layer 2 MAC forwarding.


We need to cover the OpenStack VXLAN overlay case: VTEP in the vswitch, 
everything below the vswitch is VXLAN traffic, and everything above the VTEP is 
straight L2 forwarding to the vhost interfaces.
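
For illustration, a rough sketch of that setup on the VPP host side (the 
addresses, VNI, bridge-domain id and socket path are assumptions, and the 
exact CLI varies between VPP releases):

create vxlan tunnel src 10.0.0.1 dst 10.0.0.2 vni 100
create vhost-user socket /tmp/sock0.sock server
set interface l2 bridge vxlan_tunnel0 10
set interface l2 bridge VirtualEthernet0/0/0 10
set interface state VirtualEthernet0/0/0 up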



    OK. Any more feedback here from anybody?


    2 VPP in guest with layer 2 and layer 3 vRouted traffic.


    MK: What do you mean here? VPP in guest with dpdk-virtio (instead of 
testpmd), and VPP in host with vhost?

    Yes, VPP in host. I think some folks are looking for a test that 
approximates a routing VNF, but I am forwarding this for Franck's comment.


    OK. Any more feedback here from anybody?

    3 Additional Overlay/Underlay: MPLS

    MK: MPLSoEthernet?, MPLSoGRE? VPNv4, VPNv6? Else?
    MK: L2oLISP, IPv4oLISP, IPv6oLISP.



    MPLSoEthernet


    But what VPP configuration - just MPLS label switching (LSR), or VPN edge 
(LER, aka PE)?



    I don't have the answer. Maybe Franck or Anita may want to comment.

    In general, the context for my comment is perf testing of VPP vs DPDK/OVS 
and other vSwitches/data planes. Current testing is optimized for multiple 
layer 2 flows. If we are passing and forwarding tunneled or encapped traffic in 
the VM, even if we don't terminate a VTEP, we are closer to real-world VNF use 
cases and may provide a better basis for perf comparisons for Telcos and 
similar users.



On the OpenStack front, we need to stay focused first on L2 switching 
performance in the vswitch between physical interfaces, potentially virtual 
interfaces such as VXLAN tunnels, and vhost interfaces.

Thanks

   Alec



_______________________________________________
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev
