A few comments inline…

On 11/16/16, 8:18 AM, "vpp-dev-boun...@lists.fd.io on behalf of Thomas F 
Herbert" <vpp-dev-boun...@lists.fd.io on behalf of therb...@redhat.com> wrote:

    +Irene Liew from Intel
    
    On 11/15/2016 02:06 PM, Maciek Konstantynowicz (mkonstan) wrote:
    
    On 11 Nov 2016, at 13:58, Thomas F Herbert <therb...@redhat.com> wrote:
    
    
    On 11/09/2016 07:39 AM, Maciek Konstantynowicz (mkonstan) wrote:
    
    
    Some inputs from my side, marked with MK.
    
    
    On 8 Nov 2016, at 21:25, Thomas F Herbert <therb...@redhat.com> wrote:
    
    All:
    
    Soliciting opinions from people on vhost-user testing scenarios and guest
    modes in fd.io CSIT testing of VPP vhost-user.
    I will forward to this mailing list as well as summarize any additional
    feedback.
    
    I asked some people who happen to be here at OVSCON, as well as some Red
    Hat and Intel people. I am also including some people involved in upstream
    vhost-user work in DPDK.
    So far, I have the following feedback, condensed to keep the list small. If
    I left out anything, let me know.
    
    In addition to the PVP tests done now with small packets.

We should standardize on a basic limited set of sizes: 64 bytes, IMIX, and 1518
bytes (this can be extended if needed to the frame-size list defined in RFC 2544).
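(For IMIX, assuming the commonly used simple IMIX mix of 7 x 64 B, 4 x 570 B and
1 x 1518 B frames, to be confirmed as the CSIT definition, the average frame size
works out to (7*64 + 4*570 + 1*1518) / 12 = 4246 / 12, i.e. roughly 354 bytes.)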

    
    Testpmd in guest is OK for now.
    
    
I'd like to suggest that we define and document the testpmd config used for
testing: testpmd options and config, and the VM sizing (vCPU, RAM).
Having a testpmd image capable of auto-configuring itself on the virtual
interfaces at init time would also be good to have.
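For reference, the kind of testpmd invocation I have in mind would look roughly
like this, e.g. for a 3-vCPU / 2 GB VM (core list, memory and queue counts are
placeholders to be agreed, and the option names should be checked against the
DPDK release used):

  testpmd -l 0-2 -n 4 --socket-mem 1024 -- \
    --burst=64 --rxq=1 --txq=1 --nb-cores=2 \
    --forward-mode=io --auto-start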

    
    
    MK: vhost should also be tested with IRQ drivers, not only PMD, e.g. a Linux
    guest with kernel IP routing. It's done today in CSIT functional tests in VIRL
    (no testpmd there).
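For the kernel IP routing guest, a minimal setup could be something along these
lines (interface names, addresses and the route are placeholders, just to sketch
the non-PMD, IRQ-driven guest config):

  # inside the guest, standard virtio-net kernel driver (IRQ mode)
  sysctl -w net.ipv4.ip_forward=1
  ip addr add 10.10.1.1/24 dev eth1
  ip addr add 10.10.2.1/24 dev eth2
  ip route add 20.0.0.0/8 via 10.10.2.254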
    
    
    
    Yes, as long as testpmd in the guest is in the suite to maximize perf test results.
    
    
    Agree. testpmd is already used in CSIT perf tests with vhost.
    
    
    
    1 Add multiple VMs (How many?)
    
    
    
    
    
    MK: For performance tests, we should aim for a box-full, so for 1-vCPU VMs,
    fill up all cores :)


This will depend on the testpmd settings (mostly the number of vCPUs).
I'd suggest a minimum of 10 chains (10 x PVP) and 2 networks per chain.
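Each VM in such a chain would be launched with two vhost-user interfaces and
hugepage-backed shared memory, along these general lines (socket paths, sizes and
IDs are placeholders; exact options depend on the QEMU version used):

  qemu-system-x86_64 -enable-kvm -cpu host -smp 3 -m 2048 \
    -object memory-backend-file,id=mem,size=2048M,mem-path=/dev/hugepages,share=on \
    -numa node,memdev=mem -mem-prealloc \
    -chardev socket,id=char1,path=/tmp/sock-vm1-1 \
    -netdev type=vhost-user,id=net1,chardev=char1 \
    -device virtio-net-pci,netdev=net1 \
    -chardev socket,id=char2,path=/tmp/sock-vm1-2 \
    -netdev type=vhost-user,id=net2,chardev=char2 \
    -device virtio-net-pci,netdev=net2 \
    -drive file=vm1.qcow2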


    
    2 Both multi-queue and single-queue
    
    
    
    
    MK: vhost single-queue for sure. vhost multi-queue seems to matter only for
    huge VMs that generate lots of traffic and come close to overloading the worker
    thread dealing with it.

+1 for both single and multi-queue
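To exercise multi-queue, the vhost-user interface would be created with multiple
queues on the QEMU side and a matching queue count in the guest, roughly as
follows (a queue count of 2 is just an example; option spellings should be
double-checked against the QEMU/DPDK versions in use):

  # QEMU side: multi-queue virtio device (vectors = 2*queues + 2)
  -netdev type=vhost-user,id=net1,chardev=char1,queues=2 \
  -device virtio-net-pci,netdev=net1,mq=on,vectors=6

  # guest with kernel driver: enable the extra queue pairs
  ethtool -L eth1 combined 2

  # guest with testpmd: match the queue count
  testpmd ... -- --rxq=2 --txq=2 --nb-cores=2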

    
    3 Tests that cause the equivalent of multiple flows in OVS, with a variety
    of traffic including Layer 2 and Layer 3 traffic.
    
    
    
    
    MK: Yes. Many flows is a must.
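For the L3 case, one way to get many flows is to have the traffic generator
sweep source/destination addresses over ranges that VPP covers with a small
number of routes, e.g. (addresses and interface names are placeholders):

  vppctl ip route add 10.0.0.0/8 via 172.16.1.2 TenGigabitEthernet2/0/0
  vppctl ip route add 20.0.0.0/8 via 172.16.2.2 VirtualEthernet0/0/0
  # the generator then varies src/dst addresses within those prefixes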
    
    4 Multiple IFs (Guest or Host or Both?)
    
    
    
    
    MK: What do you mean by multiple IFs (interfaces)? With multiple VMs we
    surely have multiple vhost interfaces, minimum 2 vhost interfaces per VM. What
    matters IMV is the ratio and speed between i) physical interfaces (10GE, 40GE)
    and ii) vhost interfaces with slow or fast VMs. I suggest we work out a few
    scenarios covering both i) and ii), and the number of VMs, based on the use
    cases folks have.
    

Most deployments will have a limited number of physical interfaces per compute
node: one interface, or 2 bonded interfaces. The number of vhost interfaces is
going to be an order of magnitude larger. With the example of 10 VMs and 2
networks per VM, that's 20 vhost interfaces for 1 physical interface.
Of course there might be special configs with very different requirements (large
oversubscription of VMs, or a larger number of physical interfaces), but I think
the 10 x PVP case with 20 vhost interfaces and 1 physical interface looks like a
good starting point.
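For one chain in that setup, the host-side VPP config would be roughly the
following: two vhost-user interfaces for the VM, each bridged with a VLAN
sub-interface of the single physical port (interface names, VLAN IDs, bridge
domain IDs and socket paths are placeholders, and the exact vhost CLI form varies
a bit between VPP releases):

  create vhost-user socket /tmp/sock-vm1-1 server
  create vhost-user socket /tmp/sock-vm1-2 server
  create sub-interfaces TenGigabitEthernet2/0/0 101
  create sub-interfaces TenGigabitEthernet2/0/0 102
  set interface l2 bridge TenGigabitEthernet2/0/0.101 11
  set interface l2 bridge VirtualEthernet0/0/0 11
  set interface l2 bridge TenGigabitEthernet2/0/0.102 12
  set interface l2 bridge VirtualEthernet0/0/1 12
  set interface state VirtualEthernet0/0/0 up
  set interface state VirtualEthernet0/0/1 up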
    

    I am copying this to Franck. I am not sure whether he was asking for
    multiple PHY PMDs or more than 2 IFs per guest. I think that multiple guests
    with 2 IFs each should be a pretty good test to start with.
    
    
    OK. Any more feedback here from anybody?
    
    
    
    The following might not be doable by 17.01; if not, consider it a wish list
    for the future:
    
    1 VXLAN tunneled traffic
    
    
    
    
    MK: Do you mean VXLAN on the wire, with VPP (running in the host) doing VXLAN
    tunnel termination (VTEP) into an L2BD, and then L2 switching into the VMs via
    vhost? If so, that's the most common requirement I hear from folks, e.g.
    OPNFV/FDS.
    
    
    
    I am not sure whether Franck was suggesting a VTEP, encap/decap of L3 VXLAN,
    or forwarding rules in the guest rather than just Layer 2 MAC forwarding.
    
    
We need to cover the OpenStack VXLAN overlay case: VTEP in the vswitch,
everything below the vswitch is VXLAN traffic, everything above the VTEP is
straight L2 forwarding to the vhost interfaces.
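As a rough sketch of that case on the VPP side (addresses, VNI and bridge-domain
ID are placeholders): the VXLAN tunnel is terminated into a bridge domain that
also contains the vhost interface:

  set interface ip address TenGigabitEthernet2/0/0 192.168.1.1/24
  create vxlan tunnel src 192.168.1.1 dst 192.168.1.2 vni 2400
  set interface l2 bridge vxlan_tunnel0 10
  set interface l2 bridge VirtualEthernet0/0/0 10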

    
    
    OK. Any more feedback here from anybody?
    
    
    2 VPP in guest with layer 2 and layer 3 vRouted traffic.
    

    MK: What do you mean here? VPP in the guest with dpdk-virtio (instead of
    testpmd), and VPP in the host with vhost?
    
    Yes, VPP in the host. I think some folks are looking for a test that
    approximates a routing VNF, but I am forwarding this for Franck's comment.
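If we go that route, the guest VPP would bind its virtio devices through DPDK and
do plain IP forwarding between them, e.g. something like this inside the guest
(PCI addresses, interface names and prefixes are placeholders; just a sketch of a
routing-VNF-style guest):

  # guest /etc/vpp/startup.conf (fragment)
  dpdk {
    dev 0000:00:04.0
    dev 0000:00:05.0
  }

  # guest VPP CLI
  set interface ip address GigabitEthernet0/4/0 10.10.1.1/24
  set interface ip address GigabitEthernet0/5/0 10.10.2.1/24
  set interface state GigabitEthernet0/4/0 up
  set interface state GigabitEthernet0/5/0 up
  ip route add 20.0.0.0/8 via 10.10.2.254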
    
    
    OK. Any more feedback here from anybody?
    
    3 Additional Overlay/Underlay: MPLS

    MK: MPLSoEthernet? MPLSoGRE? VPNv4, VPNv6? Something else?
    MK: L2oLISP, IPv4oLISP, IPv6oLISP.
    
    
    
    MPLSoEthernet
    
    
    But what VPP configuration: just MPLS label switching (LSR), or VPN edge
    (LER, aka PE)?
    
    
    
    I don't have the answer. Maybe Franck or Anita may want to comment.
    
    In general, the context for my comment is perf testing of VPP vs DPDK/OVS and
    other vSwitches/data planes. Current testing is optimized for multiple Layer 2
    flows. If we are passing and forwarding tunneled or encapped traffic in the VM,
    even if we don't terminate a VTEP, we are closer to real-world VNF use cases,
    and may provide a better basis for perf comparisons for Telcos and similar
    users.
    


On the OpenStack front, we need to stay focused first on L2 switching
performance in the vswitch between physical interfaces (and potentially virtual
interfaces such as VXLAN tunnels) and vhost interfaces.

Thanks

   Alec



_______________________________________________
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev
