Some inputs from my side, inline below and marked with MK.

On 8 Nov 2016, at 21:25, Thomas F Herbert <therb...@redhat.com> wrote:


All:

Soliciting opinions from people as to vhost-user testing scenarios and guest 
modes in fd.io CSIT testing of VPP vhost-user.

I will forward to this mailing list as well as summarize any additional 
feedback.

I asked some people who happen to be here at OVSCON as well as some Red Hat 
and Intel people. I am also including some people involved in upstream 
vhost-user work in DPDK.

So far, I have the following feedback with an attempt to condense feedback and 
to keep the list small. If I left out anything, let me know.

In addition to the PVP tests currently done with small packets:

Testpmd in guest is OK for now.

MK: vhost should also be tested with IRQ drivers, not only PMDs, e.g. a Linux 
guest with kernel IP routing. This is done today in CSIT functional tests in 
VIRL (no testpmd there).
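The IRQ-driven guest case MK mentions could be set up roughly as below: the guest keeps the in-kernel virtio-net driver (no DPDK PMD) and routes between its two interfaces in the kernel. A minimal sketch; interface names and addresses are illustrative assumptions, not CSIT's actual config.

```shell
# Inside the Linux guest: in-kernel virtio-net (interrupt-driven) plus
# plain kernel IP routing between the two vhost-backed interfaces.
# eth0/eth1 and the 10.10.x.x subnets are assumptions for illustration.
sysctl -w net.ipv4.ip_forward=1
ip link set eth0 up
ip link set eth1 up
ip addr add 10.10.1.1/24 dev eth0
ip addr add 10.10.2.1/24 dev eth1
```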

1 Add multiple VMs (How many?)

MK: For performance tests, we should aim for a box-full, so for 1-vCPU VMs fill 
up all cores :)
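On the VPP side, which cores the worker threads occupy is set by the cpu stanza in startup.conf. A minimal sketch, assuming a hypothetical 8-core box with core 0 reserved for the VPP main thread:

```
cpu {
  main-core 0
  corelist-workers 1-7
}
```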

2 Both multi-queue and single-queue

MK: vhost single-queue for sure. vhost multi-queue seems to matter only for 
huge VMs that generate lots of traffic and come close to overloading the 
worker thread dealing with it.
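For the multi-queue case, the QEMU side of a vhost-user port is typically configured along these lines. A sketch only: the socket path and queue count are assumptions, the elided options depend on the VM, and `vectors` is conventionally 2*queues+2.

```shell
# Attach a 4-queue vhost-user port to the guest; other QEMU options elided.
# /tmp/vhost1.sock and queues=4 are illustrative assumptions.
qemu-system-x86_64 ... \
  -chardev socket,id=char0,path=/tmp/vhost1.sock \
  -netdev type=vhost-user,id=net0,chardev=char0,queues=4 \
  -device virtio-net-pci,netdev=net0,mq=on,vectors=10
```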

3 Tests that cause the equivalent of multiple flows in OVS. A variety of 
traffic, including layer 2 and layer 3.

MK: Yes. Many flows are a must.

4 Multiple IF's (Guest or Host or Both?)

MK: What do you mean by multiple IF's (interfaces)? With multiple VMs we surely 
have multiple vhost interfaces, minimum 2 vhost interfaces per VM. What matters 
IMV is the ratio and speed between: i) physical interfaces, 10GE, 40GE; and ii) 
vhost interfaces with slow or fast VMs. I suggest we work out a few scenarios 
covering both i) and ii), as well as the number of VMs, based on the use cases 
folks have.

The following might not be doable by 17.01; if not, consider it a wish list 
for the future:

1 VXLAN tunneled traffic

MK: Do you mean VXLAN on the wire, VPP (running in the host) doing VXLAN tunnel 
termination (VTEP) into L2BD, and then L2 switching into VMs via vhost? If so, 
that's the most common requirement I hear from folks, e.g. OPNFV/FDS.
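The scenario MK describes maps to VPP CLI roughly as below: terminate the VXLAN tunnel into a bridge domain, then put a vhost-user interface into the same bridge domain. A hedged sketch, not a tested config: addresses, VNI, socket path, and the resulting interface names are all illustrative assumptions.

```shell
# Host-side VPP via vppctl; all names, addresses and IDs are assumptions.
vppctl set interface ip address TenGigabitEthernet2/0/0 10.0.0.1/24
vppctl set interface state TenGigabitEthernet2/0/0 up
# VTEP: terminate the VXLAN tunnel and place it in bridge domain 13...
vppctl create vxlan tunnel src 10.0.0.1 dst 10.0.0.2 vni 13
vppctl set interface l2 bridge vxlan_tunnel0 13
# ...then L2-switch into the VM via a vhost-user interface in the same BD.
vppctl create vhost-user socket /tmp/vhost1.sock server
vppctl set interface state VirtualEthernet0/0/0 up
vppctl set interface l2 bridge VirtualEthernet0/0/0 13
```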

2 VPP in guest with layer 2 and layer 3 vRouted traffic.

MK: What do you mean here? VPP in the guest with dpdk-virtio (instead of 
testpmd), and VPP in the host with vhost?

3 Additional Overlay/Underlay: MPLS

MK: MPLSoEthernet?, MPLSoGRE? VPNv4, VPNv6? Else?
MK: L2oLISP, IPv4oLISP, IPv6oLISP.

-Maciek

--TFH
--
Thomas F Herbert
SDN Group
Office of Technology
Red Hat
_______________________________________________
csit-dev mailing list
csit-...@lists.fd.io
https://lists.fd.io/mailman/listinfo/csit-dev

_______________________________________________
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev
