Hi,

Suggest we discuss it on the next CSIT project call (today?) if we get all the 
interested parties, or arrange a separate TWS call.
General note:
Within CSIT we’re focusing on verifying FD.io VPP capabilities and properties, 
including integration with the operating environment.
More complete solution testing, including interop with VNFs, is out of scope; 
per Alec’s note we should consider driving this into the project where it 
belongs, e.g. OPNFV.

A few more comments from my side inline [mk] ..

On 28 Feb 2017, at 21:04, Alec Hothan (ahothan) <ahot...@cisco.com> wrote:


Comments inline…


From: "Liew, Irene" <irene.l...@intel.com>
Date: Tuesday, February 28, 2017 at 10:25 AM
To: Thomas F Herbert <therb...@redhat.com>, "csit-...@lists.fd.io" 
<csit-...@lists.fd.io>, "Maciek Konstantynowicz (mkonstan)" 
<mkons...@cisco.com>
Cc: vpp-dev <vpp-dev@lists.fd.io>, "Pierre Pfister (ppfister)" 
<ppfis...@cisco.com>, "Alec Hothan (ahothan)" <ahot...@cisco.com>, Karl Rister 
<kris...@redhat.com>, Douglas Shakshober <dsh...@redhat.com>, Andrew Theurer 
<atheu...@redhat.com>, "Liew, Irene" <irene.l...@intel.com>
Subject: RE: fd.io CSIT vhost-user test scenario implementation priorities

Here are my thoughts and comments on the topologies/tests and workloads for CSIT 
vhost-user test scenarios. Please refer to my comments inline below.

-----Original Message-----
From: Thomas F Herbert <therb...@redhat.com>
Sent: Monday, February 27, 2017 10:04 AM
To: csit-...@lists.fd.io; Maciek Konstantynowicz (mkonstan) 
<mkons...@cisco.com>
Cc: vpp-dev <vpp-dev@lists.fd.io>; Liew, Irene <irene.l...@intel.com>; Pierre 
Pfister (ppfister) <ppfis...@cisco.com>; Alec Hothan (ahothan) 
<ahot...@cisco.com>; Karl Rister <kris...@redhat.com>; Douglas Shakshober 
<dsh...@redhat.com>; Andrew Theurer <atheu...@redhat.com>
Subject: fd.io CSIT vhost-user test scenario implementation priorities

Please weigh in:

We are starting to plan fd.io CSIT vhost-user test scenario priorities for 
implementation in the 17.04 and 17.07 CSIT releases.

Vhost-user performance is critical for VNF acceptance in potential use cases 
for VPP/fd.io adoption.

We had a previous email thread here:
https://lists.fd.io/pipermail/csit-dev/2016-November/001192.html along with 
TWS (https://wiki.fd.io/view/TWS) meetings on 12/02/16 and 12/07/16, 
summarized in this wiki:
https://wiki.fd.io/view/CSIT/vhostuser_test_scenarios

Topologies and tests

Current in 17.01:

10ge2p1x520-dot1q-l2bdbasemaclrn-eth-2vhost-1vm
10ge2p1x520-dot1q-l2xcbase-eth-2vhost-1vm
10ge2p1x520-ethip4-ip4base-eth-2vhost-1vm
10ge2p1x520-ethip4vxlan-l2bdbasemaclrn-eth-2vhost-1vm
10ge2p1x520-eth-l2bdbasemaclrn-eth-2vhost-1vm
10ge2p1x520-eth-l2xcbase-eth-2vhost-1vm
10ge2p1x710-eth-l2bdbasemaclrn-eth-2vhost-1vm
40ge2p1xl710-eth-l2bdbasemaclrn-eth-2vhost-1vm
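As an aside, the test names above follow a packed naming convention: link speed, 
port and NIC counts, NIC model, encapsulation, forwarding mode, and vhost 
interface and VM counts. A small sketch of my reading of that convention (an 
illustrative helper, not an official CSIT parser):

```python
import re

def parse_csit_name(name):
    """Decode a CSIT test name such as
    10ge2p1x520-dot1q-l2bdbasemaclrn-eth-2vhost-1vm
    into its components (illustrative reading of the convention)."""
    parts = name.split("-")
    # Leading token: <speed>ge<ports>p<nics><nic_model>, e.g. 10ge2p1x520
    m = re.match(r"(\d+ge)(\d+)p(\d+)(.+)", parts[0])
    info = {
        "link": m.group(1),        # link speed, e.g. 10ge or 40ge
        "ports": int(m.group(2)),  # physical ports used
        "nics": int(m.group(3)),   # NICs used
        "nic_model": m.group(4),   # e.g. x520, x710, xl710
        "encap": parts[1],         # e.g. dot1q, eth, ethip4vxlan
        "forwarding": parts[2],    # e.g. l2xcbase, l2bdbasemaclrn, ip4base
    }
    # Trailing tokens carry the vhost interface and VM counts.
    for p in parts[3:]:
        if p.endswith("vhost"):
            info["vhost_ifs"] = int(p[:-len("vhost")])
        elif p.endswith("vm"):
            info["vms"] = int(p[:-len("vm")])
    return info
```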

single and multi-queue

[mk] Agree, multi-queue for multi-thread setups.
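For reference, a multi-queue vhost-user setup needs matching knobs on the 
vswitch, hypervisor, and guest sides. A sketch (socket path, queue count, and 
interface name below are illustrative, not from CSIT configs):

```
# VPP side (vppctl): create the vhost-user interface; the queue count
# is negotiated with the guest driver.
create vhost-user socket /tmp/sock0 server

# QEMU side: 4 queues; vectors = 2*queues + 2 for virtio-net-pci.
-chardev socket,id=char0,path=/tmp/sock0
-netdev type=vhost-user,id=net0,chardev=char0,queues=4
-device virtio-net-pci,netdev=net0,mq=on,vectors=10

# Guest side: enable the extra queue pairs in the virtio-net driver.
ethtool -L eth0 combined 4
```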


testing of pmd baseline

[mk] That’s already in CSIT for the x520 NIC (reaching the NIC pps limit); 
adding the xl710 is hitting hiccups.


Proposed in links above
     1p1nic-dot1q-l2bdbase-eth-2vhost-1vm
     1p1nic-ethip4vxlan-l2bdbase-eth-2vhost-1vm
     1p1nic-dot1q-l2bdbase-eth-4vhost-2vm-chain
     1p1nic-ethip4vxlan-l2bdbase-eth-4vhost-2vm-chain
     1p1nic-dot1q-l2bdbase-eth-2vhost-1vm-chain-2nodes
     1p1nic-ethip4vxlan-l2bdbase-eth-2vhost-1vm-2nodes


[mk] In the current LF setup, unless we use a two-node topology, we can’t do 
1p1nic perf tests.

[Irene] For the baseline testing on vhost-user, I would recommend running core 
scaling from 1 core to the max cores for 1 VM Phy-VM-Phy and 2 VMs PVVP. I know 
the current VPP v17.01 does not support manually assigning vhost-user port RXQs 
to specific cores to ensure load balancing across the cores. And from our 
experience in the lab, when I ran 3 cores of worker threads in a 4vhost-2vm 
PVVP configuration, I observed the ports were unevenly distributed across the 3 
worker threads and VPP vNet suffered in performance scalability. If the manual 
RXQ assignment feature for vhost-user ports is made available in the next 17.04 
or 17.07 release, I strongly propose including core scaling of worker threads 
in order to evaluate the vhost-user RXQ core-assignment feature. For example, 
we can pick 1 test case of 2vhost-1vm and run it with configurations of 1, 2 
and 4 cores of worker threads. We then pick 1 test case of 4vhost-2vm-chain and 
run it with configurations of 1, 2, 3, 4 and 6 cores of worker threads.

To limit the number of tests I suggest we use 1, 2 and 4 physical cores. I 
don’t think there will be many deployments with 6 or more physical cores for 
the vswitch (but I’m only talking about OpenStack-NFV deployments).
One interesting variation is to test with and without hyper-threading: no 
hyper-threading = 1, 2, 4 VPP worker threads (mapped on as many full physical 
cores); with hyper-threading = 2, 4, 8 worker threads (using sibling native 
threads), and check whether we find the same kind of linearity as Karl Rister 
did (see the other email thread if you missed it).

[mk] Agree re simplifying multi-core setup to 1,2,4 physical cores without and 
with SMT/HT.
[mk] The concern will be the number of combinations...
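For the 1/2/4-core variants above, the worker-thread count is driven from VPP’s 
startup.conf. A sketch, with illustrative core IDs assuming workers on full 
physical cores (no SMT):

```
# startup.conf sketch: 2 physical worker cores, main thread pinned separately.
cpu {
  main-core 0
  corelist-workers 2,4      # with SMT, add the siblings, e.g. 2,4,26,28
}
dpdk {
  dev default {
    num-rx-queues 2         # one RX queue per worker for even load
  }
}
```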

Proposed topologies for OpenStack from links above:

2p1nic-dot1q-l2bdscale-<n>flows-eth-<m>vhost-<o>vm-chain
2p1nic-ethip4vxlan-l2bdscale-<n>flows-eth-<m>vhost-<o>vm-chain

New scenarios proposed:

Primary overlay: VXLAN and VTEP

     2p1nic-ethip4vxlan-l2bdbase-eth-2vhost-1vm
     2p1nic-ethip4vxlan-l2bdbase-eth-20vhost-10vm

[Irene] There is a trend in the industry toward using IPv6 over VXLAN. Shall we 
include an IPv6 VXLAN scenario too?

Do you mean IPv6 inside VxLAN tunnels or VxLAN tunnels using IPv6 UDP addresses?
The term ethip4vxlan means VxLAN tunnels using IPv4 UDP addresses.
I’m not sure many people are using IPv6 for the VxLAN overlay itself.
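For context, the VPP CLI uses the same tunnel command for both underlay address 
families, so an IPv6-underlay variant would only change the src/dst addresses. 
A sketch (addresses and VNI are illustrative):

```
# IPv4 underlay (the ethip4vxlan case above)
create vxlan tunnel src 172.16.0.1 dst 172.16.0.2 vni 24

# IPv6 underlay (a hypothetical ethip6vxlan case)
create vxlan tunnel src 2001:db8::1 dst 2001:db8::2 vni 24
```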



MPLS over Ethernet

Scaling Multiple VMs

     2p1nic-dot1q-l2bdbase-eth-20vhost-10vm

[mk] We would need to make sure that VM+testpmd are not becoming the DUT in 
this case :)
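One way to keep the guest side out of the bottleneck is to give testpmd in each 
VM enough forwarding cores and queues to match the vswitch configuration. A 
sketch (core and queue counts are illustrative):

```
# Inside the VM: 2 forwarding cores, 2 RX/TX queues, io forward mode.
testpmd -l 0-2 -n 4 -- \
  --nb-cores=2 --rxq=2 --txq=2 \
  --forward-mode=io --auto-start
```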



Workloads:

     VNF based on Linux relying on the kernel virtio driver: kernel Linux 
bridge, kernel L3/IPv4 routing

     IPv4/v6 VPP vRouter

[Irene] For the VNF workloads, we need to brainstorm and include real workload 
applications to test, to provide a better understanding of performance in real 
NFV/SDN deployments. Yes, the workloads listed above would give a good baseline 
number. I suggest we start to brainstorm and discuss other representative 
workloads for Telco/datacenter deployments which we can later incorporate into 
CSIT.
For example, some workloads that would be good candidates are IPsec, firewall, 
webserver SSL, etc.


Did you check with the NSB work by Intel NPG/DCG (what is being committed to 
OPNFV/Yardstick)? This looks a lot like what they want to do.

[mk] Ditto. Per my opening comment, I agree that the solution testing would 
belong in OPNFV.


Thanks

   Alec


Did I leave out anything?

[mk] Nope, thanks for driving it Thomas !

-Maciek

...
--
*Thomas F Herbert*
SDN Group
Office of Technology
*Red Hat*


_______________________________________________
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev
