Great summary slides, Karl. I have a few more questions on them.

·         Did you use OSP10/OSPD/ML2 to deploy your testpmd VM and configure the 
vswitch, or was it a direct launch using libvirt with direct configuration of 
the vswitches? (This is somewhat related to Maciek’s question about the exact 
interface configs in the vswitch.)

·         It is unclear whether all the chart results were measured using 4 
physical cores (no HT) or 2 physical cores (4 threads with HT)

·         How do you report your pps? ;-) Are those

o   vswitch centric (how many packets the vswitch forwards per second coming 
from traffic gen and from VMs)

o   or traffic gen centric aggregated TX (how many pps are sent by the traffic 
gen on both interfaces)

o   or traffic gen centric aggregated TX+RX (how many pps are sent and received 
by the traffic gen on both interfaces)


·         From the numbers shown, it looks like it is the first or the last.

·         Unidirectional or symmetric bi-directional traffic?

·         BIOS Turbo boost enabled or disabled?

·         How many vcpus running the testpmd VM?

·         How do you range the combinations in your 1M flows over src/dst MAC? 
I’m not aware of any real NFV cloud deployment/VNF that handles that type of 
flow pattern; are you?

Thanks

  Alec


From: <vpp-dev-boun...@lists.fd.io> on behalf of "Maciek Konstantynowicz 
(mkonstan)" <mkons...@cisco.com>
Date: Wednesday, February 15, 2017 at 1:28 PM
To: Thomas F Herbert <therb...@redhat.com>
Cc: Andrew Theurer <atheu...@redhat.com>, Douglas Shakshober 
<dsh...@redhat.com>, "csit-...@lists.fd.io" <csit-...@lists.fd.io>, vpp-dev 
<vpp-dev@lists.fd.io>, Karl Rister <kris...@redhat.com>
Subject: Re: [vpp-dev] Interesting perf test results from Red Hat's test team

Thomas, many thanks for sending this.

A few comments and questions after reading the slides:

1. s3 clarification - host and data plane thread setup - vswitch pmd (data 
plane) thread placement
    a. “1PMD/core (4 core)” - HT (SMT) disabled, 4 phy cores used for vswitch, 
each with data plane thread.
    b. “2PMD/core (2 core)” - HT (SMT) enabled, 2 phy cores, 4 logical cores 
used for vswitch, each with data plane thread.
    c. in both cases each data plane thread handling a single interface - 2* 
physical, 2* vhost => 4 threads, all busy.
    d. in both cases frames are dropped by vswitch or in vring due to vswitch 
not keeping up - IOW testpmd in kvm guest is not DUT.
2. s3 question - vswitch setup - it is unclear what the forwarding mode of 
each vswitch is, as only srcIP changed in flows
    a. flow or MAC learning mode?
    b. port to port crossconnect?
3. s3 comment - host and data plane thread setup
    a. “2PMD/core (2 core)” case - thread placement may yield different results
        - physical interface threads as siblings vs.
        - physical and virtual interface threads as siblings.
    b. “1PMD/core (4 core)” - one would expect these to be much higher than 
“2PMD/core (2 core)”
        - speculation: possibly due to "instruction load" imbalance between 
threads.
        - two types of thread with different "instruction load": phy->vhost vs. 
vhost->phy
        - "instruction load" = instr/pkt, instr/cycle (IPC efficiency).
4. s4 comment - results look as expected for vpp
5. s5 question - unclear why throughput doubled
    a. e.g. for vpp from "11.16 Mpps" to "22.03 Mpps"
    b. if only queues increased and cpu resources did not - or have they?
6. s6 question - similar to point 5 - unclear cpu and thread resources.
7. s7 comment - anomaly for 3q (virtio multi-queue) for (srcMAC, dstMAC)
    a. could be due to flow hashing inefficiency.
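On point 7, a quick sketch of how uneven flow hashing would cap multi-queue throughput; the per-queue capacity and the skew below are hypothetical, chosen only to show the shape of the effect:

```python
# Sketch: aggregate throughput is limited by the busiest queue, so an
# uneven hash distribution over (srcMAC, dstMAC) caps multi-queue scaling.
# Per-queue capacity and flow fractions are hypothetical.

per_queue_mpps = 7.4   # assumed capacity of one queue/core, in Mpps

def max_throughput(fractions, capacity):
    """Offered load at which the busiest queue saturates."""
    return capacity / max(fractions)

balanced = [1/3, 1/3, 1/3]
skewed = [0.5, 0.3, 0.2]   # hash sends half the flows to one queue

print(max_throughput(balanced, per_queue_mpps))  # ~22.2 Mpps, full 3x scaling
print(max_throughput(skewed, per_queue_mpps))    # 14.8 Mpps, barely 2x
```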

-Maciek

On 15 Feb 2017, at 17:34, Thomas F Herbert <therb...@redhat.com> wrote:

Here are test results on VPP 17.01 compared with OVS/DPDK 2.6/1611 performed by 
Karl Rister of Red Hat.
This is PVP testing with 1, 2 and 3 queues. It is an interesting comparison 
with the CSIT results. Of particular interest is the drop off on the 3 queue 
results.
--TFH

--
Thomas F Herbert
SDN Group
Office of Technology
Red Hat
<vpp-17.01_vs_ovs-2.6.pdf>
_______________________________________________
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev
