For visibility, here are updated Red Hat RFC 2544 testing results with 16.12-rc0.

This is PVP testing and shows VPP vhost-user performance compared with OVS/DPDK.

Once we have vhost-user merged and tested in CSIT, I would expect to see significant improvement.

--TFH

On 11/03/2016 02:51 PM, Karl Rister wrote:
Hi All

Below are the results I previously shared for an OVS vs. VPP comparison,
along with some new VPP results using a newer version and some
single-core results for both OVS and VPP.

New VPP version: 16.12-rc0~247_g9c2964c~b1272

2 Core Results (4 host PMD):

Mergeable Buffers Disabled:
  - OVS:
    - 0.002% Loss: 11.5216 Mpps bidirectional (5.7608 Mpps each way)
    - 0% Loss: 11.5216 Mpps bidirectional (5.7608 Mpps each way)
  - VPP (16.09):
    - 0.002% Loss: 7.5537 Mpps bidirectional (3.7769 Mpps each way)
    - 0% Loss: 5.2971 Mpps bidirectional (2.6486 Mpps each way)
  - VPP (16.12-rc0~247_g9c2964c~b1272):
    - 0.002% Loss: 7.8134 Mpps bidirectional (3.9067 Mpps each way)
    - 0% Loss: 6.0250 Mpps bidirectional (3.0125 Mpps each way)

Mergeable Buffers Enabled:
  - OVS:
    - 0.002% Loss: 6.5626 Mpps bidirectional (3.2813 Mpps each way)
    - 0% Loss: 6.3622 Mpps bidirectional (3.1811 Mpps each way)
  - VPP (16.09):
    - 0.002% Loss: 7.8134 Mpps bidirectional (3.9067 Mpps each way)
    - 0% Loss: 5.1029 Mpps bidirectional (2.5515 Mpps each way)
  - VPP (16.12-rc0~247_g9c2964c~b1272):
    - 0.002% Loss: 7.8134 Mpps bidirectional (3.9067 Mpps each way)
    - 0% Loss: 6.0250 Mpps bidirectional (3.0125 Mpps each way)

Mergeable Buffers Disabled + VPP no-multi-seg:
  - VPP (16.09):
    - 0.002% Loss: 8.0654 Mpps bidirectional (4.0327 Mpps each way)
    - 0% Loss: 5.6442 Mpps bidirectional (2.8221 Mpps each way)
  - VPP (16.12-rc0~247_g9c2964c~b1272):
    - 0.002% Loss: 8.4184 Mpps bidirectional (4.2092 Mpps each way)
    - 0% Loss: 6.2666 Mpps bidirectional (3.1333 Mpps each way)

1 Core Results (2 host PMD):

Mergeable Buffers Disabled:
  - OVS:
    - 0.002% Loss: 5.9533 Mpps bidirectional (2.9766 Mpps each way)
    - 0% Loss: 5.7480 Mpps bidirectional (2.8740 Mpps each way)
  - VPP (16.12-rc0~247_g9c2964c~b1272):
    - 0.002% Loss: 3.9503 Mpps bidirectional (1.9751 Mpps each way)
    - 0% Loss: 3.4587 Mpps bidirectional (1.7294 Mpps each way)

Mergeable Buffers Enabled:
  - OVS:
    - 0.002% Loss: 4.4179 Mpps bidirectional (2.2089 Mpps each way)
    - 0% Loss: 4.4179 Mpps bidirectional (2.2089 Mpps each way)
  - VPP (16.12-rc0~247_g9c2964c~b1272):
    - 0.002% Loss: 3.9503 Mpps bidirectional (1.9751 Mpps each way)
    - 0% Loss: 3.6346 Mpps bidirectional (1.8173 Mpps each way)

Mergeable Buffers Disabled + VPP no-multi-seg:
  - VPP (16.12-rc0~247_g9c2964c~b1272):
    - 0.002% Loss: 4.0992 Mpps bidirectional (2.0496 Mpps each way)
    - 0% Loss: 3.4587 Mpps bidirectional (1.7294 Mpps each way)

I've added the data to the shared spreadsheet and changed some
formatting slightly to distinguish between the different results.

https://docs.google.com/a/redhat.com/spreadsheets/d/1K6zDVgZYPJL-7EsIYMBIZCn65NAkVL_GtkBrAnAdXao/edit?usp=sharing

Karl

On 10/21/2016 03:06 PM, Karl Rister wrote:
Hi All

Below are updated performance results for OVS and VPP on our new
Broadwell testbed.  I've tried to include all the relevant details; let
me know if I have forgotten anything of interest to you.

Karl



Processor: Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz (Broadwell)
Environment: RT + Hyperthreading (see [1] for details on KVM-RT)
Kernel: 3.10.0-510.rt56.415.el7.x86_64
Tuned: 2.7.1-3.el7

/proc/cmdline:
<...> default_hugepagesz=1G iommu=pt intel_iommu=on isolcpus=4-55
nohz=on nohz_full=4-55 rcu_nocbs=4-55 intel_pstate=disable nosoftlockup

Versions:
- OVS: openvswitch-2.5.0-10.git20160727.el7fdb + BZ fix [2]
- VPP: v16.09

NUMA node 0 CPU sibling pairs (derivation sketched below):
- (0,28)(2,30)(4,32)(6,34)(8,36)(10,38)(12,40)(14,42)(16,44)(18,46)
   (20,48)(22,50)(24,52)(26,54)
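
For reference, the sibling pairs above can be read straight from sysfs; a
minimal Python sketch (assuming the standard Linux topology files are
present) is:

    # Minimal sketch: list hyperthread sibling pairs from sysfs. This covers
    # all NUMA nodes; the node-0 subset is what is listed above.
    from pathlib import Path

    pairs = set()
    for cpu_dir in Path("/sys/devices/system/cpu").glob("cpu[0-9]*"):
        siblings = cpu_dir / "topology" / "thread_siblings_list"
        if siblings.exists():
            # File contains e.g. "0,28" (or "0-28" on some kernels).
            pairs.add(siblings.read_text().strip())
    print(sorted(pairs))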

Host PMD Assignment (see the CPU-mask sketch after this list):
- dpdk0 = CPU 6
- vhost-user1 = CPU 34
- dpdk1 = CPU 8
- vhost-user2 = CPU 36
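
Assuming the OVS side uses the standard other_config:pmd-cpu-mask mechanism
(and VPP the corelist-workers list in startup.conf), the hex mask covering
the four host PMD CPUs above is just a bit-per-CPU sum; a small Python
sketch:

    # Minimal sketch: build the PMD CPU mask for the host PMD cores above.
    # The ovs-vsctl line in the comment is an assumption about how the mask
    # would be applied, not a record of the actual test configuration.
    pmd_cpus = [6, 8, 34, 36]      # dpdk0, dpdk1, vhost-user1, vhost-user2
    mask = 0
    for cpu in pmd_cpus:
        mask |= 1 << cpu           # one bit per logical CPU
    print(hex(mask))               # 0x1400000140
    # e.g. ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x1400000140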

Guest CPU Assignment:
- Emulator = CPU 20
- VCPU 0 (Housekeeping) = CPU 22
- VCPU 1 (PMD) = CPU 24
- VCPU 2 (PMD) = CPU 26

Configuration Details:
- OVS: custom OpenFlow rules direct packets similarly to VPP L2 xconnect
- VPP: L2 xconnect
- DPDK v16.07.0 testpmd in guest
- SCHED_FIFO priority 95 applied to all PMD threads (OVS/VPP/testpmd); see
  the sketch after this list
- SCHED_FIFO priority 1 applied to Guest VCPUs used for PMDs
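
The SCHED_FIFO settings are what `chrt -f -p <prio> <tid>` would apply; a
minimal Python sketch of the equivalent call (the thread IDs below are
placeholders, and the call needs root/CAP_SYS_NICE):

    # Minimal sketch: put a thread into the SCHED_FIFO real-time class.
    import os

    def set_fifo_priority(tid, priority):
        os.sched_setscheduler(tid, os.SCHED_FIFO, os.sched_param(priority))

    # Placeholder thread IDs: priority 95 for the PMD threads, priority 1
    # for the guest VCPU threads backing the PMDs.
    for tid in (12345, 12346):
        set_fifo_priority(tid, 95)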

Test Parameters:
- 64B packet size
- L2 forwarding test
   - All tests are bidirectional PVP (physical<->virtual<->physical)
   - Packets enter on a NIC port and are forwarded to the guest
   - Inside the guest, received packets are forwarded back out in the
     opposite direction
- Binary search starting at line rate (14.88 Mpps each way)
- 10 Minute Search Duration
- 2 Hour Validation Duration follows a passing 10 Minute Search run
   - If validation fails, the search continues (see the sketch after this list)
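
For clarity, a simplified sketch of the search logic described above (not
the actual harness); run_trial(rate_mpps, duration_s) is a hypothetical
helper that offers traffic at the given rate and returns the measured loss
fraction:

    # Simplified sketch of the binary search with a validation pass.
    LINE_RATE = 14.88          # Mpps each way for 64B packets
    SEARCH_S = 10 * 60         # 10 minute search trials
    VALIDATE_S = 2 * 60 * 60   # 2 hour validation trial

    def find_max_rate(loss_threshold, run_trial, precision=0.01):
        low, high = 0.0, LINE_RATE
        rate, best = LINE_RATE, 0.0            # first trial at line rate
        while True:
            passed = (run_trial(rate, SEARCH_S) <= loss_threshold and
                      run_trial(rate, VALIDATE_S) <= loss_threshold)
            if passed:
                best, low = rate, rate         # validated; try a higher rate
            else:
                high = rate                    # too much loss; try lower
            if high - low <= precision:
                return best
            rate = (low + high) / 2.0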

Mergeable Buffers Disabled:
- OVS:
   - 0.002% Loss: 11.5216 Mpps bidirectional (5.7608 Mpps each way)
   - 0% Loss: 11.5216 Mpps bidirectional (5.7608 Mpps each way)
- VPP:
   - 0.002% Loss: 7.5537 Mpps bidirectional (3.7769 Mpps each way)
   - 0% Loss: 5.2971 Mpps bidirectional (2.6486 Mpps each way)

Mergeable Buffers Enabled:
- OVS:
   - 0.002% Loss: 6.5626 Mpps bidirectional (3.2813 Mpps each way)
   - 0% Loss: 6.3622 Mpps bidirectional (3.1811 Mpps each way)
- VPP:
   - 0.002% Loss: 7.8134 Mpps bidirectional (3.9067 Mpps each way)
   - 0% Loss: 5.1029 Mpps bidirectional (2.5515 Mpps each way)

Mergeable Buffers Disabled + VPP no-multi-seg:
- VPP:
   - 0.002% Loss: 8.0654 Mpps bidirectional (4.0327 Mpps each way)
   - 0% Loss: 5.6442 Mpps bidirectional (2.8221 Mpps each way)

The details of these results (including latency metrics and links to the
raw data) are available at [3].

[1]: https://virt-wiki.lab.eng.brq.redhat.com/KVM/RealTime
[2]: https://bugzilla.redhat.com/show_bug.cgi?id=1344787
[3]:
https://docs.google.com/a/redhat.com/spreadsheets/d/1K6zDVgZYPJL-7EsIYMBIZCn65NAkVL_GtkBrAnAdXao/edit?usp=sharing



--
*Thomas F Herbert*
SDN Group
Office of Technology
*Red Hat*