Pierre,

Do you have a ticket requesting an update of the Jenkins QEMU so we can
get your patch unblocked?

Ed

On Tue, Oct 25, 2016 at 12:14 AM, Pierre Pfister (ppfister) <
ppfis...@cisco.com> wrote:

> Hello,
>
> For now, the multi-queue patch is still stuck in Gerrit because Jenkins is
> using an old, buggy QEMU version...
> I made some measurements on vhost:
> FD.io_mini-summit_916_Vhost_Performance_and_Optimization.pptx
> <https://wiki.fd.io/images/c/cc/FD.io_mini-summit_916_Vhost_Performance_and_Optimization.pptx>
>
> I see you tried different combinations with and without mergeable
> descriptors.
> Did you do the same with 'indirect descriptors'? They have been supported
> by VPP since September or so.
> The issue with the zillions of ways a buffer may be forwarded is that we
> only know which modes are enabled or disabled, but never exactly what is
> happening in practice.
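>
> (A quick aside: one way to see what was actually negotiated, assuming
> vppctl is available and your VPP build's 'show vhost-user' output includes
> the negotiated feature bits; this is a sketch, not the only way:)
>
>   vppctl show vhost-user
>
> The negotiated features (e.g. VIRTIO_RING_F_INDIRECT_DESC,
> VIRTIO_NET_F_MRG_RXBUF) show what the guest and VPP actually agreed on,
> rather than what was merely requested.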
>
> Using indirect descriptors, I got VPP to 10 Mpps at 0% loss (5 Mpps each
> way). And the setup was stricter than yours, as VPP had only 2 threads on
> the same core.
>
> You may also want to try 'chrt -r' on your worker processes. This improves
> real-time scheduling properties.
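>
> (A minimal sketch of that, assuming the VPP worker threads are named
> 'vpp_wk*'; the thread-name pattern and the priority value 1 are
> placeholders for illustration:)
>
>   # set SCHED_RR priority 1 on every VPP worker thread
>   for tid in $(ps -eLo tid,comm | awk '/vpp_wk/ {print $1}'); do
>       chrt -r -p 1 "$tid"
>   done
>   # verify the policy and priority now in effect on the last thread
>   chrt -p "$tid"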
>
> Thanks,
>
> - Pierre
>
>
>
>
> On Oct 25, 2016, at 06:36, Jerome Tollet (jtollet) <jtol...@cisco.com>
> wrote:
>
> + Pierre Pfister (ppfister), who ran a lot of benchmarks for VPP/vhost-user
>
> *From: *<vpp-dev-boun...@lists.fd.io> on behalf of Thomas F Herbert <
> therb...@redhat.com>
> *Date: *Monday, October 24, 2016 at 21:32
> *To: *"kris...@redhat.com" <kris...@redhat.com>, Andrew Theurer <
> atheu...@redhat.com>, Franck Baudin <fbau...@redhat.com>, Rashid Khan <
> rk...@redhat.com>, Bill Michalowski <bmich...@redhat.com>, Billy McFall <
> bmcf...@redhat.com>, Douglas Shakshober <dsh...@redhat.com>
> *Cc: *vpp-dev <vpp-dev@lists.fd.io>, "Damjan Marion (damarion)" <
> damar...@cisco.com>
> *Subject: *Re: [vpp-dev] updated ovs vs. vpp results for 0.002% and 0% loss
>
>
> +Maciek Konstantynowicz CSIT (mkonstan)
>
> +vpp-dev
>
> +Damjan Marion (damarion)
>
> Karl, Thanks!
>
> Your results seem roughly consistent with VPP's CSIT testing of vhost for
> 16.09, but for broader visibility I am including some people on the VPP
> team: Damjan, who is working on multi-queue etc. (I see that some
> performance-related patches that might help have been merged in vhost
> since 16.09), and Maciek, who works in the CSIT project and has done the
> VPP testing.
>
> I want to open up the discussion with respect to the following:
>
> 1. Optimizing for maximum vhost performance with VPP, including vhost-user
> multi-queue.
>
> 2. Comparison with CSIT results for vhost (two CSIT links are included
> below).
>
> 3. Statistics:
>
> 4. Tuning suggestions:
>
> Following are some CSIT results:
>
> Compiled 16.09 results for vhost-user:
> https://wiki.fd.io/view/CSIT/VPP-16.09_Test_Report#VM_vhost-user_Throughput_Measurements
>
> Latest CSIT output from top of master, 16.12-rc0
>
> https://jenkins.fd.io/view/csit/job/csit-vpp-verify-perf-master-nightly-all/1085/console
>
> --Tom
> On 10/21/2016 04:06 PM, Karl Rister wrote:
>
> Hi All
>
>
>
> Below are updated performance results for OVS and VPP on our new
>
> Broadwell testbed.  I've tried to include all the relevant details, let
>
> me know if I have forgotten anything of interest to you.
>
>
>
> Karl
>
>
>
>
>
>
>
> Processor: Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz (Broadwell)
>
> Environment: RT + Hyperthreading (see [1] for details on KVM-RT)
>
> Kernel: 3.10.0-510.rt56.415.el7.x86_64
>
> Tuned: 2.7.1-3.el7
>
>
>
> /proc/cmdline:
>
> <...> default_hugepagesz=1G iommu=pt intel_iommu=on isolcpus=4-55
>
> nohz=on nohz_full=4-55 rcu_nocbs=4-55 intel_pstate=disable nosoftlockup
>
>
>
> Versions:
>
> - OVS: openvswitch-2.5.0-10.git20160727.el7fdb + BZ fix [2]
>
> - VPP: v16.09
>
>
>
> NUMA node 0 CPU sibling pairs:
>
> - (0,28)(2,30)(4,32)(6,34)(8,36)(10,38)(12,40)(14,42)(16,44)(18,46)
>
>   (20,48)(22,50)(24,52)(26,54)
>
>
>
> Host PMD Assignment (a configuration sketch follows this list):
>
> - dpdk0 = CPU 6
>
> - vhost-user1 = CPU 34
>
> - dpdk1 = CPU 8
>
> - vhost-user2 = CPU 36
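>
> (For reference, a hedged sketch of how PMD-to-core pinning like the above
> is commonly expressed; the exact mechanism for placing an individual port
> or vhost queue on a specific core varies by version, so treat these lines
> as illustrative only:)
>
>   # OVS-DPDK: PMD cores 6, 8, 34 and 36 -> cpu mask 0x1400000140
>   ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x1400000140
>
>   # VPP startup.conf: run worker threads on the same cores
>   cpu {
>     corelist-workers 6,8,34,36
>   }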
>
>
>
> Guest CPU Assignment:
>
> - Emulator = CPU 20
>
> - VCPU 0 (Housekeeping) = CPU 22
>
> - VCPU 1 (PMD) = CPU 24
>
> - VCPU 2 (PMD) = CPU 26
>
>
>
> Configuration Details:
>
> - OVS: custom OpenFlow rules direct packets similarly to VPP L2 xconnect
>
> - VPP: L2 xconnect (see the sketch after this list)
>
> - DPDK v16.07.0 testpmd in guest
>
> - SCHED_FIFO priority 95 applied to all PMD threads (OVS/VPP/testpmd)
>
> - SCHED_FIFO priority 1 applied to Guest VCPUs used for PMDs
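>
> (As referenced above, a minimal sketch of the VPP L2 xconnect setup with
> placeholder interface names; the actual interface names on this testbed
> are not shown in the thread:)
>
>   # cross-connect each physical port with its vhost-user port, both ways
>   vppctl set interface l2 xconnect TenGigabitEthernet0/4/0 VirtualEthernet0/0/0
>   vppctl set interface l2 xconnect VirtualEthernet0/0/0 TenGigabitEthernet0/4/0
>   vppctl set interface l2 xconnect TenGigabitEthernet0/5/0 VirtualEthernet0/0/1
>   vppctl set interface l2 xconnect VirtualEthernet0/0/1 TenGigabitEthernet0/5/0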
>
>
>
> Test Parameters:
>
> - 64B packet size
>
> - L2 forwarding test
>
>   - All tests are bidirectional PVP (physical<->virtual<->physical)
>
>   - Packets enter on a NIC port and are forwarded to the guest
>
>   - Inside the guests, received packets are sent out the opposite
>
>     direction
>
> - Binary search starting at line rate (14.88 Mpps each way)
>
> - 10 Minute Search Duration
>
> - 2 Hour Validation Duration follows passing run for 10 Minute Search
>
>   - If validation fails, search continues
>
>
>
> Mergeable Buffers Disabled:
>
> - OVS:
>
>   - 0.002% Loss: 11.5216 Mpps bidirectional (5.7608 Mpps each way)
>
>   - 0% Loss: 11.5216 Mpps bidirectional (5.7608 Mpps each way)
>
> - VPP:
>
>   - 0.002% Loss: 7.5537 Mpps bidirectional (3.7769 Mpps each way)
>
>   - 0% Loss: 5.2971 Mpps bidirectional (2.6486 Mpps each way)
>
>
>
> Mergeable Buffers Enabled:
>
> - OVS:
>
>   - 0.002% Loss: 6.5626 Mpps bidirectional (3.2813 Mpps each way)
>
>   - 0% Loss: 6.3622 Mpps bidirectional (3.1811 Mpps each way)
>
> - VPP:
>
>   - 0.002% Loss: 7.8134 Mpps bidirectional (3.9067 Mpps each way)
>
>   - 0% Loss: 5.1029 Mpps bidirectional (2.5515 Mpps each way)
>
>
>
> Mergeable Buffers Disabled + VPP no-multi-seg:
>
> - VPP:
>
>   - 0.002% Loss: 8.0654 Mpps bidirectional (4.0327 Mpps each way)
>
>   - 0% Loss: 5.6442 Mpps bidirectional (2.8221 Mpps each way)
>
>
>
> The details of these results (including latency metrics and links to the
>
> raw data) are available at [3].
>
>
>
> [1]: https://virt-wiki.lab.eng.brq.redhat.com/KVM/RealTime
>
> [2]: https://bugzilla.redhat.com/show_bug.cgi?id=1344787
>
> [3]:
>
> https://docs.google.com/a/redhat.com/spreadsheets/d/1K6zDVgZYPJL-7EsIYMBIZCn65NAkVL_GtkBrAnAdXao/edit?usp=sharing
>
>
>
>
> --
> *Thomas F Herbert*
> SDN Group
> Office of Technology
> *Red Hat*
>
>
>
_______________________________________________
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev