Thanks for the quick reply, Maciek.

Just to double-check, your 1518B results should be ~1.3 Mpps / 16 _G_bps,
not _H_, right?  I assume you aren't using humble-burst measurements.
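
For what it's worth, here's a quick back-of-envelope check of my own (not a
number from the report), which seems consistent with Gbps:

    # Rough sanity check: 1.3 Mpps of 1518B Ethernet frames on the wire.
    pps = 1.3e6                    # reported NDR packet rate
    frame_bits = 1518 * 8          # L2 frame size in bits (preamble/IFG ignored)
    print(pps * frame_bits / 1e9)  # ~15.8, i.e. roughly 16 Gbps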

I looked through some of the links and understand that performance should
generally scale with frequency and the number of threads, though mileage may
vary when dealing with VMs.  With this in mind, are the numbers you're quoting
for just a single worker thread on that single core?
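
(For clarity, by a single worker I mean something like the cpu stanza below in
VPP's startup.conf, pinning one worker thread to one core.  This is just an
illustrative sketch; the core numbers are made up and may not match your lab
setup.)

    cpu {
      main-core 0          # main/control thread
      corelist-workers 1   # one data plane worker on one physical core
    }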

Thanks,
Eric

On Thu, Apr 13, 2017 at 11:52:26AM +0000, Maciek Konstantynowicz (mkonstan) 
wrote:
> +csit-dev
> 
> Eric,
> 
> The way we test vpp vhostuser in CSIT is by generating packet streams
> from external traffic generator per VM topology shown at this link:
> 
>     https://docs.fd.io/csit/rls1701/report/vpp_performance_tests/overview.html#tested-physical-topologies
> 
> VM vhostuser testing methodology is described here:
> 
>     https://docs.fd.io/csit/rls1701/report/vpp_performance_tests/overview.html#methodology-kvm-vm-vhost
>     https://wiki.fd.io/view/CSIT/csit-perf-env-tuning-ubuntu1604
> 
> Reported measured non-drop-rate throughput, on machines in FD.io labs,
> is at the level of 3Mpps for 64B and 1.3Mpps/16Hbps for 1518B
> Ethernet frames, for the VPP data plane thread (worker thread) running on a
> single physical core.
> 
> Note that this is external throughput, i.e. from the traffic generator's
> perspective. To get the actual VPP packet forwarding capacity, the numbers
> need to be multiplied by two, as each packet is switched by VPP twice in
> each direction: NIC-to-VM and VM-to-NIC.
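> 
> As a quick illustration of that doubling (my own arithmetic, not a figure
> taken from the report):
> 
>     # External (traffic-generator-facing) rate vs. packets VPP actually switches.
>     external_pps = 3.0e6                  # 64B NDR seen by the traffic generator
>     vpp_switched_pps = external_pps * 2   # each packet crosses VPP twice (NIC<->VM)
>     print(vpp_switched_pps / 1e6)         # ~6 Mpps of actual VPP forwarding work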
> 
> Here are pointers to throughput graphs and detailed results listings:
> 
>     https://docs.fd.io/csit/rls1701/report/vpp_performance_tests/packet_throughput_graphs/vm_vhost.html#ndr-throughput
>     https://docs.fd.io/csit/rls1701/report/vpp_performance_tests/packet_throughput_graphs/vm_vhost.html#pdr-throughput
>     https://docs.fd.io/csit/rls1701/report/detailed_test_results/vpp_performance_results/vpp_performance_results.html#vm-vhost-connections
> 
> In case it's relevant, the FD.io CSIT performance test environment is described here:
> 
>     https://docs.fd.io/csit/rls1701/report/vpp_performance_tests/test_environment.html
> 
> For VM tests, Qemu is configured with tso4 and tso6 off:
> 
>     https://git.fd.io/csit/tree/resources/libraries/python/QemuUtils.py
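> 
> For illustration only, disabling TSO towards the guest generally comes down
> to qemu virtio-net device properties along these lines (a sketch of the idea,
> not the exact CSIT invocation -- see QemuUtils.py above for the real
> arguments; the socket path and MAC below are made up):
> 
>     # Sketch (Python) of a vhost-user netdev with TSO/checksum offloads disabled.
>     qemu_args = [
>         'qemu-system-x86_64',
>         '-chardev', 'socket,id=char1,path=/tmp/sock1.sock',
>         '-netdev', 'vhost-user,id=net1,chardev=char1,vhostforce',
>         '-device', 'virtio-net-pci,netdev=net1,mac=52:54:00:00:00:01,'
>                    'csum=off,gso=off,guest_tso4=off,guest_tso6=off',
>     ]
>     # e.g. subprocess.run(qemu_args) would launch a VM with these options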
> 
> hth
> 
> -Maciek
> 
> > On 13 Apr 2017, at 05:26, Ernst, Eric <eric.er...@intel.com> wrote:
> > 
> > Hey,
> > 
> > I've been reading through the various use case examples at wiki.fd.io, and
> > after reading through 
> > https://wiki.fd.io/view/VPP/Tutorial_Routing_and_Switching,
> > I came up with a recipe for testing:
> > 
> >   VM <- vpp vhost-user - l2 bridge domain - vpp vhost-user -> VM
> > 
> > For reference, I describe the setup at:
> > https://gist.github.com/egernst/5982ae6f0590cd83330faafacc3fd545
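> > 
> > In case it saves a click, the gist boils down to roughly the following
> > (a rough sketch from memory, not the gist verbatim; interface names assume
> > creation order and the exact CLI syntax may differ between VPP versions):
> > 
> >     # Python sketch: two vhost-user interfaces bridged in l2 bridge domain 1.
> >     import subprocess
> > 
> >     def vppctl(cmd):
> >         subprocess.run(['vppctl'] + cmd.split(), check=True)
> > 
> >     vppctl('create vhost-user socket /tmp/sock1.sock server')
> >     vppctl('create vhost-user socket /tmp/sock2.sock server')
> >     for intf in ('VirtualEthernet0/0/0', 'VirtualEthernet0/0/1'):
> >         vppctl('set interface l2 bridge %s 1' % intf)   # add to bridge domain 1
> >         vppctl('set interface state %s up' % intf)
> >     # Each VM then attaches to its socket (/tmp/sock1.sock, /tmp/sock2.sock)
> >     # via a qemu vhost-user netdev.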
> > 
> > After verifying connectivity, I used iperf3 to get baseline bandwidth numbers.
> > 
> > I am seeing on the order of ~45 Mbits/sec in this configuration, using
> > default VPP config options, on a DP Xeon system, running on the same
> > socket.  I was surprised by this, so I ran a similar test using vhost/veth,
> > connecting two namespaces, also through an l2 bridge domain.  In this case
> > I saw ~2 Gbits/sec.  Better, but still surprising.
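> > 
> > For reference, that veth comparison was along these lines (a rough sketch
> > with illustrative names, not the exact commands I ran):
> > 
> >     # Python sketch: one namespace attached to the VPP bridge via veth/af_packet.
> >     import subprocess
> > 
> >     def sh(cmd):
> >         subprocess.run(cmd.split(), check=True)
> > 
> >     sh('ip netns add ns1')
> >     sh('ip link add veth1 type veth peer name vpp1')
> >     sh('ip link set veth1 netns ns1')
> >     sh('ip link set vpp1 up')
> >     sh('vppctl create host-interface name vpp1')        # af_packet interface in VPP
> >     sh('vppctl set interface l2 bridge host-vpp1 1')    # same l2 bridge domain
> >     sh('vppctl set interface state host-vpp1 up')
> >     # ...repeat for ns2/veth2/vpp2, assign IPs inside the namespaces,
> >     # then run iperf3 between them.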
> > 
> > Is there something (obvious) wrong in my base setup (i.e., using an l2
> > bridge domain)?  I think it'd be useful to have a "best practice" example
> > on the site for VM connectivity, but first I want to make sure I'm on the
> > right path before submitting/contributing.
> > 
> > While I don't think it would account for this amount of degradation, I'm
> > curious whether there is TSO support in VPP.
> > 
> > Finally, at ONS, Ed gave a great overview of VPP, including sample
> > benchmark numbers.  Are there similar results for vhost-user-enabled VMs?
> > 
> > Thanks,
> > Eric
> 
_______________________________________________
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev
