Maciek, thanks very much!

2016-11-30 12:36 GMT+08:00 Alec Hothan (ahothan) <ahot...@cisco.com>:

>
>
> Thanks for putting this together!
>
> A few preliminary general comments which we can discuss Wednesday.
>
>
>
> *2p1nic vs 1p1nic*:
>
> I know lots of vswitch benchmarks love pairing phys interfaces as it is
> easier to scale with more cores…
>
> But a lot of OpenStack deployments will come with one physical port for all
> the tenant traffic. In that case, looped traffic will come in through one
> VLAN and go out through another VLAN (VLAN case), or come in through a
> VxLAN tunnel and go out through another VxLAN tunnel (VxLAN overlay). The
> maximum throughput is 10G in and 10G out, with fewer wires on the compute
> nodes and fewer ports on the TOR.
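>
> To make the 1p1nic VLAN case concrete, here is a minimal sketch of the
> vswitch-side VPP CLI, assuming a single physical port
> TenGigabitEthernet0/8/0, VLANs 100/200, and a vhost pair
> VirtualEthernet0/0/0-1 (all names are placeholders):
>
>     # in-VLAN: physical sub-interface <-> vhost, tag popped at the bridge
>     vppctl create sub-interfaces TenGigabitEthernet0/8/0 100 dot1q 100 exact-match
>     vppctl set interface l2 tag-rewrite TenGigabitEthernet0/8/0.100 pop 1
>     vppctl set interface l2 bridge TenGigabitEthernet0/8/0.100 1
>     vppctl set interface l2 bridge VirtualEthernet0/0/0 1
>     # out-VLAN: same pattern on the second VLAN and second vhost interface
>     vppctl create sub-interfaces TenGigabitEthernet0/8/0 200 dot1q 200 exact-match
>     vppctl set interface l2 tag-rewrite TenGigabitEthernet0/8/0.200 pop 1
>     vppctl set interface l2 bridge TenGigabitEthernet0/8/0.200 2
>     vppctl set interface l2 bridge VirtualEthernet0/0/1 2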
>
>
>
> I’m not sure there are, or will be, many deployments with 2 independent
> physical ports for tenant traffic. At 30 nodes per rack, that makes 60
> ports per TOR just for the tenant traffic. If you factor in bonding, that
> would mean 4 wires per compute node instead of 2.
>
> Deployments that really need more than a 10G link might just use one 40G
> link (or 2 bonded links) rather than 2 individual 10G links.
>
> Neutron supports multiple interfaces, but it looks harder to manage (I
> guess you would need to split your VLAN range in 2, and in the case of
> VxLAN overlay I’m not sure how that can be managed).
>
>
>
> I’d like to hear from others whether we should focus on 1p1nic first. It
> just seems to me this might be more representative of real NFV deployments
> than 2p1nic.
>
>
>
>
>
> *Test topologies of interest for OpenStack*:
>
>
>
> Could we mark all those test topologies that are applicable to OpenStack?
> Or more generally describe the use case for each test topology.
>
> I don’t think OpenStack will use any l2xc in VPP.
>
>
>
> OpenStack ML2/VPP/VLAN will use dot1q-l2bdbase-eth-2vhost-1vm
>
> OpenStack VPP/VxLAN will use ethip4vxlan-l2bdbase-eth-2vhost-1vm
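>
> For reference, a minimal sketch of the vswitch side of
> ethip4vxlan-l2bdbase-eth-2vhost-1vm; the underlay addresses, VNI and
> interface names are placeholders:
>
>     # underlay IP on the single physical port
>     vppctl set interface ip address TenGigabitEthernet0/8/0 192.168.1.1/24
>     vppctl set interface state TenGigabitEthernet0/8/0 up
>     # VXLAN tunnel bridged to the VM's vhost interface
>     vppctl create vxlan tunnel src 192.168.1.1 dst 192.168.1.2 vni 100
>     vppctl set interface l2 bridge vxlan_tunnel0 1
>     vppctl set interface l2 bridge VirtualEthernet0/0/0 1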
>
>
>
> *VM images*:
>
>
>
> I think vsperf is using testpmd. I know some other perf teams use l2fwd.
>
> In our team, we use VPP l2 x-connect in the VM (what you call vswitch VPP)
> and are evaluating testpmd, and they don’t seem to differ too much in
> results. Seems like testpmd is the way to go for L2…
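>
> For illustration, minimal sketches of both guest-side options; core
> lists, PCI addresses and interface names are placeholders:
>
>     # testpmd doing plain L2 io forwarding between the two virtio ports
>     testpmd -l 0-2 -n 4 -w 00:04.0 -w 00:05.0 -- \
>         --nb-cores=2 --forward-mode=io --auto-start \
>         --burst=64 --txd=1024 --rxd=1024
>
>     # roughly equivalent VPP l2 x-connect inside the VM (both directions)
>     vppctl set interface l2 xconnect GigabitEthernet0/4/0 GigabitEthernet0/5/0
>     vppctl set interface l2 xconnect GigabitEthernet0/5/0 GigabitEthernet0/4/0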
>
>
>
> It would also be good to have the VM image creation scripted, so that
> anybody can recreate the images from scratch. We use DIB (OpenStack disk
> image builder) for creating our VM images, but any other scripted solution
> should work.
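>
> For example, a one-line DIB sketch (the element list and image name are
> illustrative):
>
>     disk-image-create ubuntu vm -o csit-vhost-guest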
>
> For easier reuse inside OpenStack, the configuration of the VM instance
> must be as automatic as possible. That is easy for the L2 case (just
> cross-connect the 2 virtual interfaces). For the L3 case, the L3 config
> should be done through config drive (not SSH).
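>
> A minimal sketch of a first-boot script that config drive could deliver as
> user-data for the L3 case (addresses and interface names are placeholders):
>
>     #!/bin/sh
>     # hypothetical config-drive user-data: static L3 config, no SSH needed
>     ip addr add 10.0.1.1/24 dev eth1
>     ip addr add 10.0.2.1/24 dev eth2
>     ip link set eth1 up
>     ip link set eth2 up
>     sysctl -w net.ipv4.ip_forward=1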
>
>
>
>
>
> That’s it for now,
>
>
>
>   Alec
>
>
>
>
>
>
>
>
>
> *From: *"Maciek Konstantynowicz (mkonstan)" <mkons...@cisco.com>
> *Date: *Tuesday, November 29, 2016 at 6:27 PM
> *To: *Thomas F Herbert <therb...@redhat.com>, vpp-dev <vpp-dev@lists.fd.io>,
> "csit-...@lists.fd.io" <csit-...@lists.fd.io>, "Pierre Pfister
> (ppfister)" <ppfis...@cisco.com>, Andrew Theurer <atheu...@redhat.com>,
> Douglas Shakshober <dsh...@redhat.com>, Rashid Khan <rk...@redhat.com>,
> Karl Rister <kris...@redhat.com>, Irene Liew <irene.l...@intel.com>,
> "Alec Hothan (ahothan)" <ahot...@cisco.com>, Damon Wang <
> damon.dev...@gmail.com>
> *Subject: *Re: [vpp-dev] vHost user test scenarios for CSIT
>
>
>
> All,
>
>
>
> Here is the first draft:
>
>     https://wiki.fd.io/view/CSIT/vhostuser_test_scenarios
>
>
>
> I did my best to capture all inputs from this thread. But it’s hardly
> readable yet - requires more TLC :)
>
> See what you think - feel free to add/edit things directly on the FD.io
> wiki page.
>
>
>
> Suggest we discuss next steps on the CSIT weekly call tomorrow; details here:
>
> https://wiki.fd.io/view/CSIT/Meeting
>
>
> -Maciek
>
>
>
> On 28 Nov 2016, at 07:37, Thomas F Herbert <therb...@redhat.com> wrote:
>
>
>
> All,
>
> At last week's CSIT meeting, Maciek (mkons...@cisco.com) offered to
> compile a summary of the suggestions on this mailing list.
>
>
>
> On 11/22/2016 11:34 AM, Pierre Pfister (ppfister) wrote:
>
> Hello Thomas,
>
>
>
> Sorry I haven't reached out sooner; I was travelling.
>
>
>
> Please have a look at vppsb/vhost-test
>
> It includes a standalone script which provides VPP and VM configuration
> for PVP tests.
>
> - Runs testpmd in the VM
>
> - Supports various CPU configurations for VPP
>
> - Can run with or without gdb, debug or release
>
>
>
> Not committed yet:
>
> - Support for VM restart
>
> - Support for VPP restart
>
> - Support for multiple additional (dead) vhost interfaces
>
>
>
> I did that outside of the context of CSIT so people can:
>
> - Look at it and see what optimisations are used
>
> - Use it without CSIT
>
>
>
> I will keep using and improving it, because I rely on it for my own
> development and testing.
>
>
>
> Rest of this inline.
>
>
>
> On 8 Nov 2016, at 22:25, Thomas F Herbert <therb...@redhat.com> wrote:
>
>
>
> All:
>
> Soliciting opinions on vhost-user test scenarios and guest modes for
> fd.io CSIT testing of VPP.
>
> I will forward any additional feedback to this mailing list, as well as
> summarize it.
>
> I asked some people who happen to be here at OVSCON, as well as some Red
> Hat and Intel people. I am also including some people who are involved in
> upstream vhost-user work in DPDK.
>
> So far, I have the following feedback, condensed in an attempt to keep the
> list small. If I left out anything, let me know.
>
> In addition to the PVP tests currently done with small packets:
>
> Testpmd in guest is OK for now.
>
> 1 Add multiple VMs (How many?)
>
> Makes sense to me. 2 is enough (4 would be a good number).
>
> 2 Both multi-queue and single-queue
>
> Yes. Ideally, 1-2-4 queues.
>
> With different numbers of workers (0 workers, i.e. single VPP thread; 1
> worker; queues*2 workers); see the startup.conf sketch after this list.
>
> 3 Tests that cause the equivalent of multiple flows in OVS, with a varying
> mix of traffic including layer 2 and layer 3 traffic.
>
> Yes. Should test with L2 and L3.
>
> 4 Multiple IFs (Guest or Host or Both?)
>
> Possibly.
>
> But more importantly, I think we need to have VM restart and interface
> restart (delete and re-create); see the sketch right after this list.
>
> OpenStack integration generates a significant amount of vhost interface
> deletion and re-creation.
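>
> A sketch of what that churn could look like on the vswitch side (socket
> path and interface name are placeholders; CLI details may differ by VPP
> version):
>
>     # create a vhost-user interface backed by a socket, then tear it down
>     vppctl create vhost-user socket /tmp/vhost-sock1 server
>     vppctl delete vhost-user VirtualEthernet0/0/0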
>
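> And a minimal startup.conf sketch for the queue/worker combinations above
> (core numbers and queue counts are placeholders; the QEMU side would also
> need a matching queues= setting on the vhost netdev):
>
>     cpu {
>         main-core 0              # main thread
>         corelist-workers 1-2     # 2 worker threads
>     }
>     dpdk {
>         dev default {
>             num-rx-queues 2      # 2 RX queues per NIC port
>         }
>     }
>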
> The following might not be doable by 17.01; if not, consider it a wish
> list for the future:
>
> 1 VxLAN tunneled traffic
>
> 2 VPP in guest with layer 2 and layer 3 vRouted traffic.
>
> 3 Additional Overlay/Underlay: MPLS
>
> --TFH
>
> --
> *Thomas F Herbert*
> SDN Group
> Office of Technology
> *Red Hat*
>
> _______________________________________________
> vpp-dev mailing list
> vpp-dev@lists.fd.io
> https://lists.fd.io/mailman/listinfo/vpp-dev
>
>
>
>
>
> --
> *Thomas F Herbert*
> SDN Group
> Office of Technology
> *Red Hat*
>
>
>
>
_______________________________________________
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev
