Hi All,

Responses in line below as well.

Cheers,
Lincoln

On Fri, Dec 7, 2018 at 12:33 PM Alec via Lists.Opnfv.Org <ahothan=[email protected]> wrote:

> Hi Trevor,
>
> Inline…
>
> *From: *<[email protected]> on behalf of Trevor Bramwell
> <[email protected]>
> *Date: *Thursday, December 6, 2018 at 4:47 PM
> *To: *"[email protected]" <
> [email protected]>
> *Subject: *[SUSPICIOUS] [opnfv-tech-discuss] Infra Evolution: Labs
>
>
>
> Hi all,
>
>
>
> The TSC tasked the Infra-WG to create a proposal for evolving the
> infrastructure of OPNFV, but given the low attendance on the Infra-WG
> calls, I thought it best to bring this discussion to the mailing list so
> we can benefit from the wider community's experience and knowledge.
>
>
>
> To keep the discussion focused, I'd like to just look at Labs and how we
> can evolve the infrastructure there. I'll follow up with thoughts on CI,
> code, and artifacts after we've reached some consensus, since there's a
> bit more interdependence and complexity with those components. I've
> collected some of the options here[1].
>
>
>
> What we need to find (and anyone feel free to jump in if I've
> misunderstood the ask) is a scalable solution for bringing up new PODs
> for CI that doesn't require us to host or purchase more dedicated
> hardware.
>
>
>
>
>
> Now, of the top cloud providers (AWS, Google Compute, Azure), only AWS
> provides a hardware-as-a-service offering[2]. The others recently enabled
> nested virtualization on some instance types, which may work for us when
> verifying virtual deployments (with obvious performance trade-offs), but
> doesn't help when we want to verify against networking hardware.
>
>
>
> That’s right.
>
>
>
> This is why I started the discussion with Packet.net. They appear to be
> the only provider that not only manages hardware for you, but provides
> an API for accessing and provisioning it.
>
>
>
> I agree with you regarding the convenience of a service like packet.net.
> They are showing that it is possible to have automated bare metal
> deployments that can be as easy to operate as fully virtualized ones.
>
> Packet.net provides some very basic APIs to control the switches that run
> the virtual network attached to their blades, and it is currently
> considering extensions to its APIs to support more complex bare metal
> deployments (e.g. OpenStack or k8s), based on input from a CNCF demo
> project which uses NFVbench.
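
LYL > For anyone who hasn't tried it, the sketch below shows roughly what
provisioning a server through the Packet API looks like (untested here; the
plan, facility, and OS slugs are placeholders, and you would need your own
project ID and token):

import os
import requests  # pip install requests

API = "https://api.packet.net"
TOKEN = os.environ["PACKET_AUTH_TOKEN"]      # read/write API token
PROJECT = os.environ["PACKET_PROJECT_ID"]    # target project UUID

def create_device(hostname, plan="c1.small.x86", facility="ewr1",
                  operating_system="ubuntu_16_04"):
    """Request a new bare metal server and return its device record."""
    resp = requests.post(
        "{}/projects/{}/devices".format(API, PROJECT),
        headers={"X-Auth-Token": TOKEN},
        json={"hostname": hostname,
              "plan": plan,
              "facility": facility,
              "operating_system": operating_system})
    resp.raise_for_status()
    return resp.json()  # includes the device id, state, IP assignments, ...

device = create_device("opnfv-ci-node-1")
print(device["id"], device["state"])

The point is less the exact call and more that the whole lifecycle
(provision, reinstall, release) is driven through one HTTP API, which is
exactly what we are missing on the switch side of our pods.
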
>
>
>
> It would be great if OPNFV labs could have a unified programmable L2 network
> that works across multiple pods, similar to what packet.net provides: a way
> to program the switches associated with a pod deployment without having to
> open a ticket and wait for someone to configure the switches manually, which
> is just too slow.
>
>
LYL > This is basically what the LaaS project is providing in beta right
now, where PTLs can book multiple bare metal servers and configure the L2
network between those servers.  Each server has multiple NICs / ports,
providing for a fair amount of flexibility.
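
To make the "configure the L2 network" part concrete, the call the dashboard
makes on the user's behalf looks roughly like the sketch below. The endpoint
and field names are illustrative only; nothing here is a published LaaS API
yet:

import requests

LAAS_API = "https://laas.example.org/api"  # hypothetical lab endpoint

def attach_vlan(booking_id, vlan_id, ports, token):
    """Ask the lab to put the listed host ports on the given VLAN.

    Ports are named from the booked host's point of view, e.g.
    ["node1:eth2", "node2:eth2"]; the lab maps them to real switch
    ports internally.
    """
    resp = requests.post(
        "{}/bookings/{}/networks".format(LAAS_API, booking_id),
        headers={"Authorization": "Token {}".format(token)},
        json={"vlan": vlan_id, "ports": ports})
    resp.raise_for_status()
    return resp.json()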



>
>
>
>
> What I'd like to know is if there are other options out there for
> using/paying for hardware like this? Perhaps one of the other cloud
> providers (IBM, HP, etc.) has a service we haven't heard of?
> Or are there other ideas of how we can manage this need?
>
> The lack of consistency across OPNFV pods (different switches, different
> ways of connecting them to servers, different NICs…) makes the development
> of a unified programmable L2 service very difficult and likely too costly.
>
> I do not think any vendor today provides a service similar to packet.net
> that would work on any variety of hardware; even packet.net can only
> achieve this by streamlining their hardware.
>
>
>

LYL > Completely agree on the huge variety of hardware (e.g. different
switches) and their different configuration interfaces (e.g. CLI, OpenFlow,
VXLAN, other).  What we started with in the LaaS project was to construct
the information model for the "Pod" that gets built from the dashboard when
the user creates the pod.  From there, it goes through a couple of
translations, where the networking side becomes more and more specific to
the actual lab and hardware.  Think of that in terms of selecting available
VLANs, switch ports, etc.

What could be reused here is all of the information models used by the
dashboard, and possibly some of the lab translation code.  This could help
simplify the work for an individual hosting a lab down to the creation of
"drivers" for their specific physical layer topology and their specific
switching hardware (e.g. a CLI type driver, OpenFlow configuration, etc.),
as sketched below.
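
All of the class and method names in the sketch below are invented for
illustration; it only shows the model-to-driver split, not the actual
dashboard code:

from abc import ABC, abstractmethod
from dataclasses import dataclass, field
from typing import List

@dataclass
class Interface:
    name: str              # logical name from the dashboard, e.g. "data0"
    vlan: int = 0          # assigned from the lab's VLAN pool
    switch_port: str = ""  # filled in by the lab translation step

@dataclass
class Host:
    name: str
    interfaces: List[Interface] = field(default_factory=list)

@dataclass
class Pod:
    name: str
    hosts: List[Host] = field(default_factory=list)

class SwitchDriver(ABC):
    """Per-lab driver: turns the abstract pod model into switch config."""
    @abstractmethod
    def apply(self, pod: Pod) -> None:
        ...

class CliSwitchDriver(SwitchDriver):
    """Example driver emitting CLI-style commands for one switch family."""
    def apply(self, pod: Pod) -> None:
        for host in pod.hosts:
            for iface in host.interfaces:
                # A real driver would push this over SSH or NETCONF
                # instead of printing it.
                print("interface {}".format(iface.switch_port))
                print("  switchport access vlan {}".format(iface.vlan))

An OpenFlow- or VXLAN-based lab would provide its own driver behind the same
interface, which is where the per-lab variation gets contained.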



> Short of using a service like packet.net (I guess ruled out for cost
> reasons), the only practical option is to simplify the underpinning of the
> pods that make up the OPNFV labs: standardize on very few switch models,
> standardize on how the switches are connected to the servers, then adjust
> the deployers to install on that HW base. This is basically trying to
> replicate what packet.net is doing. OPNFV could start this effort on one
> specific pod, picking the one that is the most versatile and consistently
> configured/wired (I think the Intel lab in Portland would be a good
> candidate), then hope to support additional pods with different HW
> variations later. It is clearly always good to support a variety of HW
> platforms, but it comes at a cost.
>

LYL > I think limiting the selection of hardware will, in the long run,
limit the value of the systems / labs, because it could eliminate things
like hardware acceleration, etc.  It would basically reduce everything to
the lowest common denominator.



>
>
> I’m not too familiar with the details of the PDF project, but I think this
> goes beyond what PDF covers.
>

LYL > Yes, our information model had to include more than the PDF (the Pod
Descriptor File), but you can generate the PDF directly from that
information, which is also very useful, especially if all of the installers
could use that PDF in their process (they don't all use it today).
Similarly, additional standardization of the other information required for
a deployment, i.e. the scenarios / IDF, will also make that process easier
to support between different labs / "pods".



>
>
> I would also like to point out that such a programmable L2 service could be
> used by any cloud OS deployment (OpenStack, k8s, or even a plain vswitch),
> any deployer, and any vswitch technology; it would also work for SR-IOV and
> would benefit any project that involves any form of data plane: VSPERF,
> NFVbench…
>
> The lack of a programmable pod-level L2 infrastructure is one of the most
> glaring gaps in the OPNFV infra from my point of view.
>
> An OPNFV reference SW platform would be greatly enhanced by clearly
> specifying the underlying L2 infrastructure and being able to configure the
> L2 plane from CI.
>
>
>
LYL > As long as we're not trying to allow the cloud OS to make changes to
the L2 hardware, which might be supporting multiple cloud OS instances
running in parallel.  Otherwise, you run into a permissions / ACL type
nightmare within the lab.  We are a ways away from that capability, in terms
of a standardized approach that could work between labs, with multiple cloud
OS instances installed in parallel, etc.
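
The safer pattern for CI would be for the deploy job to claim its networks
through the lab API, under the booking's own credentials, before the
installer runs, rather than letting the installed cloud OS touch the
switches afterwards. A hypothetical pre-deploy step (same invented endpoint
as the earlier sketch; nothing here exists today):

import os
import requests

LAAS_API = "https://laas.example.org/api"  # hypothetical lab endpoint

def prepare_pod_for_deploy(booking_id):
    """CI step run before the installer: request the data-plane VLAN
    through the lab API so the cloud OS never needs switch credentials."""
    token = os.environ["LAAS_BOOKING_TOKEN"]
    resp = requests.post(
        "{}/bookings/{}/networks".format(LAAS_API, booking_id),
        headers={"Authorization": "Token {}".format(token)},
        json={"vlan": 100, "ports": ["node1:eth2", "node2:eth2"]})
    resp.raise_for_status()
    # Only after the lab confirms the change does CI hand the pod (and its
    # PDF / IDF) over to the installer.
    return resp.json()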



> Regards,
>
>
>
>    Alec
>
>
>
>
>
> Happy to hear any and all input!
>
>
>
> Regards,
>
> Trevor Bramwell
>
>
>
> [1] https://etherpad.opnfv.org/p/infraevolution
>
> [2] https://aws.amazon.com/ec2/instance-types/i3/
>
>


-- 
*******************************************************************************
*Lincoln Lavoie*
Senior Engineer, Broadband Technologies

www.iol.unh.edu
21 Madbury Rd., Ste. 100, Durham, NH 03824
Mobile: +1-603-674-2755
[email protected]
<http://www.facebook.com/UNHIOL#>   <https://twitter.com/#!/UNH_IOL>
<http://www.linkedin.com/company/unh-interoperability-lab>

Ars sine scientia nihil est! -- Art without science is nothing.
Scientia sine ars est vacua! -- Science without art is empty.

BBF Gfast Certified Product List <https://bbf-gfast-cert.iol.unh.edu/>
*******************************************************************************