Hey Trevor, Tim,

I think there was some misunderstanding when I brought up Packet.net that made some people think we'd be replacing (or, over time, phasing out) the Pharos labs. That was not my intention, and I appreciate you two clarifying that point.
Regarding special hardware, you make a very good point, Trevor. Adding to
that, there are a lot of logistics, asset-management, and legal issues
companies would have to manage to let a 3rd party host their hardware, and
it's doubtful many would want to jump through those hoops.

As I mentioned on the TSC call, enrolling Intel's lab in LaaS would
definitely help: it would track the lab's utilization, remove the need to
ask projects to reapply for access, and presumably reduce the cost (in
terms of hours) of managing the lab.

If you have details on how the ONAP, Akraino, and StarlingX labs differ in
their setup from OPNFV, it would be great to collect those and feed them
back into the LaaS requirements. LaaS would also definitely benefit from
contributions by those involved in setting up the labs, since the offering
is for more than just OPNFV.

Regards,
Trevor Bramwell

On Fri, Dec 07, 2018 at 06:55:00PM +0000, Cooper, Trevor wrote:
> I agree with Tim, and furthermore the community Pharos labs are
> underutilized. The Intel Pharos lab alone hosts about 72 servers of
> current-generation Xeon, which would be very expensive to replicate in a
> 3rd-party hosted environment.
>
> As a performance project, VSPERF has had experience with hosting "special"
> hardware ... traffic generators, platform/processor SKUs, BIOS features,
> hardware configurations (e.g. NICs), and network setup are all important.
> This kind of setup ... new hardware with particular configurations (and
> usually issues) ... often requires some hands-on debugging, and effective
> support means good communication between the lab admins and the project.
> Throw SmartNICs and FPGA accelerators into the mix and I am skeptical that
> a large 3rd-party hosting company will deliver. In many cases this
> cutting-edge hardware will be severely constrained and hard to justify
> sending off to a 3rd-party lab.
>
> I think we are also underestimating the potential value of the Pharos labs
> (as an output of OPNFV rather than a burden of sunk cost) to the companies
> that host them ... as a way of proving integration in the open source
> ecosystem (Tim's point). If we can offer the LaaS tooling and support the
> community Pharos labs with all our Infra activities and expertise, IMO it
> would be huge value for companies' internal purposes as well as for their
> open source objectives. For sure some synergy between LFN projects is
> needed on lab infrastructure ... we currently also host an independent
> ONAP lab ... and now Akraino, StarlingX, etc. are trying to set up lab
> infrastructures.
>
> /Trevor C
>
> -----Original Message-----
> From: [email protected] <[email protected]>
> On Behalf Of Tim Irnich
> Sent: Friday, December 7, 2018 1:32 AM
> To: Trevor Bramwell <[email protected]>;
> [email protected]
> Subject: Re: [opnfv-tech-discuss] Infra Evolution: Labs
>
> We seem to be forgetting that a significant fraction of our hardware comes
> through donated Pharos labs. I think it is important to maintain and
> evolve the possibility for companies to hook up labs with hardware of
> their choosing (within the limits of the Pharos spec) to the OPNFV
> federated labs infrastructure and run the OPNFV stacks on their hardware.
>
> Particularly in light of the increasing importance of hardware
> accelerators, Pharos labs have the potential to enable OPNFV to become a
> proving ground for integration of such hardware into the broader open
> source ecosystem. This is a unique capability of OPNFV that, as far as I
> am aware, exists nowhere else in the industry.
>
> Tim
>
> On 12/7/18 1:47 AM, Trevor Bramwell wrote:
> > Hi all,
> >
> > The TSC tasked the Infra-WG with creating a proposal for evolving the
> > infrastructure of OPNFV, but given the low attendance on the Infra-WG
> > calls I thought it best to bring this discussion to the mailing list so
> > we can benefit from the wider community's experience and knowledge.
> >
> > To keep the discussion focused, I'd like to look just at labs and how we
> > can evolve the infrastructure there. I'll follow up with thoughts on CI,
> > code, and artifacts after we've reached some consensus, since there's a
> > bit more interdependence and complexity with those components. I've
> > collected some of the options here[1].
> >
> > What we need to find (and anyone feel free to jump in if I've
> > misunderstood the ask) is a scalable solution for bringing up new PODs
> > for CI that doesn't require us to host or purchase more dedicated
> > hardware.
> >
> > Of the top cloud providers (AWS, Google Compute, Azure), only AWS
> > provides a hardware-as-a-service offering[2]. The others recently
> > enabled nested virtualization on some instance types, which may work
> > for verifying virtual deployments (with performance trade-offs,
> > obviously), but doesn't help when we want to verify against networking
> > hardware.
> >
> > This is why I started the discussion with Packet.net. They appear to be
> > the only provider that not only manages hardware for you, but provides
> > an API for accessing and provisioning it.
> >
> > What I'd like to know is whether there are other options out there for
> > using/paying for hardware like this. Perhaps one of the other cloud
> > providers (IBM, HP, etc.) has a service we haven't heard of? Or are
> > there other ideas for how we can manage this need?
> >
> > Happy to hear any and all input!
> >
> > Regards,
> > Trevor Bramwell
> >
> > [1] https://etherpad.opnfv.org/p/infraevolution
> > [2] https://aws.amazon.com/ec2/instance-types/i3/
>
> --
> Dr.-Ing. Tim Irnich, Senior Program Manager Developer Engagement
> E-Mail: [email protected]
> Mobile: +49 172 2791829
> SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton,
> HRB 21284 (AG Nürnberg)
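
As a point of reference for the Packet.net option discussed above, here is a
minimal sketch of what provisioning a bare-metal device through Packet's
device API might look like. It assumes the Python requests library; the
project ID, plan, facility, and OS values below are placeholders and would
need to match whatever an actual account has available.

    # Minimal sketch (not an OPNFV workflow): create a bare-metal device
    # via the Packet.net API. PACKET_TOKEN and PROJECT_ID are placeholders;
    # plan/facility/operating_system values are illustrative.
    import os
    import requests

    PACKET_TOKEN = os.environ["PACKET_TOKEN"]   # API key from the Packet portal
    PROJECT_ID = "your-project-uuid"            # placeholder

    resp = requests.post(
        f"https://api.packet.net/projects/{PROJECT_ID}/devices",
        headers={"X-Auth-Token": PACKET_TOKEN, "Content-Type": "application/json"},
        json={
            "hostname": "opnfv-ci-pod-test",
            "plan": "c1.small.x86",          # hardware plan (server type)
            "facility": "ewr1",              # datacenter
            "operating_system": "ubuntu_18_04",
        },
    )
    resp.raise_for_status()
    device = resp.json()
    print(device["id"], device["state"])     # poll until state == "active"

Tearing the device down when a CI job finishes should just be a DELETE on
the same devices endpoint, which is the part that would make automated POD
bring-up and release from CI plausible.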
