> -----Original Message-----
> From: Flavio Percoco [mailto:fla...@redhat.com]
> Sent: Tuesday, April 12, 2016 8:32 AM
> To: OpenStack Development Mailing List (not for usage questions)
> <openstack-dev@lists.openstack.org>
> Cc: foundat...@lists.openstack.org
> Subject: Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One
> Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)
>
> On 11/04/16 18:05 +0000, Amrith Kumar wrote:
> >Adrian, thx for your detailed mail.
> >
> >Yes, I was hopeful of a silver bullet and, as we’ve discussed before (I
> >think it was Vancouver), there’s likely no silver bullet in this area.
> >After that conversation, and some further experimentation, I found that
> >even if Trove had access to a single Compute API, there were other
> >significant complications further down the road, and I didn’t pursue the
> >project further at the time.
>
> Adrian, Amrith,
>
> I've spent enough time researching this area during the last month, and
> my conclusion is pretty much the above. There's no silver bullet here,
> and I'd argue there shouldn't be one. Containers, bare metal, and VMs
> differ in such a way (feature-wise) that it would not be good, as far as
> deploying databases goes, for there to be one compute API. Containers
> allow for a different deployment architecture than VMs, and so does bare
> metal.
[amrith] That would be an interesting observation if we were developing a
unified compute service. However, the issue for a project like Trove is not
whether containers, VMs, and bare metal are the same or different, but rather
what a user is looking to get out of a deployment of a database in each of
those compute environments.

> >We will be discussing Trove and Containers in Austin [1] and I’ll try
> >and close the loop with you on this while we’re in town. I still would
> >like to come up with some way in which we can offer users the option of
> >provisioning databases as containers.
>
> As the person leading this session, I'm also looking forward to providing
> such provisioning facilities to Trove users. Let's do this.

[amrith] In addition to hearing about how you plan to solve the problem, I
would like to know what problem it is that you are planning to solve. Putting
a database in a container is a solution, not a problem (IMHO). But the scope
of this thread is broader, so I'll stop at that.

> Cheers,
> Flavio
>
> >Thanks,
> >
> >-amrith
> >
> >[1] https://etherpad.openstack.org/p/trove-newton-summit-container
> >
> >From: Adrian Otto [mailto:adrian.o...@rackspace.com]
> >Sent: Monday, April 11, 2016 12:54 PM
> >To: OpenStack Development Mailing List (not for usage questions)
> ><openstack-dev@lists.openstack.org>
> >Cc: foundat...@lists.openstack.org
> >Subject: Re: [openstack-dev] [OpenStack Foundation] [board][tc][all]
> >One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)
> >
> >Amrith,
> >
> >I respect your point of view, and agree that the idea of a common
> >compute API is attractive… until you think a bit deeper about what that
> >would mean. We seriously considered a “global” compute API at the time
> >we were first contemplating Magnum.
> >However, what we came to learn
> >through the journey of understanding the details of how such a thing
> >would be implemented was that such an API would either be (1) the lowest
> >common denominator (LCD) of all compute types, or (2) an exceedingly
> >complex interface.
> >
> >You expressed a sentiment below that trying to offer choices of VM,
> >Bare Metal (BM), and Containers for Trove instances “adds considerable
> >complexity”.
> >Roughly the same complexity would accompany the use of a comprehensive
> >compute API. I suppose you were imagining an LCD approach. If that’s
> >what you want, just use the existing Nova API, and load different
> >compute drivers on different host aggregates. A single Nova client can
> >produce VM, BM (Ironic), and Container (libvirt-lxc) instances all with
> >a common API (Nova) if it’s configured in this way. That’s what we do.
> >Flavors determine which compute type you get.
> >
> >If what you meant is that you could tap into the power of all the
> >unique characteristics of each of the various compute types (through
> >some modular extensibility framework), you’ll likely end up with
> >complexity in Trove that is comparable to integrating with the native
> >upstream APIs, along with the disadvantage of waiting for OpenStack to
> >continually catch up to the pace of change of the various upstream
> >systems on which it depends. This is a recipe for disappointment.
> >
> >We concluded that wrapping native APIs is a mistake, particularly when
> >they are sufficiently different from what the Nova API already offers.
> >Container APIs have limited similarities, so when you try to make a
> >universal interface to all of them, you end up with a really
> >complicated mess. It would be even worse if we tried to accommodate all
> >the unique aspects of BM and VM as well.
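[Concretely, the flavor-based routing Adrian describes could look roughly
like the sketch below. All aggregate, host, flavor, and image names are
illustrative, and it assumes the Nova scheduler has
AggregateInstanceExtraSpecsFilter enabled.]

```shell
# Group the hosts running each virt driver into host aggregates
# (host and aggregate names here are made up for illustration).
nova aggregate-create vm-hosts
nova aggregate-set-metadata vm-hosts compute_type=vm
nova aggregate-add-host vm-hosts compute-vm-01

nova aggregate-create ironic-hosts
nova aggregate-set-metadata ironic-hosts compute_type=baremetal
nova aggregate-add-host ironic-hosts compute-ironic-01

# Create flavors whose extra specs match the aggregate metadata, so the
# scheduler places each flavor onto the matching compute type.
nova flavor-create db.vm.medium auto 4096 40 2
nova flavor-key db.vm.medium set aggregate_instance_extra_specs:compute_type=vm

nova flavor-create db.baremetal auto 65536 500 16
nova flavor-key db.baremetal set aggregate_instance_extra_specs:compute_type=baremetal

# The same boot call now yields a VM or a bare-metal node,
# depending only on the flavor chosen.
nova boot --flavor db.vm.medium --image my-db-image trove-vm-instance
nova boot --flavor db.baremetal --image my-db-image trove-bm-instance
```

[This is a cloud-side configuration sketch, not a tested recipe; the point is
only that the client-facing API call is identical for both compute types.]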
> >Magnum’s approach is to offer
> >the upstream native APIs for the different container orchestration
> >engines (COEs), and compose Bays for them to run on that are built from
> >the compute types that OpenStack supports. We do this by using
> >different Heat orchestration templates (and conditional templates) to
> >arrange a COE on the compute type of your choice. With that said, there
> >are still gaps where not all storage or network drivers work with
> >Ironic, and there are non-trivial security hurdles to clear to safely use
> >Bays composed of libvirt-lxc instances in a multi-tenant environment.
> >
> >My suggestion to get what you want for Trove is to see if the cloud has
> >Magnum, and if it does, create a bay with the flavor type specified for
> >whatever compute type you want, and then use the native API for the COE
> >you selected for that bay. Start your instance on the COE, just like
> >you use Nova today. This way, you have low complexity in Trove, and you
> >can scale both the number of instances of your data nodes (containers)
> >and the infrastructure on which they run (Nova instances).
> >
> >Regards,
> >
> >Adrian
> >
> > On Apr 11, 2016, at 8:47 AM, Amrith Kumar <amr...@tesora.com> wrote:
> >
> > Monty, Dims,
> >
> > I read the notes and was similarly intrigued by the idea. In particular,
> > from the perspective of projects like Trove, having a common Compute API is
> > very valuable. It would allow the projects to have a single view of
> > provisioning compute, as we can today with Nova, and get the benefit of
> > bare metal through Ironic, VMs through Nova VMs, and containers through
> > nova-docker.
> >
> > With this in place, a project like Trove can offer database-as-a-service on
> > a spectrum of compute infrastructures, as any end-user would expect.
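[For illustration, Adrian's Magnum-based suggestion above might look like the
following with the Mitaka-era Magnum CLI. The image, keypair, network, and
flavor names are made up; the flavor passed to the baymodel is what selects
the underlying compute type.]

```shell
# Describe the cluster: nova_flavor decides whether the bay's nodes are
# VMs or bare metal (all names here are illustrative).
magnum baymodel-create --name db-k8s-model \
  --image-id fedora-atomic-23 \
  --keypair-id default \
  --external-network-id public \
  --flavor-id db.baremetal \
  --coe kubernetes

# Compose a bay from that model; Heat builds the underlying infrastructure.
magnum bay-create --name db-bay --baymodel db-k8s-model --node-count 3

# Once the bay reaches CREATE_COMPLETE, switch to the COE's native API
# to run the actual database containers.
magnum bay-show db-bay
kubectl create -f mysql-rc.yaml   # native Kubernetes API from here on
```

[A sketch against a Magnum-enabled cloud, not a verified deployment; the
design point is that Trove would only need the native COE API once the bay
exists.]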
> > Databases don't always make sense in VMs, and while containers are great
> > for quick and dirty prototyping, and VMs are great for much more, there
> > are databases that in production will only be meaningful on bare metal.
> >
> > Therefore, if there is a move towards offering a common API for VMs,
> > bare metal, and containers, that would be huge.
> >
> > Without such a mechanism, consuming containers in Trove adds considerable
> > complexity and leads to a very sub-optimal architecture (IMHO). FWIW, a
> > working prototype of Trove leveraging Ironic, VMs, and nova-docker to
> > provision databases is something I worked on a while ago, and I have not
> > revisited it since then (once the direction appeared to be Magnum for
> > containers).
> >
> > With all that said, I don't want to downplay the value in a
> > container-specific API. I'm merely observing that from the perspective of
> > a consumer of computing services, a common abstraction is incredibly
> > valuable.
> >
> > Thanks,
> >
> > -amrith
> >
> > -----Original Message-----
> > From: Monty Taylor [mailto:mord...@inaugust.com]
> > Sent: Monday, April 11, 2016 11:31 AM
> > To: Allison Randal <alli...@lohutok.net>; Davanum Srinivas
> > <dava...@gmail.com>; foundat...@lists.openstack.org
> > Cc: OpenStack Development Mailing List (not for usage questions)
> > <openstack-dev@lists.openstack.org>
> > Subject: Re: [openstack-dev] [OpenStack Foundation] [board][tc][all]
> > One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)
> >
> > On 04/11/2016 09:43 AM, Allison Randal wrote:
> > >
> > > On Wed, Apr 6, 2016 at 1:11 PM, Davanum Srinivas
> > > <dava...@gmail.com> wrote:
> > >
> > > > Reading unofficial notes [1], i found one topic very interesting:
> > > > One Platform – How do we truly support containers and bare metal
> > > > under a common API with VMs? (Ironic, Nova, adjacent communities e.g.
> > > > Kubernetes, Apache Mesos etc.)
> > > >
> > > > Anyone present at the meeting, please expand on those few notes on
> > > > the etherpad? And how, if at all, is this feedback getting back to
> > > > the projects?
> > >
> > > It was really two separate conversations that got conflated in the
> > > summary. One conversation was just being supportive of bare metal,
> > > VMs, and containers within the OpenStack umbrella. The other
> > > conversation started with Monty talking about his work on shade, and
> > > how it wouldn't exist if more APIs were focused on the way users
> > > consume the APIs, and less on being an expression of the
> > > implementation details of each project.
> > >
> > > OpenStackClient was mentioned as a unified CLI for OpenStack, focused
> > > more on the way users consume the CLI. (OpenStackSDK wasn't
> > > mentioned, but falls in the same general category of work.)
> > >
> > > i.e. There wasn't anything new in the conversation; it was more a
> > > matter of the developers/TC members on the board sharing information
> > > about work that's already happening.
> >
> > I agree with that - but would like to clarify the 'bare metal, VMs and
> > containers' part a bit. (And in fact, I was concerned in the meeting
> > that the messaging around this would be confusing, because 'supporting
> > bare metal' and 'supporting containers' mean two different things, but
> > we use one phrase to talk about both.)
> >
> > It's abundantly clear at the strategic level that having OpenStack be
> > able to provide both VMs and bare metal as two different sorts of
> > resources (ostensibly but not prescriptively via Nova) is one of our
> > advantages. We wanted to underscore how important it is to be able to
> > do that, and to underscore it so that it's really clear how important
> > it is any time the "but cloud should just be VMs" sentiment arises.
> > The way we discussed "supporting containers" was quite different, and
> > was not about Nova providing containers. Rather, it was about reaching
> > out to our friends in other communities and working with them on making
> > OpenStack the best place to run things like Kubernetes or Docker Swarm.
> > Those are systems that ultimately need somewhere to run, and it seems
> > that good integration (like kuryr with libnetwork) can provide a really
> > strong story. I think pretty much everyone agrees that there is not
> > much value to us or the world in competing with Kubernetes or Docker.
> >
> > So, we do want to be supportive of bare metal and containers - but the
> > specific _WAY_ we want to be supportive of those things is different
> > for each one.
> >
> > Monty
>
> --
> @flaper87
> Flavio Percoco

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev