+1 from me, although I thought heat was supposed to be this thing?
Maybe there should be a 'warm' project or something ;)
Or we could call it 'bbs' for 'building block service' (obviously not
bulletin board system): ask said service to build a set of blocks into
well-defined structures and let it figure out how to make that happen.
This most definitely requires cross-project agreement, though, so I'd
hope we can reach that somehow, before creating a halfway-done new
orchestration thing that is halfway integrated with a bunch of other
APIs that do one quarter of the work in ten different ways.
Duncan Thomas wrote:
I think there's a place for yet another service breakout from nova:
some sort of light-weight platform orchestration piece, nothing as
complicated or complete as heat, nothing that touches the inside of a
VM, just something that can talk to cinder, nova and neutron (plus I
guess ironic and whatever the container thing is called) and work
through long-running, cross-project tasks. I'd probably expect it to
provide a task-style interface, e.g. a boot-from-new-volume call returns
a request-id that can then be polled for detailed status.
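To make the task-style interface concrete, here is a minimal sketch of what submitting a long-running task and polling its request-id might look like. The service name, method names, and payloads are all invented for illustration; nothing like this exists today, and a real implementation would drive the steps asynchronously against cinder, nova and neutron.

```python
import time
import uuid


class TaskService:
    """Toy in-memory stand-in for the proposed orchestration service."""

    def __init__(self):
        self._tasks = {}

    def boot_from_new_volume(self, image, size_gb, flavor):
        # Returns immediately with a request-id; the actual work
        # (create volume, boot instance, attach) would run async.
        request_id = str(uuid.uuid4())
        self._tasks[request_id] = {
            "state": "QUEUED",
            "steps": ["create_volume", "boot_instance", "attach"],
        }
        return request_id

    def get_task(self, request_id):
        # Poll endpoint: here we fake one state transition per call.
        task = self._tasks[request_id]
        order = ["QUEUED", "RUNNING", "DONE"]
        idx = order.index(task["state"])
        if idx < len(order) - 1:
            task["state"] = order[idx + 1]
        return task


svc = TaskService()
rid = svc.boot_from_new_volume(image="cirros", size_gb=10, flavor="m1.small")
while svc.get_task(rid)["state"] != "DONE":
    time.sleep(0)  # a real client would back off between polls
print(svc.get_task(rid)["state"])  # DONE
```

The point of the shape, rather than the toy internals, is that the caller gets a single id for a cross-project operation and polls one place for detailed status instead of stitching together calls to three services.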
The existing nova API for this (and any other nova APIs where this makes
sense) can then become a proxy for the new service, so that tenants are
not affected. The nova APIs can then be deprecated slowly over time.
Anybody else think this could be useful?
On 25 September 2015 at 17:12, Andrew Laski <and...@lascii.com> wrote:
On 09/24/15 at 03:13pm, James Penick wrote:
At the risk of getting too off-topic: I think there's an alternate
solution to doing this in Nova or on the client side. I think we're
missing some sort of OpenStack API and service that can handle this.
Nova is a low-level infrastructure API and service; it is not designed
to handle these orchestrations. I haven't checked in on Heat in a
while, but perhaps this is a role that it could fill.
I think that too many people consider Nova to be *the* OpenStack API
when considering instances/volumes/networking/images, and that's not
something I would like to see continue. Or at the very least I would
like to see a split between the orchestration/proxy pieces and the
"manage my VM/container/baremetal" bits.
(new thread)
You've hit on one of my biggest issues right now. As far as many
deployers and consumers are concerned (and definitely what I tell my
users within Yahoo): the value of an OpenStack value-stream (compute,
network, storage) is to provide a single consistent API for
abstracting and managing those infrastructure resources.
Take networking: I can manage firewalls, switches, IP selection, SDN,
etc. through Neutron. But for compute, if I want a VM I go through
Nova, for baremetal I can -mostly- go through Nova, and for containers
I would talk to Magnum or use something like the nova docker driver.
This means that, by default, Nova -is- the closest thing to a
top-level abstraction layer for compute. But if that is explicitly
against Nova's charter, and Nova isn't going to be the top-level
abstraction for all things compute, then something else needs to fill
that space. When that happens, all things common to compute
provisioning should come out of Nova and move into that new API:
availability zones, quota, etc.
I do think Nova is the top level abstraction layer for compute. My
issue is when Nova is asked to manage other resources. There's no
API call to tell Cinder "create a volume and attach it to this
instance, and create that instance if it doesn't exist." And I'm
not sure why the reverse isn't true.
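The missing call described above ("create a volume and attach it to this instance, and create that instance if it doesn't exist") can be sketched as a single idempotent operation. Everything here is hypothetical: neither Cinder nor Nova exposes such an API, and the fake clients exist only to make the shape of the composite call runnable.

```python
class FakeNova:
    """Hypothetical stand-in for a compute client."""

    def __init__(self):
        self.servers = {}

    def find_server(self, name):
        return self.servers.get(name)

    def create_server(self, name):
        self.servers[name] = {"name": name}
        return self.servers[name]


class FakeCinder:
    """Hypothetical stand-in for a volume client."""

    def __init__(self):
        self.volumes = []

    def create_volume(self, size_gb):
        vol = {"size_gb": size_gb, "attached_to": None}
        self.volumes.append(vol)
        return vol

    def attach(self, volume, server):
        volume["attached_to"] = server["name"]


def ensure_volume_attached(nova, cinder, server_name, volume_gb):
    """Create the server if absent, then create and attach a volume."""
    server = nova.find_server(server_name) or nova.create_server(server_name)
    volume = cinder.create_volume(size_gb=volume_gb)
    cinder.attach(volume, server)
    return server, volume


nova, cinder = FakeNova(), FakeCinder()
server, volume = ensure_volume_attached(nova, cinder, "web-1", 20)
print(volume["attached_to"])  # web-1
```

Today a client has to perform these steps itself and handle partial failure between them; the argument in the thread is about which service, if any, should own that composite operation.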
I want Nova to be the absolute best API for managing compute
resources. It's when someone is managing compute and volumes and
networks together that I don't feel that Nova is the best place for
that. Most importantly, right now it seems that not everyone is on
the same page about this, and I think it would be beneficial to come
together and figure out what sorts of workloads the Nova API is
intended to support.
-James
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
--
Duncan Thomas