On 01/08/2014 06:57 PM, Prasad Vellanki wrote:
Clint & Steve
One scenario we are trying to explore is whether and how Heat software-config enables deployment of third-party images shipped as virtual appliances that provide network, security or acceleration capabilities. In some cases the vendor does not allow rebuilding the image and/or the image lacks cloud-init; sometimes modifying the image also runs into licensing issues. Bootstrapping in such situations is generally done via a REST API or SSH once the appliance boots up, and further configuration happens from there.

We are looking at how to automate deployment of such service functions using the new configuration and deployment model in Heat, which we really like.

One option is for software-config to provide a way in Heat to trigger bootstrapping from outside the instance rather than inside (as cloud-init does), driving the appliance over SSH and/or its REST API.

Another option is an external agent that recognizes when this kind of service comes up and then informs Heat to move to the next state and configure the deployed resource. This is more like a proxy model.

thanks
prasadv


Prasad,

Just to clarify, you want Heat to facilitate bootstrapping a black box (a third-party virtual appliance) that has no consistent bootstrapping interface (such as cloud-init)? The solution you propose, I think, is to have Heat notify an out-of-VM bootstrapping system (for example one driving SSH) to connect to the black box and execute the bootstrapping.

If so, I see a problem with this approach:
Heat requires commands to be run inside the virtual machine in order to know when the virtual machine is done bootstrapping, for whatever definition of bootstrapping you use (OS booted, vs. OS loaded and ready to provide service).

This could be handled by modifying the init scripts to signal the end of booting, but one constraint you mentioned was that images may not be modified.

Another approach that could be used today is to repeatedly connect to the SSH port of the VM until you receive a connection. The problem with this approach is: who loads the SSH keys into the image? SSH key injection is currently handled by the bootstrapping process. This is a chicken-and-egg problem and a fundamental reason why bootstrapping should be done inside the virtual machine, driven by Heat. Assuming this model were used, a notification that the booting process has completed is only an optimization to indicate when the SSH harassment should begin :)

One possible workaround, mentioned in your use case, is for the virtual appliance to contact a REST server (to obtain bootstrapping information, possibly including SSH keys). Since I assume these virtual appliances come from different vendors, this would result in a proliferation of REST bootstrapping servers, which is bad for operators as each server has to be secure-ified, scale-ified, HA-ified, and documented.

The path of least resistance in this case seems to be to influence the appliance vendors to adopt cloud-init, rather than doing unnatural acts inside the infrastructure to support appliance vendors who are unwilling to conform to Open Source choices made by a broad community of technology experts (and here I mean not just the OpenStack community; nearly every cloud vendor has made cloud-init central to their solutions). Since the appliance vendors will add cloud-init to their images sooner or later due to operator / customer pressure, it is also the right choice today.

It is as simple as adding one package to the built image. In exchange, from a bootstrapping perspective, their customers get a simple, secure, reliable, scalable, highly available experience on OpenStack and likely other IaaS platforms.
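
For example, once cloud-init is in the image, a customer can hand the appliance a standard cloud-config payload as user data. A minimal sketch (the file path, key and command names below are placeholders, not vendor specifics):

    #cloud-config
    # hypothetical bootstrap data an operator would pass at boot time
    ssh_authorized_keys:
      - ssh-rsa AAAA...operator-key
    write_files:
      - path: /etc/appliance/bootstrap.conf
        content: |
          # vendor-specific activation/licensing settings would go here
    runcmd:
      - [ sh, -c, "/usr/bin/appliance-activate --config /etc/appliance/bootstrap.conf" ]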

Regards
-steve


On Tue, Jan 7, 2014 at 11:40 AM, Clint Byrum <cl...@fewbar.com> wrote:

    I'd say it isn't so much cloud-init that you need, but "some kind
    of bootstrapper". The point of hot-software-config is to help with
    in-instance orchestration. That's not going to happen without some way
    to push the desired configuration into the instance.

    Excerpts from Susaant Kondapaneni's message of 2014-01-07 11:16:16 -0800:
    > We work with images provided by vendors over which we do not always have
    > control. So we are considering the cases where the vendor image does not
    > come installed with cloud-init. Is there a way to support heat software
    > config in such scenarios?
    >
    > Thanks
    > Susaant
    >
    > On Mon, Jan 6, 2014 at 4:47 PM, Steve Baker <sba...@redhat.com> wrote:
    >
    > > On 07/01/14 06:25, Susaant Kondapaneni wrote:
    > >
    > > Hi Steve,
    > >
    > > I am trying to understand the software config implementation. Can you
    > > clarify the following:
    > >
    > > i. To use software config and deploy in a template, the instance resource
    > > MUST always be accompanied by user_data. The user_data should specify how
    > > to bootstrap the CM tool and signal it. Is that correct?
    > >
    > > Yes, currently the user_data contains cfn-init formatted metadata which
    > > tells os-collect-config how to poll for config changes. What happens when
    > > new config is fetched depends on the os-apply-config templates and
    > > os-refresh-config scripts which are already on that image (or set up with
    > > cloud-init).
    > >
    > > ii. Supposing we were to use images which do not have cloud-init packaged
    > > in them (and a custom CM tool that won't require bootstrapping on the
    > > instance itself), can we still use software config and deploy resources
    > > to deploy software on such instances?
    > >
    > > Currently os-collect-config is more of a requirement than cloud-init, but
    > > as Clint said cloud-init does a good job of boot config, so you'll need
    > > to elaborate on why you don't want to use it.
    > >
    > > iii. If ii. were possible, who would signal the deployment resource to
    > > indicate that the instance is ready for the deployment?
    > >
    > > os-collect-config polls for the deployment data, and triggers the
    > > resulting deployment/config changes. One day this may be performed by a
    > > different agent like the unified agent that has been discussed. Currently
    > > os-collect-config polls via a heat-api-cfn metadata call. This too may be
    > > done in any number of ways in the future, such as messaging or
    > > long-polling.
    > >
    > > So you *could* consume the supplied user_data to know what to poll for
    > > subsequent config changes without cloud-init or os-collect-config, but
    > > you would have to describe what you're doing in detail for us to know if
    > > that sounds like a good idea.
    > >
    > > Thanks
    > > Susaant
    > >
    > >
    > > On Fri, Dec 13, 2013 at 3:46 PM, Steve Baker <sba...@redhat.com> wrote:
    > >
    > >> I've been working on a POC in heat for resources which perform software
    > >> configuration, with the aim of implementing this spec:
    > >> https://wiki.openstack.org/wiki/Heat/Blueprints/hot-software-config-spec
    > >>
    > >> The code to date is here:
    > >> https://review.openstack.org/#/q/topic:bp/hot-software-config,n,z
    > >>
    > >> What would be helpful now is reviews which give the architectural
    > >> approach enough of a blessing to justify fleshing this POC out into a
    > >> ready-to-merge changeset.
    > >>
    > >> Currently it is possible to:
    > >> - create templates containing OS::Heat::SoftwareConfig and
    > >>   OS::Heat::SoftwareDeployment resources
    > >> - deploy configs to OS::Nova::Server, where the deployment resource
    > >>   remains in an IN_PROGRESS state until it is signalled with the output
    > >>   values
    > >> - write configs which execute shell scripts and report back with output
    > >>   values that other resources can have access to.
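
    For illustration, a minimal sketch of what such a template might look like.
    The property names used here (group, config, inputs, outputs, server,
    input_values, user_data_format) are assumptions based on the resource
    descriptions and docs drafts further down in this mail, and may differ in
    the actual POC code:

        heat_template_version: 2013-05-23

        resources:
          hello_config:
            type: OS::Heat::SoftwareConfig
            properties:
              group: Heat::Shell        # handled by the shell hook described below
              inputs:
                - name: greeting
              outputs:
                - name: result
              config: |
                #!/bin/sh
                # How inputs arrive and outputs are reported back is up to the
                # group's hook on the image; this body is just a placeholder.
                echo "running the hello config"

          hello_deployment:
            type: OS::Heat::SoftwareDeployment
            properties:
              config: {get_resource: hello_config}
              server: {get_resource: server}
              input_values:
                greeting: hello world

          server:
            type: OS::Nova::Server
            properties:
              image: an-image-with-os-collect-config
              flavor: m1.small
              user_data_format: SOFTWARE_CONFIG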
    > >>
    > >> What follows is an overview of the architecture and implementation to
    > >> help with your reviews.
    > >>
    > >> REST API
    > >> ========
    > >> Like many heat resources, OS::Heat::SoftwareConfig and
    > >> OS::Heat::SoftwareDeployment are backed by "real" resources that are
    > >> invoked via a REST API. However in this case, the API that is called is
    > >> heat itself.
    > >>
    > >> The REST API for these resources really just acts as structured storage
    > >> for configs and deployments, and the entities are managed via the REST
    > >> paths /{tenant_id}/software_configs and /{tenant_id}/software_deployments:
    > >>
    > >> REST API:
    > >> <https://review.openstack.org/#/c/58878/7/heat/api/openstack/v1/__init__.py>
    > >> https://review.openstack.org/#/c/58878/
    > >> RPC layer of REST API:
    > >> https://review.openstack.org/#/c/58877/
    > >> DB layer of REST API:
    > >> https://review.openstack.org/#/c/58876
    > >> heatclient lib access to REST API:
    > >> https://review.openstack.org/#/c/58885
    > >>
    > >> This data could be stored in a less structured datastore like swift, but
    > >> this API has a couple of important implementation details which I think
    > >> justify it existing:
    > >> - SoftwareConfig resources are immutable once created. There is no update
    > >>   API to modify an existing config. This gives confidence that a config
    > >>   can have a long lifecycle without changing, and certainty about what
    > >>   exactly is deployed on a server with a given config.
    > >> - Fetching all the deployments and configs for a given server is an
    > >>   operation done repeatedly throughout the lifecycle of the stack, so it
    > >>   is optimized to be possible in a single call: the deployments index API,
    > >>   /{tenant_id}/software_deployments?server_id=<server_id>. The resulting
    > >>   list of deployments includes their associated config data[1].
    > >>
    > >> OS::Heat::SoftwareConfig resource
    > >> =================================
    > >> OS::Heat::SoftwareConfig can be used directly in a template, but it may
    > >> end up being used more frequently in a resource provider template which
    > >> provides a resource aimed at a particular configuration management tool.
    > >>
    > >> http://docs-draft.openstack.org/79/58879/7/check/gate-heat-docs/911a250/doc/build/html/template_guide/openstack.html#OS::Heat::SoftwareConfig
    > >>
    > >> The contents of the config property will depend on the CM tool being
    > >> used, but at least one value in the config map will be the actual script
    > >> that the CM tool invokes. An inputs and outputs schema is also defined
    > >> here. The group property is used when the deployments data is actually
    > >> delivered to the server (more on that later).
    > >>
    > >> Since a config is immutable, any changes to an OS::Heat::SoftwareConfig
    > >> on stack update result in replacement.
    > >>
    > >> OS::Heat::SoftwareDeployment resource
    > >> =====================================
    > >> OS::Heat::SoftwareDeployment joins an OS::Heat::SoftwareConfig resource
    > >> with an OS::Nova::Server resource. It allows server-specific input values
    > >> to be specified that map to the OS::Heat::SoftwareConfig inputs schema.
    > >> Output values that are signalled to the deployment resource are exposed
    > >> as resource attributes, using the names specified in the outputs schema.
    > >> The OS::Heat::SoftwareDeployment resource remains in an IN_PROGRESS state
    > >> until it receives a signal (containing any outputs) from the server.
    > >>
    > >> http://docs-draft.openstack.org/79/58879/7/check/gate-heat-docs/911a250/doc/build/html/template_guide/openstack.html#OS::Heat::SoftwareDeployment
    > >>
    > >> A deployment has its own actions and statuses that are specific to what
    > >> a deployment does, and OS::Heat::SoftwareDeployment maps these to heat
    > >> resource actions and statuses:
    > >> actions:
    > >> DEPLOY -> CREATE
    > >> REDEPLOY -> UPDATE
    > >> UNDEPLOY -> DELETE
    > >>
    > >> status (these could use some bikeshedding):
    > >> WAITING -> IN_PROGRESS
    > >> RECEIVED -> COMPLETE
    > >> FAILED -> FAILED
    > >>
    > >> In the config outputs schema there is a special flag, error_output. If
    > >> the signal response contains any value for any of these error_output
    > >> outputs then the deployment resource is put into the FAILED state.
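
    For example, an outputs schema along these lines would mark one output as
    an error output (the flag name comes from the description above; the exact
    syntax is an assumption):

        outputs:
          - name: result
          - name: failure_reason
            error_output: true   # any value signalled for this output puts
                                 # the deployment into the FAILED state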
    > >>
    > >> The SoftwareDeployment class subclasses SignalResponder, which means
    > >> that a SoftwareDeployment creates an associated user and ec2 keypair.
    > >> Since the SoftwareDeployment needs to use the resource_id for the
    > >> deployment resource uuid, the user_id needs to be stored in resource data
    > >> instead. This non-wip change enables that:
    > >> https://review.openstack.org/#/c/61902/
    > >>
    > >> During create, the deployment REST API is polled until the status goes
    > >> from WAITING to RECEIVED. When handle_signal is called, the deployment is
    > >> updated via the REST API to set the status to RECEIVED (or FAILED), along
    > >> with any output values that were received.
    > >>
    > >> One alarming consequence of having a deployments API is that any tenant
    > >> user can create a deployment for any heat-created nova server and that
    > >> software will be deployed to that server, which is, um, powerful.
    > >>
    > >> There will need to be a deployment policy (probably an OS::Nova::Server
    > >> property) which limits the scope of what deployments are allowed on that
    > >> server. This could default to deployments in the same stack, but could
    > >> still allow deployments from anywhere.
    > >>
    > >> OS::Nova::Server support
    > >> ========================
    > >> https://review.openstack.org/#/c/58880
    > >> A new user_data_format=SOFTWARE_CONFIG is currently used to denote that
    > >> this server is configured via software config deployments. Like
    > >> user_data_format=HEAT_CFNTOOLS, nova_utils.build_userdata is used to
    > >> build the cloud-init parts required to support software config. However,
    > >> like user_data_format=RAW, anything specified in user_data will be parsed
    > >> as cloud-init data. If user_data is multi-part data then the parts will
    > >> be appended to the parts created in nova_utils.build_userdata.
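
    So a server that combines software-config deployments with its own
    cloud-init payload might be declared roughly like this (a sketch; the image
    and package names are placeholders):

        server:
          type: OS::Nova::Server
          properties:
            image: an-image-with-os-collect-config
            flavor: m1.small
            user_data_format: SOFTWARE_CONFIG
            # parsed as cloud-init data and appended to the parts heat builds
            user_data: |
              #cloud-config
              packages:
                - my-cm-tool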
    > >>
    > >> The agent used currently is os-collect-config. This is typically
    > >> configured to poll for metadata from a particular heat resource via the
    > >> CFN API using the configured ec2 keypair. In the current implementation
    > >> the resource which is polled is the OS::Nova::Server itself, since this
    > >> is the only resource known to exist at server boot time (deployment
    > >> resources depend on server resources, so have not been created yet). The
    > >> ec2 keypair comes from a user created implicitly with the server (similar
    > >> to SignalResponder resources). This means the template author doesn't
    > >> need to include User/AccessKey/AccessPolicy resources in their templates
    > >> just to enable os-collect-config metadata polling.
    > >>
    > >> Until now, polling the metadata for a resource just returns the metadata
    > >> which has been stored in the stack resource database. This implementation
    > >> changes metadata polling to actually query the deployments API to return
    > >> the latest deployments data. This means deployment state can be stored in
    > >> one place, and there is no need to keep various metadata stores updated
    > >> with any changed state.
    > >>
    > >> An actual template
    > >> ==================
    > >> http://paste.openstack.org/show/54988/
    > >> This template contains:
    > >> - a config resource
    > >> - 2 deployments which deploy that config with 2 different sets of inputs
    > >> - stack outputs which output the results of the deployments
    > >> - a server resource
    > >> - an os-refresh-config script delivered via cloud-config[2] which
    > >>   executes config scripts with deployment inputs and signals outputs to
    > >>   the provided webhook.
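
    The deployments-and-outputs part of such a template would look roughly like
    the following sketch (resource, input and output names here are placeholders;
    see the paste above for the real thing):

        # under resources:
          deployment_1:
            type: OS::Heat::SoftwareDeployment
            properties:
              config: {get_resource: the_config}
              server: {get_resource: the_server}
              input_values: {foo: one}

          deployment_2:
            type: OS::Heat::SoftwareDeployment
            properties:
              config: {get_resource: the_config}
              server: {get_resource: the_server}
              input_values: {foo: two}

        outputs:
          result_1:
            value: {get_attr: [deployment_1, result]}
          result_2:
            value: {get_attr: [deployment_2, result]}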
    > >>
    > >> /opt/stack/os-config-refresh/configure.d/55-heat-config-bash is a hook
    > >> specific to performing configuration via shell scripts, and it only acts
    > >> on software config which has group=Heat::Shell. Each configuration
    > >> management tool will have its own hook, and will act on its own group
    > >> namespace. Each configuration management tool will also have its own way
    > >> of passing inputs and outputs. The hook's job is to invoke the CM tool
    > >> with the given inputs and script, then extract the outputs and signal
    > >> heat.
    > >>
    > >> The server needs to have the CM tool and the hook already installed,
    > >> either by building a golden image or by using cloud-config during boot.
    > >>
    > >> Next steps
    > >> ==========
    > >> There is a lot left to do and I'd like to spread the development load.
    > >> What happens next entirely depends on feedback to this POC, but here is
    > >> my ideal scenario:
    > >> - I address any feedback which causes churn on many of the current
    > >>   changes
    > >> - a volunteer is found to take the REST API/RPC/DB/heatclient changes
    > >>   and make them ready to merge
    > >> - we continue to discuss and refine the resources, the changes to
    > >>   OS::Nova::Server, and the example shell hook
    > >> - volunteers write hooks for different CM tools; Chef and Puppet hooks
    > >>   will need to be attempted soon to validate this approach.
    > >>
    > >> Vaguely related changes include:
    > >> - some solution for specifying cloud-init config, either the intrinsic
    > >>   functions or cloud-init heat resources
    > >> - some heatclient file inclusion mechanism - writing that python hook in
    > >>   a heat yaml template was a bit painful ;)
    > >>
    > >> Trying for yourself
    > >> ===================
    > >> - Using diskimage-builder, create an ubuntu image with the
    > >>   tripleo-image-elements os-apply-config, os-refresh-config and
    > >>   os-collect-config
    > >> - Create a local heat branch containing
    > >>   https://review.openstack.org/#/q/topic:bp/cloud-init-resource,n,z and
    > >>   https://review.openstack.org/#/q/topic:bp/hot-software-config,n,z
    > >> - Launch the above template with your created image
    > >>
    > >> cheers
    > >>
    > >> [1] https://review.openstack.org/#/c/58877/7/heat/engine/api.py
    > >> [2] This relies on these not-merged intrinsic functions:
    > >>     https://review.openstack.org/#/q/topic:bp/cloud-init-resource,n,z
    > >>
