All,

I'm starting this thread as a follow-up to the Ironic PTL's strongly negative reaction to my patches[1] adding initial Heat->Ironic integration, and the subsequent very detailed justification and discussion in this spec[2] of why they may be useful.
Back in Atlanta, I had some discussions with folks interested in making "ready state"[3] preparation of bare-metal resources possible when deploying bare-metal nodes via TripleO/Heat/Ironic.

The initial assumption is that there is some discovery step (either automatic or static generation of a manifest of nodes) which can be input to either Ironic or Heat.

Following discovery, but before the undercloud deploys OpenStack onto the nodes, there are a few steps which may be desired to get the hardware into a state where it's ready and fully optimized for the subsequent deployment:

- Updating and aligning firmware to meet the requirements of qualification or site policy
- Optimizing the BIOS configuration to match the workloads the node is expected to run
- Managing machine-local storage, e.g. configuring local RAID for optimal resilience or performance

Interfaces to Ironic are landing (or have landed)[4][5][6] which make many of these steps possible, but there's no easy way to either encapsulate the (currently mostly vendor-specific) data associated with each step, or to coordinate the sequencing of the steps.

What is required is some tool which takes a text definition of the required configuration, turns it into a correctly sequenced series of API calls to Ironic, exposes any data associated with those API calls, and declares success or failure on completion. This is what Heat does.

So the idea is to create some basic (contrib, disabled by default) Ironic Heat resources, then explore the idea of orchestrating ready-state configuration via Heat. A rough sketch of what such a template might look like is included below the references.

Given that Devananda and I have been banging heads over this for some time now, I'd like to get broader feedback on the idea, my interpretation of "ready state" as applied to the TripleO undercloud, and any alternative implementation ideas.

Thanks!

Steve

[1] https://review.openstack.org/#/c/104222/
[2] https://review.openstack.org/#/c/120778/
[3] http://robhirschfeld.com/2014/04/25/ready-state-infrastructure/
[4] https://blueprints.launchpad.net/ironic/+spec/drac-management-driver
[5] https://blueprints.launchpad.net/ironic/+spec/drac-raid-mgmt
[6] https://blueprints.launchpad.net/ironic/+spec/drac-hw-discovery
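
To make the proposal a bit more concrete, here is a purely illustrative HOT sketch of what a ready-state template for a single node might look like. None of these resource types exist today; the names (OS::Ironic::FirmwareUpdate, OS::Ironic::BIOSConfig, OS::Ironic::RAIDConfig) and their properties are hypothetical placeholders for discussion, and the real interfaces would follow whatever the Ironic APIs referenced in [4][5][6] end up exposing:

  heat_template_version: 2013-05-23

  description: >
    Illustrative ready-state preparation of one Ironic node
    (resource type names below are hypothetical placeholders)

  parameters:
    node_uuid:
      type: string
      description: UUID of the Ironic node to prepare

  resources:
    firmware:
      # Align firmware with site policy before anything else
      type: OS::Ironic::FirmwareUpdate
      properties:
        node: {get_param: node_uuid}
        firmware_image: http://example.com/firmware/bmc-1.2.3.bin

    bios:
      # Tune BIOS settings for the expected workload; runs after firmware
      type: OS::Ironic::BIOSConfig
      depends_on: firmware
      properties:
        node: {get_param: node_uuid}
        settings:
          ProcVirtualization: Enabled
          BootMode: Bios

    raid:
      # Build local RAID for resilience/performance; runs after BIOS config
      type: OS::Ironic::RAIDConfig
      depends_on: bios
      properties:
        node: {get_param: node_uuid}
        logical_disks:
          - raid_level: "1"
            size_gb: 100
            is_root_volume: true

  outputs:
    ready_node:
      description: Node which has reached ready state
      value: {get_param: node_uuid}

The firmware -> BIOS -> RAID ordering is expressed with depends_on here purely for illustration; the main point is that Heat already gives us dependency ordering, a declarative place to put the (vendor-specific) input data, and success/failure reporting, rather than inventing a new sequencing tool.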