At this point, Crowbar 2.0 is ready to start porting some of our non-core barclamps over from the 1.x codebase, and doing an initial cut of our first workload (Ceph).
We are also ready to start bringing other folks up to speed on the Crowbar 2.0 codebase. To help with that process (and not at all because I am lazy), I have left several things unfinished in the codebase:

Tactical fixes:

* Run.enqueue does not consolidate multiple requests to enqueue the same noderole where it could, leading to Run performing more runs than are strictly needed when updating the DNS and DHCP database roles.
* When we are getting ready to boot into the ubuntu-12.04-install state, we schedule a reboot for 60 seconds in the future and trust that everything else we need to set up in the provisioner will happen within that 60 second window. This will eventually lead to too many reboots when the Crowbar framework needs to change what a node should boot to. Instead of relying on an arbitrarily chosen timeout, we need to add an API and supporting code that tracks bootenv changes in progress, which the supporting code can use to tell when a requested bootenv change has completed on the framework side.
* The provisioner is hardwired to only support installing Ubuntu 12.04. It needs to be refactored to support installing Red Hat and CentOS 6.4 as well.
* Right now we do not do any sanity-checking on role and noderole data. In CB1, we had kwalify schemas for all of the templates that the barclamps used, and we should write and use equivalent kwalify schemas for the role templates as well as for all user and system data on the noderoles.

Short-term Architectural changes:

* While we can handle hardware discovery (for RAID, BIOS, and IPMI) fairly simply, we do not have an initial codepath that can accommodate the flow control that we used to slot hardware changes (RAID, BIOS, and IPMI) into. We need to enumerate the most workable methods of doing this within the current constraints of the noderole graph and determine a good first-pass solution for accommodating the requirements of these barclamps.
* We need a way to enumerate and reserve limited per-node hardware resources so that roles can make informed noderole placement decisions. This should start with an API that allows roles to enumerate, reserve, and release CPU cores, memory shares, and whole logical or physical hard drives.
* In CB1, the provisioner had (unexposed) support for running in online mode, which allowed node installation and workload provisioning without having all of the prerequisites for doing so live on the admin node. The CB1 code will not work on CB2 and will need some rearchitecting.

_______________________________________________
Crowbar mailing list
Crowbar@dell.com
https://lists.us.dell.com/mailman/listinfo/crowbar
For more information: http://crowbar.github.com/
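For the per-node resource reservation item above, a minimal sketch of what such an API could look like follows. NodeResources and every method name here are assumptions for illustration only; the text only specifies that roles must be able to enumerate, reserve, and release CPU cores, memory shares, and whole drives.

```ruby
# Hypothetical sketch of a per-node resource reservation API.
# All names are illustrative, not the actual Crowbar 2.0 interface.
class NodeResources
  def initialize(cpu_cores:, memory_mb:, drives:)
    @free_cores     = cpu_cores
    @free_memory_mb = memory_mb
    @free_drives    = drives.dup       # e.g. ["sda", "sdb"]
    @reservations   = {}               # role name => resources it holds
  end

  # Let a role enumerate what is still unclaimed before deciding placement.
  def available
    { cores: @free_cores, memory_mb: @free_memory_mb, drives: @free_drives.dup }
  end

  # Reserve cores, memory, and whole drives for a role. Returns true on
  # success; returns false (reserving nothing) if any part of the request
  # cannot be satisfied, so partial grants never happen.
  def reserve(role, cores: 0, memory_mb: 0, drives: [])
    return false if cores > @free_cores || memory_mb > @free_memory_mb
    return false unless (drives - @free_drives).empty?
    @free_cores     -= cores
    @free_memory_mb -= memory_mb
    @free_drives    -= drives
    @reservations[role] = { cores: cores, memory_mb: memory_mb, drives: drives }
    true
  end

  # Release everything a role holds back into the free pool.
  def release(role)
    r = @reservations.delete(role) or return false
    @free_cores     += r[:cores]
    @free_memory_mb += r[:memory_mb]
    @free_drives.concat(r[:drives])
    true
  end
end
```

The all-or-nothing reserve call is the important design choice: a role either gets everything it asked for or learns immediately that the node cannot host it, which is what informed placement decisions need.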