On Tue, Oct 15, 2013 at 1:22 PM, <arkady_kanev...@dell.com> wrote:

> Thanks Victor.
> For the HW enumeration bullet, does CB2.0 handle marking disks used for boot
> so they are not available for Swift and Cinder?
>

Nope, hence my bullet point about needing a framework for enumerating,
reserving, and releasing resources.


> Even though CB-2.0 does not enumerate HW, does it provide a utility for a
> workload to verify that the available node resources are sufficient for a
> workload role on a node?
>

No, as that would also require the resource reservation framework mentioned
above.


> What is the logic for handling a role that spans multiple nodes (e.g. nova)
> in terms of "reserving" node resources, since all nodes are checked before
> deployment?
>

There is none right now, hence my listing it as a needed short-term
architectural change.


> Do we deploy on a node and, if the others are not successful, roll back; do
> a 2-stage deployment (reserve, use); or something else?
>

No, for the same reasons as the last 3 questions, plus a bonus reason: we do
not have a demonstrated use case where rolling back (as opposed to fixing
whatever errors caused a noderole deploy to fail) is the right thing to do.


> Thanks,
> Arkady
>
> -----Original Message-----
> From: crowbar-bounces On Behalf Of Victor Lowther
> Sent: Tuesday, October 15, 2013 11:22 AM
> To: crowbar
> Subject: [Crowbar] Things to do for Crowbar 2.0
>
> At this point, Crowbar 2.0 is ready to start porting some of our non-core
> barclamps over from the 1.x codebase, and doing an initial cut of our first
> workload (Ceph).
>
> We are also ready to start bringing other folks up to speed on the Crowbar
> 2.0 codebase.  To help with that process (and not at all because I am
> lazy), I have left several things unfinished in the codebase:
>
> Tactical fixes:
>
> * Run.enqueue does not consolidate multiple requests to enqueue the same
>   noderole where it can, leading to Run doing more runs than are strictly
>   needed when updating the DNS and DHCP database roles.
> * When we are getting ready to boot into the ubuntu-12.04-install state,
>   we schedule a reboot for 60 seconds in the future and trust that the
>   rest of the provisioner setup we need will happen in that 60-second
>   window.  This will eventually lead to too many reboots when the Crowbar
>   framework needs to change what a node should boot to.  Instead of
>   relying on an arbitrarily chosen timeout, we need to add an API and
>   supporting code that tracks bootenv changes in progress, which the
>   framework can use to tell when a requested bootenv change has completed.
> * The provisioner is hardwired to only support installing Ubuntu 12.04.
>   It needs to be refactored to support installing Red Hat and CentOS 6.4
>   as well.
> * Right now we do not do any sanity-checking on role and noderole data.
>   In CB1, we had kwalify schemas for all of the templates that the
>   barclamps used, and we should write and utilize equivalent kwalify
>   schemas for the role templates as well as for all user and system data
>   on the noderoles.
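For reference, the CB1-style kwalify schemas mentioned in the last tactical
fix look like the following. This is an illustrative schema for a
hypothetical role template; the field names are examples, not an actual
Crowbar role:

```yaml
# Illustrative kwalify schema for a made-up role template.
type: map
mapping:
  "domain":
    type: str
    required: yes
  "servers":
    type: seq
    sequence:
      - type: str
  "ttl":
    type: int
    required: no
```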
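As a sketch of the Run.enqueue consolidation described in the first tactical
fix above, a queue could coalesce repeated enqueue requests for a noderole
that has not yet started running. The class and field names here are
illustrative assumptions, not the actual Crowbar 2.0 models:

```ruby
# Hypothetical sketch of consolidating duplicate enqueue requests.
# RunQueue is not the real Crowbar 2.0 Run model; it only illustrates
# the coalescing behavior.
class RunQueue
  def initialize
    @queued = {}   # noderole id => queued run entry
    @order  = []   # FIFO ordering of queued entries
  end

  # Enqueue a run for a noderole; if one is already queued and has not
  # started, fold the new request into it instead of adding another run.
  def enqueue(noderole_id)
    if @queued.key?(noderole_id)
      @queued[noderole_id][:requests] += 1
      return @queued[noderole_id]
    end
    entry = { noderole: noderole_id, requests: 1 }
    @queued[noderole_id] = entry
    @order << entry
    entry
  end

  # Pop the next run to execute; once it is running, later enqueue
  # requests for the same noderole get a fresh entry.
  def dequeue
    entry = @order.shift
    @queued.delete(entry[:noderole]) if entry
    entry
  end
end
```

With this shape, three enqueue calls against the DNS database role while it
is still queued would result in a single run rather than three.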
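The bootenv-tracking API from the second tactical fix might look roughly like
the following: the framework requests a change and gets a token, the
provisioner acknowledges completion, and the reboot happens only once the
change is no longer pending. All class and method names here are assumptions,
not the real Crowbar 2.0 API:

```ruby
# Illustrative sketch of tracking in-flight bootenv changes instead of
# trusting a fixed 60-second reboot timer.
require 'securerandom'

class BootenvTracker
  def initialize
    @pending = {}  # token => { node:, bootenv: }
  end

  # The framework requests a bootenv change and gets a token back.
  def request_change(node, bootenv)
    token = SecureRandom.uuid
    @pending[token] = { node: node, bootenv: bootenv }
    token
  end

  # The provisioner calls this once the boot environment is fully staged.
  def complete(token)
    @pending.delete(token) ? true : false
  end

  # The framework polls this instead of a timeout; the node is rebooted
  # only after the requested change is no longer pending.
  def pending?(token)
    @pending.key?(token)
  end
end
```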
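One way to un-hardwire the provisioner, per the third tactical fix, is a
registry of OS definitions keyed by name. The keys, fields, and file names
below are illustrative placeholders, not the provisioner's actual data:

```ruby
# Hypothetical OS registry replacing the hardwired Ubuntu 12.04 path.
# Kernel, initrd, and template names are examples only.
OSES = {
  "ubuntu-12.04" => { kernel: "linux",   initrd: "initrd.gz",
                      template: "net_seed.erb" },
  "redhat-6.4"   => { kernel: "vmlinuz", initrd: "initrd.img",
                      template: "ks.cfg.erb" },
  "centos-6.4"   => { kernel: "vmlinuz", initrd: "initrd.img",
                      template: "ks.cfg.erb" }
}

# Look up the install configuration for an OS, failing loudly on an
# OS the provisioner does not know about.
def install_config(os_name)
  os = OSES[os_name] or raise ArgumentError, "unsupported OS: #{os_name}"
  { os: os_name }.merge(os)
end
```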
>
>
> Short-term Architectural changes:
>
> * While we can handle hardware discovery (for RAID, BIOS, and IPMI)
>   fairly simply, we do not have an initial codepath that can accommodate
>   the flow control that we used to slot hardware changes (RAID, BIOS, and
>   IPMI) into. We need to enumerate the most workable methods of doing
>   this within the current constraints of the noderole graph and
>   determine a good first-pass solution for accommodating the requirements
>   of these barclamps.
> * We need a method of being able to enumerate and reserve limited
>   per-node hardware resources to enable letting roles make informed
>   noderole placement decisions. This should start with an API that
>   allows roles to enumerate, reserve, and release CPU cores, memory
>   shares, and whole logical or physical hard drives.
> * In CB1, the provisioner had (unexposed) support for running in online
>   mode, which allowed for node installation and workload provisioning
>   without having all of the prerequisites for doing so live on the admin
>   node.  The CB1 code will not work on CB2 and will need some
>   rearchitecting.
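The enumerate/reserve/release API proposed in the second bullet above could
be sketched along these lines. Everything here is an assumption about shape,
not a real Crowbar 2.0 interface; the resource categories (CPU cores, memory
shares, drives) come straight from the bullet:

```ruby
# Sketch of a per-node resource reservation API that roles could use to
# make informed noderole placement decisions. All names are assumptions.
class NodeResources
  def initialize(cores:, mem_mb:, drives:)
    @free = { cores: cores, mem_mb: mem_mb }
    @drives = drives.dup         # unreserved drives, e.g. ["sda", "sdb"]
    @reservations = {}           # role name => resources held
  end

  # Report what is still unreserved on this node.
  def enumerate
    { cores: @free[:cores], mem_mb: @free[:mem_mb], drives: @drives.dup }
  end

  # Reserve resources for a role; fail without side effects if any
  # requested resource is unavailable.
  def reserve(role, cores: 0, mem_mb: 0, drives: [])
    return false if cores > @free[:cores] || mem_mb > @free[:mem_mb]
    return false unless (drives - @drives).empty?
    @free[:cores]  -= cores
    @free[:mem_mb] -= mem_mb
    @drives        -= drives
    @reservations[role] = { cores: cores, mem_mb: mem_mb, drives: drives }
    true
  end

  # Release everything a role held, e.g. when its noderole goes away.
  def release(role)
    r = @reservations.delete(role) or return false
    @free[:cores]  += r[:cores]
    @free[:mem_mb] += r[:mem_mb]
    @drives.concat(r[:drives])
    true
  end
end
```

A disk claimed for boot (or by Swift or Cinder) would simply no longer show
up in enumerate, which is the behavior the HW enumeration question at the
top of this thread was asking about.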
>
>
> _______________________________________________
> Crowbar mailing list
> Crowbar@dell.com
> https://lists.us.dell.com/mailman/listinfo/crowbar
> For more information: http://crowbar.github.com/
>