Hey Jay,

With regard to the LXC library for Ansible, we do have an upstream pull request
to get LXC in as an "extras" module:
https://github.com/ansible/ansible-modules-extras/pull/123. The upstream
module has been reworked from what we have in our current library; it was
changed to be more declarative per the request of the greater Ansible
community. If/when the module gets accepted upstream we'll need to do a bit of
refactoring to update the roles so that they can leverage the lxc-container
module, but it won't be a big overhaul.
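To give a sense of the declarative style, a task against the proposed module
would look something like the sketch below. The parameter names here reflect
my reading of the pull request today and may well change before anything
merges, so treat this as illustrative rather than the final interface:

    - name: Create and start an infrastructure container
      lxc_container:
        name: galera_container    # container to manage
        template: ubuntu          # LXC template to build from
        state: started            # desired state, not an imperative action

The point being that you describe the state you want the container in and the
module reconciles it, rather than issuing separate create/start commands.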

Presently we have a lot of Rax-isms in the code base, though I am working
through them now and seeing what would need to change to make them more
acceptable to the community. Since we're talking about it, I'll float the idea
here: I'm thinking that we'll rename rpc_.* to opc_.* or something similar
which denotes OpenStack Private Cloud. I originally thought about simply using
"os_.*" but came to the conclusion that I hate marking things "os" as, in my
mind, it denotes operating system and not OpenStack, so I'm open to
suggestions on the variable renaming; there's a quick sketch of the change
below. Relevant blueprint:
https://blueprints.launchpad.net/openstack-ansible/+spec/rackspace-namesake.
I've also spec'd out a blueprint for converting the roles into
Galaxy-compatible roles as well as cleaning up the container inventory; the
two will go hand in hand. The work there will be a bit of a forklift, but it
should "genericize" most of what we have so far while keeping compatibility
for people who have already been running the stack.
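To make the rename concrete, here's the shape of the change I have in mind.
The variable below is made up purely for illustration; only the prefix swap
matters:

    # current naming (hypothetical variable, real prefix):
    rpc_service_log_dir: /var/log/rpc
    # proposed naming:
    opc_service_log_dir: /var/log/opc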

As for patches, specs, recommendations, and reviews: if you have them, we want
them. As it stands I very much want this to be a community project and not just
something that came out of a pigeon brawl and was hurled over the fence; the
more eyes on the code the better :)

Per the topic of having more repos/submodules, I think that breaking out the
roles into submodules makes a lot of sense and is in line with my desire to
have the roles available on Ansible Galaxy. The single monolithic repository
is a bit much to get your head around. That said, from the perspective of our
present priorities, I think we'd need to do most, if not all, of the repo
cleanup before breaking out the roles into submodules. However, if this is
something that people want to work on, let's get it spec'd and prioritized.
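If we go the Galaxy route, consuming the split-out roles could look something
like the requirements file below. The repository names here are invented
purely to show the shape of it, not actual repos:

    # requirements.yml -- role sources below are hypothetical
    - src: https://github.com/stackforge/os-ansible-role-galera
      name: os_galera
    - src: https://github.com/stackforge/os-ansible-role-rabbitmq
      name: os_rabbitmq

Roles would then be pulled in with "ansible-galaxy install -r
requirements.yml" and referenced from the playbooks by name.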

Thanks again for the feedback, I really appreciate it.

--

Kevin



> On Dec 11, 2014, at 09:18, Jay Pipes <jaypi...@gmail.com> wrote:
> 
> Hi Kevin! Great initiative. Some comments inline...
> 
> On 12/10/2014 05:16 PM, Kevin Carter wrote:
>> Hello all,
>> 
>> 
>> The RCBOPS team at Rackspace has developed a repository of Ansible
>> roles, playbooks, scripts, and libraries to deploy OpenStack inside
>> containers for production use. We’ve been running this deployment for
>> a while now, and at the last OpenStack summit we discussed moving the
>> repo into Stackforge as a community project. Today, I’m happy to
>> announce that the "os-ansible-deployment" repo is online within
>> Stackforge. This project is a work in progress and we welcome anyone
>> who’s interested in contributing.
> 
> Great to see this announcement. I saw this code a few days ago and git 
> clone'd it locally. I have to say, there's a lot of great stuff in there. 
> It's pretty well documented as well. Nice work.
> 
>> This project includes:
>> 
>> * Ansible playbooks for deployment and orchestration of infrastructure
>>   resources.
>> * Isolation of services using LXC containers.
>> * Software deployed from source using Python wheels.
> 
> So I did have a few things to bring up that I would love to get feedback on.
> 
> 1) LXC container-management library
> 
> I looked at the Python Ansible module code and was impressed at its 
> completeness. Do you have plans to submit this to ansible-galaxy and/or 
> work to get this into the main Ansible module library in the same way that 
> the Docker management library is already included? Just curious here...
> 
> 2) The "RPC" stuff
> 
> Do you have plans to cull the RPC (Rackspace Private Cloud) specific 
> variables and things from the repo to make it more generic? One particular 
> thing that would be useful to "genericize" is the reliance on specifically 
> named networks and bridges. Would you be open to patches to make these things 
> templatized?
> 
> 3) The "all in one repo" design
> 
> So, this is actually my only real negative feedback -- and by negative I mean 
> constructive, not bashing! :)
> 
> One of the things I really like about the DebOps Ansible tooling:
> 
> https://github.com/debops/
> 
> is the way those guys have structured things to break out playbooks for 
> different components into separate repositories. This allows users to swap in 
> and out various preferences of deploying infrastructure. It also allows one 
> to *entirely* separate the inventory files which declaratively describe the 
> deployment environment and its configuration switches from the playbooks that 
> deploy things.
> 
> This is one of the reasons that in the Chef world, we have a 
> stackforge/openstack-chef-repo reference repository that contains the 
> description of the environment, and we have separate cookbooks like 
> stackforge/cookbook-openstack-compute that contain the instructions for 
> installing Nova and its dependencies.
> 
> I find this "separate the config from the instructions" approach to be more 
> flexible and easier for folks to wrap their heads around, even if it does 
> mean more Git repositories.
> 
> What thoughts do you have on this?
> 
> All the best, and thanks again for introducing your work!
> -jay
> 
