Am 11.01.2015 um 13:25 schrieb Rich Freeman:
> On Sun, Jan 11, 2015 at 3:22 AM, Alan McKinnon <alan.mckin...@gmail.com> 
> wrote:
>> The reason I'm recommending to keep all of /etc in its own repo is that
>> it's the simplest way to do it. /etc is a large mixture of
>> ansible-controlled files, sysadmin-controlled files, and other arbitrary
>> files installed by the package manager. It's also not very big, around
>> 10M or so typically. So you *could* manually add to a repo every file
>> you change manually, but that is error-prone and easy to forget. Simpler
>> to just commit everything in /etc, which gives you an independent record
>> of all changes over time. Have you ever dealt with a compliance auditor?
>> An independent change record that is separate from the CM itself is a
>> feature that those fellows really like a lot.
> 
> If you're taking care of individual long-lived hosts this probably
> isn't a bad idea.
> 
> If you just build a new host anytime you do updates and destroy the
> old one then obviously a git repo in /etc won't get you far.

I have long-lived hosts out there, with rather individual setups and a
wide range of ages (deployed over many years).

So my first goal is kind of getting an overview:

* what boxes am I responsible for?

* getting some kind of meta-info into my local systems -> /etc, @world,
and maybe something like the facts provided by the "facter" module (a nice
kind of profile ... with stuff like MAC addresses and other essential
info) ... [1]

and then, as I find my way, I can roll out some homogenization:

* my ssh-keys really *everywhere*
* standardize things for each customer's site (network setup, proxies)

etc etc

I am just cautious: rolling out standardized configs over dozens of
possibly quite different servers is a bit of a risk. But I think this will
come step by step ... new servers get the roles applied from the start,
and existing ones are maybe adapted when I do other update work.

And on keeping /etc in git:

So far I have made it a habit to do that on customer servers. Keeping
track of changes is a good thing and helpful. I still wonder how to
centralize this, as I would like to have these, let's call them
"profiles", in my own LAN as well. People tend to forget their backups
etc. ... I feel better with a copy locally.
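The habit is basically a small snapshot script. Here is a hedged sketch; it demonstrates on a scratch directory so it is safe to try anywhere, and on a real host you would set ETC=/etc and run it as root (the identity override and file names below are assumptions, not part of any real setup):

```shell
#!/bin/sh
# Sketch of the "commit all of /etc" habit, demonstrated on a scratch
# directory; on a real host, set ETC=/etc and run as root.
set -e
ETC="${ETC:-$(mktemp -d)/etc}"
mkdir -p "$ETC"
echo "127.0.0.1 localhost" > "$ETC/hosts"   # stand-in for a real config file
cd "$ETC"
[ -d .git ] || git init -q
git add -A
# Commit only when something actually changed; the explicit identity keeps
# this working on hosts where root has no git user configured.
git diff --cached --quiet 2>/dev/null || \
    git -c user.name="etc-snapshot" -c user.email="root@localhost" \
        commit -q -m "etc snapshot $(date -u +%F)"
```

Dropped into cron (or an ansible task), this gives exactly the independent change record Alan describes, with zero per-file bookkeeping.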

This leads to finding a structure of managing this.

The /etc-git-repos so far are local to the customer servers.
Sure, I can add remote repos and use ansible to push the content up there.

One remote-repo per server-machine? I want to run these remote-repos on
one of my inhouse-servers ...
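One possible shape for that, sketched as a play — the server name "repohost", the /srv/etc-repos path, and the remote name "central" are all assumptions, and each host's bare repo (git init --bare /srv/etc-repos/<hostname>.git) is assumed to exist already on the in-house server:

```yaml
- name: push /etc history to a per-host bare repo (sketch)
  hosts: all
  tasks:
    - name: point the "central" remote at this host's bare repo
      shell: >
        git remote remove central 2>/dev/null;
        git remote add central
        ssh://git@repohost/srv/etc-repos/{{ inventory_hostname }}.git
      args:
        chdir: /etc

    - name: push the current branch
      command: git push central HEAD
      args:
        chdir: /etc
```

That keeps the authoritative history on the customer box and the in-house copy is just a mirror, which also covers the "people forget their backups" worry.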

For now I wrote a small playbook that allows me to rsync /etc and
world-file from all the Gentoo-boxes out there (and only /etc from
firewalls and other non-gentoo-machines).

As mentioned, I don't have FQDNs for all hosts, which leads to the problem
that the same name, e.g. "ipfire", appears in several groups.

Rsyncing stuff into a path containing the hostname leads to conflicts:

        - name: "sync /etc from remote host to inventory host"
          synchronize:
            mode: pull
            src: /etc
            dest: "{{ local_storage_path }}/{{ inventory_hostname }}/etc"
            delete: yes
            recursive: yes


So I assume I should just set up some kind of descriptive names like:

[smith]
ipfire_smith ....

[brown]
ipfire_brown ....

... and use these just as "labels"?

Another idea is to generate some kind of UUID for each host and use that?
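For what it's worth, the label idea works with a plain INI inventory as it stands: the inventory name stays unique per customer and the real address goes into ansible_host (ansible_ssh_host on pre-2.0 Ansible). The addresses below are placeholders:

```ini
[smith]
ipfire_smith  ansible_host=192.0.2.10

[brown]
ipfire_brown  ansible_host=192.0.2.20
```

Then {{ inventory_hostname }} expands to the unique label in the rsync dest path, while the connection still reaches the right box — no UUID generation needed.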

----

I really like the ansible-approach so far.

Even if I can't yet run the fully standardized approach, since I have to
bring the existing hosts into this growing setup slowly.

Stefan


[1]  I haven't yet managed to store the output of the setup-module to
the inventory host. I could run "ansible -i hosts.yml -m setup all" but
I want a named txt-file per host in a separate subdir ...
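Regarding [1]: the ad-hoc run can do this directly with --tree, which writes one JSON file per host into a directory, e.g. "ansible -i hosts.yml -m setup --tree facts/ all". Alternatively, a play can write a named file per host from hostvars; this sketch reuses the local_storage_path variable from the sync task above, and the facts/ subdir name is my own assumption:

```yaml
- hosts: all
  tasks:
    - name: store this host's gathered facts on the control machine
      copy:
        content: "{{ hostvars[inventory_hostname] | to_nice_json }}"
        dest: "{{ local_storage_path }}/facts/{{ inventory_hostname }}.json"
      delegate_to: localhost
```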
