Thanks to all who responded. A few follow-up questions and comments if
you don't mind. For brevity's sake, I've consolidated stuff from
multiple responses here:
[email protected] wrote:
You name it, it's probably in use in a very large environment.
At $work we have hundreds of production systems serving over ten
million users (real users, not freebie logins) where the majority of
the real configuration is logging into the server and running vi on a
file. A couple of years ago I got a report from 302 production systems
as to what packages were installed; I counted the number of packages
on each system and came up with 149 different package counts among the
systems (and this was with ~100 of the systems being identical).
We have other places where there are hundreds of systems configured
entirely through automated tools, where we are utterly confident that
all of the systems are running identical software, even though these
are organized into >100 different farms, each with different
configurations, connected to different networks (in-house tools, dating
back 10+ years in an environment where cfengine was not a valid
choice), and we have other areas where the company has spent millions
on commercial automation tools.
Sort of what I expect is typical.
I think the one constant is that nobody is completely satisfied with
what they have; everyone knows things they would like to do to improve
matters (either that, or they are completely ignorant of automation
and wish that things were easier to configure, which boils down to the
same thing)
:-)
I would say that the key is not to try to do everything at once. I
would go through a progression along the following lines:
1. start off by managing installed software and patches
2. move from there to making sure configs were synced between
appropriate boxes (especially primary and backup where you have them)
3. then get user management under control
4. then work to publish/track your config files from a central master
(even if they are manually edited on that central master)
5. then work to eliminate the manual editing.
Pretty much OK on 1-3; 4 and 5 are what I'm focusing on right now.
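For step 4, even something as simple as a push wrapper would be a
start. A sketch of what I have in mind - plain file copies with a
dated backup; paths and names are placeholders, not anything I'm
actually running:

```shell
# Sketch: publish a config file from the central master to its live
# location, keeping a dated backup of anything we overwrite.
publish_config() {
    local src=$1 dest=$2
    # Back up the current file only if it exists and actually differs.
    if [ -f "$dest" ] && ! cmp -s "$src" "$dest"; then
        cp -p "$dest" "${dest}.$(date +%Y%m%d%H%M%S).bak"
    fi
    cp -p "$src" "$dest"
}
```

A real version would push over ssh/rsync and keep the master copies in
version control, but the shape is the same.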
Right now, most of my build and config takes the form of fairly simple
checklists, containing steps like:
- apt-get install <foo> (or: download; untar; ./configure; make;
make test; make install)
- look up some config info (e.g., pull an unused IP address off a checklist)
- add a domain record to a zone file
- edit one or more config files
- /etc/init.d/<foo> start
Other than editing config files, pretty much everything consists of
one-line shell commands - easy enough to stick into a bash script; and
I guess a lot of the configuration could be done by adding sed
commands to the script.
I'm thinking more and more that rundeck (or something like it) would
make it easy to manage and execute those scripts. The other thing that
looks pretty interesting is a little project called sm-framework
(https://sm.beginrescueend.com/presentations/SM-Framework-Presentation.pdf)
- essentially it adds some glue to zsh and treats scripts like plug-ins.
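To make that concrete, each checklist boils down to a short script
along these lines - a sketch only, with placeholder package names,
paths, and config keys:

```shell
#!/bin/bash
# Sketch of a per-service provisioning script built from the checklist
# steps above. "foo", its config path, and the keys are placeholders.
set -euo pipefail

# Idempotently set "key value" in a config file: replace the line if
# the key already exists, append it otherwise.
set_config() {
    local file=$1 key=$2 value=$3
    if grep -q "^${key}[[:space:]]" "$file" 2>/dev/null; then
        sed -i "s|^${key}[[:space:]].*|${key} ${value}|" "$file"
    else
        printf '%s %s\n' "$key" "$value" >>"$file"
    fi
}

install_foo() {
    apt-get -y install foo        # or: untar; ./configure; make; make install
    set_config /etc/foo/foo.conf listen_ip "$1"
    /etc/init.d/foo start
}
```

The sed-based set_config is the only piece with any subtlety; the rest
really is one-liners strung together.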
Tom Perrine wrote:
I've used both cfengine and Puppet over the years, and have met people
who have been successful with Chef. What has worked for others in
their specific environment may (or may not) work well for yours.
What I keep coming back to is that they seem like overkill for what I'm
doing right now - and carry with them the need to:
- install a layer of infrastructure
- work in yet another language (bash and sed do pretty much all I
need; why wade through recipes written in Ruby or Python or whatever?)
- deal with lots of recipes to ultimately execute a few install and
config statements (who really needs the overhead to execute "apt-get
install" or "./configure; make; make test; make install"?) - at least
for the stuff I'm managing now, the details are in site-specific
configurations, and it seems like a lot of getting that right with
Chef (or whatever) involves wading into and understanding how the
individual recipes work (or writing them)
Right now, I'm managing a small high-availability cluster (essentially a
mini-cloud) with one VM for email and list handling, another for
managing backups, and several for development. Where I expect that
something like chef, or puppet, or whatever will be helpful is in about
6 months when we might be worrying about putting some code into
production, and (fingers crossed) scaling services across multiple VMs,
some of which might be running on AWS or elsewhere in the cloud. Right
now, though, it's more about documenting and streamlining management of
the small cluster - partially to simplify handoff to someone else to
administer, partially to simplify disaster recovery or hardware migration.
That's what offers the excuse for the "one true way" arguments.
Rules 16 and 36 apply:
(http://thuktun.wordpress.com/toms-rules-for-happy-sysadmins-and-users/)
:-)
I'd start with this list
(http://en.wikipedia.org/wiki/Comparison_of_open_source_configuration_management_software
) and evaluate based on a few criteria:
Thanks for the pointer. The list seems to have grown a lot since I
last looked (maybe 2 years ago) - looks like a pretty good starting
point.
1. Support for the way you manage your site today. Think about whether
or not you have a CMDB or inventory system, or you want your CM tool
to provide it. Think about what OSes you need to support: If you need
Windows (and Mac, and UNIX) support your choices may be more limited.
Think about the things you like about the way you manage now (and want
to keep) and the things you want to change. See which tools are more
like the way you want things to be in the future, not which ones
exactly match what you're doing now.
This is actually what I see as most useful. I find that the most
time-consuming part of setting up a VM involves creating logical
volumes, assigning IP addresses (both for external access and for
pairing disk mirrors and failover instances - DRBD and pacemaker are a
pain in this regard), generating DNS records, and such. Right now this
is all manual - a spreadsheet and editing config files. A simple CMDB,
with hooks that can be accessed from shell scripts, would go a LONG
way toward simplifying my life.
Suggestions along these lines would be most welcomed.
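For what it's worth, even a flat file plus a couple of shell functions
would cover my immediate case. A sketch - the file location, record
format, and address range are all made up for illustration:

```shell
# Minimal flat-file "CMDB" sketch: one line per allocation, in the
# form "ip hostname role". Path and schema are invented.
CMDB=${CMDB:-/srv/cmdb/hosts.txt}

# Print the first address in 10.0.0.2-10.0.0.254 not yet allocated.
next_free_ip() {
    local i ip
    for i in $(seq 2 254); do
        ip="10.0.0.$i"
        grep -q "^${ip} " "$CMDB" 2>/dev/null || { echo "$ip"; return 0; }
    done
    return 1
}

# Record an allocation and emit a matching DNS A-record line,
# ready to paste (or append) into a zone file.
allocate() {
    local host=$1 role=$2 ip
    ip=$(next_free_ip) || return 1
    printf '%s %s %s\n' "$ip" "$host" "$role" >>"$CMDB"
    printf '%s\tIN\tA\t%s\n' "$host" "$ip"
}
```

Put the file under git and you even get change history for free. Not a
real CMDB, obviously, but it would kill the spreadsheet.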
The rest of your points are well taken - thanks!
Anton Cohen wrote:
Excessive Kool-Aid is most likely clouding my vision, but the way I
see it everyone is a) drinking the Kool-Aid, b) wishing they had some
tasty Kool-Aid to drink, or c) too drunk on stale beer to know
Kool-Aid tastes way better.
Personally, I prefer the electric variety :-)
To sum up your requirements: You want a cross-platform tool with
a library of functions, that executes scripts on remote hosts over a
secure protocol, and has access to variables in a database. You want
to access the tool from CLI, GUI, and REST API. You want the scripts
to be version controlled.
What you described /is/ Configuration Management, like Puppet and
Chef. Really, what you asked for exists, except they communicate over
HTTPS instead of SSH, and are written in Ruby instead of Tcl.
The dependency requirements aren't that bad, at least for Puppet and
CFEngine. They can be satisfied from the standard repos on
Debian-based hosts and standard repos + EPEL on Red Hat-based hosts.
Well, I'm more specifically interested in getting bash scripts under
control, not scripts written in Ruby or Python. I'm not really looking
either to rewrite existing setup stuff or to buy into having to wade
through Chef cookbooks (or the Puppet or whatever equivalent) to find,
understand, and configure scripts to suit my environment.
Hence, I've been zeroing in more on the combination of an
orchestration tool (e.g., rundeck), a version control tool (probably
git), and some kind of simple CMDB to track IP addresses, DNS records,
and such. I've been doing a lot of my own evaluation, but I'm
wondering what others have done in this regard, and/or what specific
combinations of orchestration + version control + CMDB tools are easy
to wire together. (Easy integration with FAI, for initial system
builds, would be a plus.)
(Wait, I just re-read your email... did you say Perl + CPAN is nice?
That's like saying getting punched repeatedly in the face, while your
hands are tied behind your back is nice.)
My major production system right now is Sympa (a really nice open
source email list manager, built by a consortium of French
universities - sort of a step up from Mailman if you manage a lot of
lists with common users). It's written in Perl (say what you will
about Perl, but it's a pretty effective language for text processing,
a respectable design choice - particularly for the time - and Sympa is
very well-written Perl).
The thing that impresses me is how nicely the package builds from
source (I've yet to find the Debian packaging to be either up-to-date
or stable). The build is a simple ./configure; make; make install -
and in the process it uses CPAN to install/update all kinds of Perl
modules - all nice, clean, and automatic - and it all fits in the
context of the GNU build tools.
So, yes, in line with automating installations and configuration -
Perl+CPAN is a pretty good example of something that "just works."
In the case of Sympa, the install automation also creates a user
account for sympa to run under, creates and populates a MySQL
database, installs mail aliases, creates/updates configuration files,
populates rc.d, and starts the service - pretty much everything on
autopilot. All with "./configure; make; make install" - there are
something like three questions to answer during the install, notably
an admin username/password - and those can be pre-seeded or provided
on the command line.
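That pattern - prompt only when an answer wasn't supplied up front -
is easy to mimic in one's own install scripts. A sketch (the variable
name and prompt are invented; these are not Sympa's actual knobs):

```shell
# Sketch: take an answer from an environment variable if it was
# pre-seeded (e.g. LISTMASTER=admin@example.org ./install.sh),
# and fall back to an interactive prompt only when it's missing.
get_answer() {
    # $1 = variable name to look up, $2 = prompt text
    eval "val=\${$1:-}"
    if [ -z "$val" ]; then
        printf '%s: ' "$2" >&2
        read -r val
    fi
    printf '%s\n' "$val"
}
```

Run unattended, the script never blocks; run by hand with nothing
pre-seeded, it asks its handful of questions - the best of both.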
All in all, a very impressive example of automatic installation and
configuration. Possibly the best I've seen.
Thanks Again to All,
Miles
--
In theory, there is no difference between theory and practice.
In practice, there is. .... Yogi Berra
_______________________________________________
Tech mailing list
[email protected]
https://lists.lopsa.org/cgi-bin/mailman/listinfo/tech
This list provided by the League of Professional System Administrators
http://lopsa.org/