[email protected] wrote:
On Tue, 20 Mar 2012, Miles Fidelman wrote:
Tom Perrine wrote:
I've used both cfengine and Puppet over the years, and have met people
who have been successful with Chef. What has worked for others in
their specific environment may (or may not) work well for yours.
What I keep coming back to is that they seem like overkill for what
I'm doing right now - and carry with them the need to:
- install a layer of infrastructure
- work in yet another language (bash and sed do pretty much all I
need - why wade through recipes written in Ruby or Python or
whatever?)
- do I really need to deal with lots of recipes to ultimately execute
a few install and config statements? (who really needs the overhead
to execute "apt-get install" or "./configure; make; make test;
make install"?) - at least for the stuff I'm managing now, the
details are in site-specific configurations, and it seems like a lot
of getting that right with Chef (or whatever) involves wading into
and understanding how the individual recipes work (or writing them)
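For what it's worth, the bash-and-sed approach described above can stay
quite small. A minimal sketch of the idea (the placeholder names
@HOSTNAME@ and @IPADDR@, and the template/dest paths, are made up for
illustration, not from any particular setup):

```shell
#!/bin/bash
# Hypothetical sketch: install packages and render a site-specific
# config from a template, using nothing beyond bash and sed.
set -euo pipefail

# render_config TEMPLATE DEST substitutes @HOSTNAME@ and @IPADDR@
# placeholders in TEMPLATE and writes the result to DEST.
# The placeholder names are invented for this example.
render_config() {
    local template=$1 dest=$2
    sed -e "s/@HOSTNAME@/$(hostname)/g" \
        -e "s/@IPADDR@/${IPADDR:-127.0.0.1}/g" \
        "$template" > "$dest"
}

# "apt-get install -y" is already safe to re-run on packages that
# are installed, so a thin wrapper is all the "recipe" needed here.
install_packages() {
    apt-get install -y "$@"
}
```

The whole "configuration engine" is then the template files plus this
script, which is easy to keep under version control alongside everything
else.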
Right now, I'm managing a small high-availability cluster
(essentially a mini-cloud) with one VM for email and list handling,
another for managing backups, and several for development. Where I
expect that something like chef, or puppet, or whatever will be
helpful is in about 6 months when we might be worrying about putting
some code into production, and (fingers crossed) scaling services
across multiple VMs, some of which might be running on AWS or
elsewhere in the cloud. Right now, though, it's more about
documenting and streamlining management of the small cluster -
partially to simplify handoff to someone else to administer,
partially to simplify disaster recovery or hardware migration.
If you know you want to automate a bigger cluster later, take the time
in development with your small cluster to learn the tool and work
through the configuration.
Well sure, but I see these as different use cases, with different
problems, and probably different tools.
I NEVER expect to replicate our development or administrative
environments to any serious scale - these are going to remain fairly
customized, and in the case of development there are likely to be lots
of "throw away" VMs as we experiment with different plumbing. Cleaning
up processes, yes (to simplify disaster recovery); but lots of
automation doesn't seem to apply. If I were starting
from scratch, I'd probably start with Ganeti (a really nice tool from
Google that they use to manage small, internal clusters, used for
administrative applications - though it has some subtleties that get in
the way of doing a completely automated failover). As I'm not starting
from scratch, I just want to get existing shell scripts under
configuration control, and automate manual steps where that's easy
(mostly re. IP address and DNS record management when creating new
virtual machines).
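The DNS-record step mentioned above is one of the easier ones to script.
A sketch of generating a dynamic-update batch for a new VM's A record,
suitable for feeding to nsupdate - the zone name, server address, and
TTL below are made-up placeholders, and any real use would also need a
TSIG key:

```shell
#!/bin/bash
# Hypothetical sketch: emit an nsupdate batch that adds an A record
# for a newly created VM. example.org, 192.0.2.53, and the 3600 TTL
# are placeholders, not real infrastructure.
set -euo pipefail

# make_nsupdate NAME IP -> prints an nsupdate script on stdout.
make_nsupdate() {
    local name=$1 ip=$2
    cat <<EOF
server 192.0.2.53
zone example.org.
update add ${name}.example.org. 3600 A ${ip}
send
EOF
}

# Piping the output to nsupdate (with the appropriate key) would
# perform the actual update, e.g.:
#   make_nsupdate newvm 192.0.2.10 | nsupdate -k /path/to/key
```

Generating the batch as text also means the same script doubles as
documentation of what the manual step used to be.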
For production, on the other hand, if we're fortunate, the focus will be
on deploying lots of cookie cutter nodes - each with a fairly
constrained environment - and that's where the combination of an
orchestration tool + a configuration engine seems to offer a lot of
leverage. And, yes, I expect we'll want to wire together a small
cluster first, and get the tooling right before scaling out. It's just
that I don't expect this cluster, or associated procedures, to look
anything like our inward facing stuff.
Thanks,
Miles
--
In theory, there is no difference between theory and practice.
In practice, there is. .... Yogi Berra
_______________________________________________
Tech mailing list
[email protected]
https://lists.lopsa.org/cgi-bin/mailman/listinfo/tech
This list provided by the League of Professional System Administrators
http://lopsa.org/