Hi,
I really think the original question is very good: "why do you need to
compile all manifests again and again when there is no change on the
sources (files/ENC/whatever input)?"
Workarounds like the ones proposed are clearly not the solution, and even
if the language isn't designed for that today, it is probably something
worth developing in the future. How often do you actually change the
system? I believe 95% of compiles (probably closer to 99%) produce
exactly the same output again and again.
BR/Pablo
On 19/06/12 15:59, Brian Gallew wrote:
There actually is a way to do this, though you may find it to be more
painful to work with.
Imagine, if you will, two environments: production and maintenance.
The production environment is the one you're running right now, for
production. It fully manages everything and ensures that your systems
are all fully up-to-spec. It takes about 5 minutes for a full run of
this manifest.
The maintenance environment, on the other hand, manages /etc/passwd,
exported resources, and a couple critical resources that change
frequently. It doesn't check package versions, update
/etc/ssh/ssh_known_hosts, configure backup software, etc. Its main
purpose is to keep Puppet running.
Once you have these two environments configured, you move the majority
of your hosts from "production" to "maintenance", and your puppet
runtime drops. When you make actual changes to the manifests, you
temporarily move all those hosts back into the production manifest so
they get applied, and then revert them to maintenance.
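A minimal sketch of the master-side half of that setup on a Puppet 2.x-era
master (the manifest file names here are hypothetical; adjust paths to your
layout):

```ini
# puppet.conf on the master: two environments, each compiling a
# different site manifest from the same module tree
[production]
manifest   = /etc/puppet/manifests/site.pp
modulepath = /etc/puppet/modules

[maintenance]
manifest   = /etc/puppet/manifests/maintenance.pp
modulepath = /etc/puppet/modules
```

Each agent then selects its environment with `environment = maintenance`
in its own puppet.conf (or via your ENC), which is what you flip when you
move hosts between the two.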
Another possibility for reducing overall CPU usage is to reduce the
number of times a day that Puppet runs. If you cut it back to twice
daily, then your total CPU usage goes from 120 minutes per host per day
to 10. That is, in fact, how we run Puppet where I work, though we do
that out of a "no changes during production" mindset rather than to save
CPU cycles.
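For reference, the run interval is a one-line agent setting (the value is
in seconds); a sketch assuming puppet.conf is in its default location:

```ini
# puppet.conf on each agent: run twice a day instead of the
# default every 30 minutes
[agent]
runinterval = 43200
```

Alternatively, disable the daemon entirely and trigger
`puppet agent --onetime` from cron at whatever times suit your change
windows, which also lets you stagger runs across hosts.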
Finally, consider the actual reasons for your long run times. If it's
primarily that you are checksumming large file trees, you may want to
consider other alternatives. While Puppet is fabulous for templated
files, perhaps the bulk of those files could go into a
bzr/svn/git/hg/whatever repository? Then your manifest for that
directory is reduced to an exec{} for creating it, and either an
exec{} or perhaps a cron{} for running the appropriate update.
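A rough sketch of what that could look like with git (the repository URL,
paths, and resource titles are all placeholders, not a recommendation of
any particular layout):

```puppet
# Keep a large file tree as a VCS checkout instead of a recursive
# file{} resource, so Puppet never checksums its contents.
exec { 'checkout-bigtree':
  command => 'git clone git://example.com/bigtree.git /srv/bigtree',
  creates => '/srv/bigtree/.git',   # only runs if the checkout is absent
  path    => ['/usr/bin', '/bin'],
}

# Pull updates on a schedule outside of Puppet runs entirely.
cron { 'update-bigtree':
  command => 'cd /srv/bigtree && git pull --quiet',
  user    => 'root',
  minute  => 15,
  require => Exec['checkout-bigtree'],
}
```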
On Tue, Jun 19, 2012 at 6:38 AM, jcbollinger
<john.bollin...@stjude.org <mailto:john.bollin...@stjude.org>> wrote:
On Tuesday, June 19, 2012 5:23:42 AM UTC-5, Duncan wrote:
Hi folks, I'm scratching my head with a problem with system load.
When Puppet checks in every hour, runs through all our checks,
then exits having confirmed that everything is indeed as
expected, the vast majority of the time no changes are made.
But we still load our systems with this work every hour just
to make sure. Our current configuration isn't perhaps the
most streamlined, taking 5 minutes for a run.
The nature of our system, however, is highly virtualised with
hundreds of servers running on a handful of physical hosts.
It got me thinking about how to reduce the system load of
Puppet runs as much as possible. Especially when there may be
a move to outsource to virtualisation hosts who charge per CPU
usage (but that's a business decision, not mine).
Is there a prescribed method for reducing Puppet runs to only
be done when necessary? Running an md5sum comparison on a
file every hour isn't much CPU work, but can it be configured
so that Puppet runs are triggered by file changes? I know
inotify can help me here, but I was wondering if there's
anything already built-in?
You seem to be asking whether there's a way to make the Puppet
agent run to see whether it should run. Both "no, obviously not"
and "yes, it's automatic" can be construed as correct answers. In
a broader context, anything you run to perform the kind of
monitoring you suggest will consume CPU. You'd have to test to
see whether there was a net improvement.
Consider also that although file checksumming is one of the more
expensive operations Puppet performs, files are not the only
managed resources in most Puppet setups. You'll need to evaluate
whether it meets your needs to manage anything only when some file
changes.
There are things you can do to reduce Puppet's CPU usage,
however. Here are some of them:
* You can lengthen the interval between runs (more than you
already have done).
* You can apply a lighter-weight file checksum method (md5lite
or even mtime).
* You can employ schedules to reduce the frequency at which less
important resources are managed.
* You can minimize the number of resources managed on each node.
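To make the middle two concrete, here is a hedged sketch combining a
weekly schedule for a low-priority resource with a cheaper checksum on a
large file (resource names and paths are illustrative only):

```puppet
# A schedule resource: matching resources are applied at most
# once per week, even though the agent still runs hourly.
schedule { 'weekly':
  period => weekly,
  repeat => 1,
}

# Low-priority resource pinned to the weekly schedule.
package { 'some-noncritical-package':
  ensure   => installed,
  schedule => 'weekly',
}

# Large file checked by mtime instead of a full md5 of its contents.
file { '/srv/big.iso':
  ensure   => file,
  source   => 'puppet:///modules/media/big.iso',
  checksum => mtime,
}
```

Note the trade-off: mtime (or md5lite) will miss content changes that
preserve the timestamp (or occur past the first 512 bytes), so reserve
the lighter checksums for files where that risk is acceptable.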
John
--
You received this message because you are subscribed to the Google
Groups "Puppet Users" group.
To view this discussion on the web visit
https://groups.google.com/d/msg/puppet-users/-/5UkHTsXNKIsJ.
To post to this group, send email to puppet-users@googlegroups.com
<mailto:puppet-users@googlegroups.com>.
To unsubscribe from this group, send email to
puppet-users+unsubscr...@googlegroups.com
<mailto:puppet-users%2bunsubscr...@googlegroups.com>.
For more options, visit this group at
http://groups.google.com/group/puppet-users?hl=en.