Your Puppet master has twice the CPU of ours, but more importantly, you have
far simpler manifests. Ours are very complex and take 20 seconds on average
to build, with some taking a minute for the whole process to finish.
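As a rough way to reproduce a figure like that, a single compile can be timed
by hand on the master; a minimal sketch, with a hypothetical node name:

$ time puppet master --compile client.example.com > /dev/null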

We're going to completely redesign our setup, as per the instructions in the
Pro Puppet book, with multiple Puppet masters in a cluster behind a load
balancer so that we can expand indefinitely.
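The usual shape for that kind of cluster is a front-end Apache terminating SSL
and balancing across the masters; the sketch below is only an illustration of
the pattern, with placeholder hostnames, paths and back-end port:

<Proxy balancer://puppetmasters>
    BalancerMember http://puppetmaster1.example.com:18140
    BalancerMember http://puppetmaster2.example.com:18140
</Proxy>

Listen 8140
<VirtualHost *:8140>
    SSLEngine on
    SSLCertificateFile    /var/lib/puppet/ssl/certs/balancer.pem
    SSLCertificateKeyFile /var/lib/puppet/ssl/private_keys/balancer.pem
    SSLCACertificateFile  /var/lib/puppet/ssl/certs/ca.pem
    SSLVerifyClient optional
    SSLOptions +StdEnvVars
    # pass the client certificate details through; the back-end masters must
    # be told to trust these headers (ssl_client_header and
    # ssl_client_verify_header in puppet.conf)
    RequestHeader set X-Client-DN     "%{SSL_CLIENT_S_DN}e"
    RequestHeader set X-Client-Verify "%{SSL_CLIENT_VERIFY}e"
    ProxyPass        / balancer://puppetmasters/
    ProxyPassReverse / balancer://puppetmasters/
</VirtualHost>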

Steve

Steve Shipway
University of Auckland ITS
UNIX Systems Design Lead
s.ship...@auckland.ac.nz
Ph: +64 9 373 7599 ext 86487

________________________________
From: puppet-users@googlegroups.com [puppet-users@googlegroups.com] on behalf 
of Luke Bigum [luke.bi...@lmax.com]
Sent: Thursday, 23 February 2012 9:56 p.m.
To: puppet-users@googlegroups.com
Subject: Re: [Puppet Users] RE: enterprise puppet architecture

On 23/02/12 07:26, Steve Shipway wrote:
Our Puppet system here is currently managing about 500 nodes.  We anticipate 
about 1000 eventually.

I have had to reduce the client frequency to once every 4 hours; it seems that 
the maximum that can be handled by a single (dual-CPU, 8GB) puppet master is 
200 nodes.  After that, performance drops quickly and I notice many failed 
manifests.  This is with Puppet 2.7.10 on the master.
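A 4-hour interval like that is typically expressed in the agents' puppet.conf
along these lines; a minimal sketch, with splay added so the runs don't all
land at once:

[agent]
    runinterval = 14400   # 4 hours, in seconds
    splay       = true    # randomise each agent's start time
    splaylimit  = 14400   # spread starts across the whole interval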


Hi Steve,

Excuse the slight change in topic, but I'm interested in the performance stats
you posted. I run Puppet 2.7.5 on a 4-CPU, 4 GiB RAM KVM virtual machine. I use
Puppet Commander to distribute runs evenly, and my interval works out to around
15 minutes for 230-odd hosts, as per the timestamps between MCollective
discoveries below:

[root@gs2puppet01 ~]# grep Found /var/log/puppetcommander.log | tail
I, [2012-02-23T06:46:12.218853 #28284]  INFO -- : Found 231 puppet nodes, sleeping for ~3 seconds between runs
I, [2012-02-23T06:57:59.009689 #28284]  INFO -- : Found 231 puppet nodes, sleeping for ~3 seconds between runs
I, [2012-02-23T07:09:49.237810 #28284]  INFO -- : Found 231 puppet nodes, sleeping for ~3 seconds between runs
I, [2012-02-23T07:21:39.435558 #28284]  INFO -- : Found 231 puppet nodes, sleeping for ~3 seconds between runs
I, [2012-02-23T07:33:26.554525 #28284]  INFO -- : Found 231 puppet nodes, sleeping for ~3 seconds between runs
I, [2012-02-23T07:45:59.550541 #28284]  INFO -- : Found 231 puppet nodes, sleeping for ~3 seconds between runs
I, [2012-02-23T07:57:51.013245 #28284]  INFO -- : Found 231 puppet nodes, sleeping for ~3 seconds between runs
I, [2012-02-23T08:12:10.915308 #28284]  INFO -- : Found 231 puppet nodes, sleeping for ~3 seconds between runs
I, [2012-02-23T08:24:16.383794 #28284]  INFO -- : Found 231 puppet nodes, sleeping for ~3 seconds between runs
I, [2012-02-23T08:37:03.750438 #28284]  INFO -- : Found 231 puppet nodes, sleeping for ~3 seconds between runs
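Those gaps are roughly what the arithmetic predicts: 231 dispatches with a ~3
second sleep between each is at least 231 * 3 = 693 seconds of sleeping alone,
before discovery time and the runs themselves are counted:

$ echo 'scale=2; 231 * 3 / 60' | bc
11.55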

I allow 10 agents to run concurrently; however, my catalogs are very light,
taking less than a second to compile:

[root@gs2puppet01 ~]# grep 'Compiled catalog' /var/log/messages | awk '{sum+=$14} END {print sum/NR}'
0.750115
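(The $14 there is tied to the stock syslog layout: in lines like "Compiled
catalog for foo.example.com in environment production in 0.75 seconds", the
fourteenth whitespace-separated field happens to be the seconds figure, so the
pipeline is just averaging compile times. A different syslog prefix would
shift the field number.)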

How big are your Puppet manifests, that you've had to drop the run interval
down to 4 hours? Have you considered using MCollective and Puppet Commander to
spread your load out more?
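If you want to experiment first, the same puppetd agent plugin that Puppet
Commander drives can be poked by hand; a rough sketch, where the fact name and
value are only examples:

$ mco ping                              # confirm which nodes are discoverable
$ mco rpc puppetd runonce -F role=web   # trigger a one-off run on matching nodes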

-Luke

We've bought a copy of Pro Puppet (as Jeff Watts recommended) and we're
planning to build a distributed system as described in there: one Puppet
Dashboard/report server, multiple Puppet master servers, and one dev server.
The Puppet configuration will be held in Subversion and synchronised across
all the Puppet masters, which will themselves sit behind a load balancer.
This is still in the planning stage, though.
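The synchronisation need be nothing fancier than a Subversion working copy on
each master updated from cron; a minimal sketch, with placeholder paths and
schedule:

# /etc/cron.d/puppet-svn-sync, on each Puppet master
*/5 * * * * root svn update -q /etc/puppet/modules /etc/puppet/manifests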

I'd be interested in hearing about your experiences managing your extra-large
system; I can also share our experiences of how we implemented and manage this
system, if you'd like to contact me off-list.  When we first implemented
Puppet, we engaged a Puppet Labs consultant for a few days to help with the
initial work.  I can definitely recommend doing this if you have no Puppet
experience, as one area where Puppet is lacking is documentation!

Steve

Steve Shipway
University of Auckland ITS
UNIX Systems Design Lead
s.ship...@auckland.ac.nz
Ph: +64 9 373 7599 ext 86487




--
Luke Bigum

Information Systems
luke.bi...@lmax.com | http://www.lmax.com
LMAX, Yellow Building, 1A Nicholas Road, London W11 4AN



