Hi, 

Could anyone point out a good existing discussion of Puppet scalability?
I'm referring to the Puppet master and the surrounding ecosystem, not the
actual agents sitting on the managed servers.

In particular, anything that would shed light on:

   1. Does a Puppet master degrade gracefully when overwhelmed?
   Or do things start failing outright, rather than just slowing down?
   2. How does changing the Puppet polling interval (runinterval etc.) factor 
   in?
   
   Does Puppet make it safe to increase workload and polling frequency, 
   knowing that at worst there will be slowness? Or does it leave it to the 
   operator's gut feeling and trial and error to figure out how much load 
   is fine, requiring them to throttle workloads and hold their breath when 
   rolling out changes? Can work be cancelled under excessive load, much 
   as one would throttle parallel FTP jobs, or is that not a design tenet 
   at present?
   I assume the answers here are not a clean 0 or 1, so a balanced 
   discussion of how close things are to either end would be of interest.
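For concreteness, by "polling interval" I mean the agent-side runinterval setting in puppet.conf, possibly combined with splay to spread agent check-ins over time (the values below are just illustrative, not recommendations):

```
# puppet.conf on the agent (illustrative values only)
[agent]
  # how often the agent contacts the master, in seconds
  runinterval = 1800
  # randomize the start time of each run to spread load on the master
  splay = true
```

The question is essentially what happens to the master as runinterval shrinks and the agent count grows.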

Thanks,
matan

-- 
You received this message because you are subscribed to the Google Groups 
"Puppet Users" group.
To view this discussion on the web visit 
https://groups.google.com/d/msg/puppet-users/-/bNuzUMUUxJIJ.
To post to this group, send email to puppet-users@googlegroups.com.
To unsubscribe from this group, send email to 
puppet-users+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/puppet-users?hl=en.
