We've got quite a few Puppet nodes now (50-60 and increasing), and some have quite large catalogues. I've noticed that sometimes, when too many nodes attempt to check in at once, their puppet runs will time out or fail for other reasons.

Certainly if I kick off all nodes simultaneously using clusterssh then they will all fail.
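To make the failing case concrete, it's roughly equivalent to the following (hostnames are hypothetical, and `echo` stands in for the real ssh invocation so the sketch is harmless to run; in practice we do this through clusterssh):

```shell
#!/bin/sh
# Every agent is triggered at the same instant, so all runs hit the
# master simultaneously. Drop the leading 'echo' to actually run it.
for host in node01 node02 node03; do
  echo ssh "$host" "sudo puppet agent --test --onetime" &  # '&' = fire all at once
done
wait
```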

The load average on the Puppetmaster is <1 and it has plenty of memory. The server that provides PuppetDB has a very low load average too and does not appear to be limited by resources; however, we have no visibility of the Postgres backend, which is run by a different team who do not expose their monitoring.

I'd like to know how to increase the number of simultaneous runs the puppetmaster can handle, because as we keep increasing the number of nodes in our network we will surely run into problems.
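In case it helps pin down the question: we run the master under Apache/Passenger, so I assume the tuning is something along these lines in the master's vhost (directive names are from the Passenger docs; the values here are guesses, not tested):

```
# Sketch of Passenger pool tuning for the puppetmaster vhost
PassengerHighPerformance on
PassengerMaxPoolSize 12      # max concurrent master worker processes
PassengerPoolIdleTime 600
PassengerMaxRequests 1000    # recycle workers to cap memory growth
```

Is PassengerMaxPoolSize the right knob, or is the bottleneck likely to be elsewhere?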

Cheers,
Jonathan

--
You received this message because you are subscribed to the Google Groups "Puppet 
Users" group.