Chris,
Thanks for the detailed response. BTW, did you deploy HA, load balancing,
etc. on the Postgres side?

Cheers

On Thu, Jun 25, 2015 at 11:11 PM, Christopher Wood
<christopher_w...@pobox.com> wrote:

> Somewhere past 700 nodes (still puppetizing), our 1-core, 2GB-RAM
> puppetmasters and 2-core, 2GB-RAM puppetdb host started showing signs of
> overload (ssl/connect errors in agent logs, catalog/report mismatches in
> puppetdb). I augmented the VMs with "hardware" to stop the complaints and
> went back to tune later. I moved the puppetmasters up to 4-core, 8GB-RAM,
> and the puppetdb host is now 4-core, 16GB-RAM. The services are definitely
> rattling around in them now, but there's lots of room for growth.
>
> For scaling/tuning, among others (there's a rough Passenger sketch after
> the links):
>
> https://ask.puppetlabs.com/question/13433/how-should-i-tune-passenger-to-run-puppet/
>
> https://docs.puppetlabs.com/puppetdb/latest/scaling_recommendations.html
>
> https://docs.puppetlabs.com/guides/scaling.html
>
> http://activemq.apache.org/scaling-queues.html
>
> http://activemq.apache.org/javalangoutofmemory.html
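>
> To make the Passenger link above concrete, here's roughly the shape of the
> tuning, as a sketch only: the conf path, the numbers, and the service name
> are placeholders to adjust for your distro and workload, not settings
> lifted from our boxes.
>
> # Sketch: drop Passenger tuning into an Apache conf.d snippet.
> # All values are illustrative; size the pool to your cores and RAM.
> file { '/etc/httpd/conf.d/passenger-tuning.conf':
>   ensure  => file,
>   owner   => 'root',
>   group   => 'root',
>   mode    => '0644',
>   content => "PassengerHighPerformance on\nPassengerMaxPoolSize 12\nPassengerPoolIdleTime 1500\nPassengerMaxRequests 1000\n",
>   notify  => Service['httpd'],
> }
>
> service { 'httpd':
>   ensure => running,
>   enable => true,
> }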
>
> After all that, I analyzed catalogs across the deployment and found that
> the datacat usage in the mcollective module (now
> https://github.com/puppet-community/puppet-mcollective) was an abominable
> share of the total resource count. The firewall type
> (https://github.com/puppetlabs/puppetlabs-firewall) was 3% of the total
> resources. Since it takes less horsepower to puppet up fewer things, I
> figure there will be a benefit in some judicious refactoring here and
> there. (Templates instead of file_line, iptables config instead of
> firewall resources, et cetera.)
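>
> As a rough illustration of the template-instead-of-file_line direction (the
> template path, module name, and service name below are made up for the
> example, not our actual code):
>
> # Sketch: one templated file replaces a pile of file_line/datacat resources.
> # 'profile/mcollective/server.cfg.erb' is a hypothetical template in your
> # own module; swap in whatever you actually render the config from.
> file { '/etc/mcollective/server.cfg':
>   ensure  => file,
>   owner   => 'root',
>   group   => 'root',
>   mode    => '0640',
>   content => template('profile/mcollective/server.cfg.erb'),
>   notify  => Service['mcollective'],
> }
>
> service { 'mcollective':
>   ensure => running,
>   enable => true,
> }
>
> Same idea for the firewall piece: render /etc/sysconfig/iptables (or your
> distro's equivalent) from a template as one file resource rather than
> declaring hundreds of individual firewall resources.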
>
> However, I figure there's a benefit in a conversion effort to cram things
> into puppet first and sort them out later. I had a good time just throwing
> hardware at the problem to start and then tuning after the bulk of hosts
> were converted to puppet management. People at companies where incremental
> hardware use is expensive may want to tune early and shrink manifests more
> aggressively.
>
> On Thu, Jun 25, 2015 at 08:16:31PM -0400, Tom Tucker wrote:
> >    Assuming 2,500 Linux clients running Puppet community edition 3.8,
> >    are there any sizing recommendations for a PuppetDB system with
> >    regard to disk size for the DB, CPU, memory, etc.?
> >    Thank you for your time and feedback.
