On Tue, Jan 22, 2013 at 3:04 PM, Ken Barber <k...@puppetlabs.com> wrote:
>> This sounds like a sensible workaround, I will definitely have a look. I
>> haven't yet had enough time to look at the issue properly, but it seems that
>> this very long time is indeed consumed by catalog construction. Puppetdb
>> fails after this is finished, so it seems that it dies when nagios host
>> tries to report its catalog back.
>
> Do you mean it dies from an OOM when it tries to report the catalogue back?
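(For readers following the thread: the setup under discussion uses Puppet's exported resources, where every monitored node exports a Nagios check and the monitoring host collects them all. A minimal sketch of that pattern — illustrative names and values, not the poster's actual manifests:)

```puppet
# On every monitored node: export a service check for this host.
# (check_command and service_description are illustrative values.)
@@nagios_service { "check_ping_${::fqdn}":
  check_command       => 'check_ping!100.0,20%!500.0,60%',
  host_name           => $::fqdn,
  service_description => 'ping',
}

# On the nagios host: collect every exported nagios_service from PuppetDB.
# With N nodes this pulls N resources into a single catalog, so the nagios
# host's compile time and catalog size grow with the size of the fleet.
Nagios_service <<| |>>
```

That collect-everything catalog is what the nagios host has to report back to PuppetDB, which fits the long compile times and the OOM on catalog submission described in the thread.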
Yes, that's what it looks like. Of course I can prevent it by giving it more memory (which I did), but I'm already running PuppetDB with a PostgreSQL backend and had to give it a 3GB heap; otherwise a single puppet agent run on one host (admittedly, one with thousands of exported resources to collect and process), which takes about 70 minutes, can still kill it. Waiting 70 minutes for it to die just adds insult to injury...

Overall, not great. I'm happy to redo this setup if I'm doing something wrong, but the growth just seems exponential (30-odd nodes: 2 minutes; 100-odd nodes: 70 minutes).

Regards,
Daniel

--
You received this message because you are subscribed to the Google Groups "Puppet Users" group.
To post to this group, send email to puppet-users@googlegroups.com.
To unsubscribe from this group, send email to puppet-users+unsubscr...@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/puppet-users?hl=en.