How many resources are we talking about for the 100 nodes?

If you're running PuppetDB, one way to find out is to get a resource
count for the nagios server that does the collection:

curl -G -H 'Accept: application/json' 'http://puppetdb:8080/resources' \
  --data-urlencode 'query=["=", ["node", "name"], "nagiosserver"]' | \
  grep resource | wc -l

(this at least works on PuppetDB 1.0.5 - the JSON output is
pretty-printed, so each resource entry contributes one matching line)

It would be interesting to see whether this is an artifact of collection
in general or of the nagios resources specifically. The only good way I
know to test that is to replace the exported nagios resources with
something simple - a noop-style resource like 'whit' or 'anchor'.
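
Something roughly like this is what I mean (an untested sketch - the
class names are made up and 'anchor' comes from puppetlabs-stdlib, so
none of this is taken from your actual manifests):

# On every monitored node, export a trivial stand-in instead of the
# nagios_host/nagios_service resources (titles must stay unique per node):
class noop_export {
  @@anchor { "noop-${::fqdn}": }
}

# On the nagios server, collect them all, just as the nagios collection does:
class noop_collect {
  Anchor <<| |>>
}

If collecting a few thousand of those trivial resources is still slow,
that points at collection itself rather than at the nagios types.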

When you run puppet agent -t with --summarize, what is the result? You
should get something like this:

# puppet agent -t --summarize
info: Caching catalog for puppetdb1.vm
info: Applying configuration version '1358813527'
notice: exported
notice: /Stage[main]//Node[puppetdb1.vm]/Notify[exported]/message: defined 'message' as 'exported'
notice: asdfasdfsasdf
notice: /Stage[main]//Node[puppetdb1.vm]/Notify[foo]/message: defined 'message' as 'asdfasdfsasdf'
notice: Finished catalog run in 0.03 seconds
Changes:
            Total: 2
Events:
            Total: 2
          Success: 2
Resources:
          Changed: 2
      Out of sync: 2
          Skipped: 6
            Total: 9
Time:
       Filebucket: 0.00
           Notify: 0.00
   Config retrieval: 0.90
            Total: 0.90
         Last run: 1358813994
Version:
           Config: 1358813527
           Puppet: 2.7.18

I think Luke's suggestion about tapping the information in PuppetDB
using functions is not a bad work-around, but it's disappointing that
exported resources in this case aren't just _working_ :-(.
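
Roughly what I imagine that looking like (a hypothetical sketch - the
query_nodes function and the 'Class[monitored]' query are stand-ins for
whatever PuppetDB-querying function you end up with, e.g. from a
puppetdbquery-style module, not something from your manifests):

# Ask PuppetDB directly for the monitored nodes instead of collecting
# thousands of exported resources (function name/signature illustrative):
$monitored = query_nodes('Class[monitored]')

# Then declare the nagios_host entries locally on the nagios server:
nagios_host { $monitored:
  ensure => present,
  use    => 'generic-host',
}

The catalog ends up with the same nagios resources either way, but this
skips the exported-resource collection step entirely.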

ken.

On Tue, Jan 22, 2013 at 3:19 PM, Daniel Siechniewicz
<dan...@nulldowntime.com> wrote:
> On Tue, Jan 22, 2013 at 3:04 PM, Ken Barber <k...@puppetlabs.com> wrote:
>>> This sounds like a sensible workaround; I will definitely have a look. I
>>> haven't yet had enough time to look at the issue properly, but it seems that
>>> this very long time is indeed consumed by catalog construction. PuppetDB
>>> fails after this is finished, so it seems that it dies when the nagios host
>>> tries to report its catalog back.
>>
>> Do you mean it dies from an OOM when it tries to report the catalogue back?
>
> Yes, that's what it looks like. Of course I can prevent it by giving
> it more memory (which I did), but I already have a Postgres-backed
> PuppetDB and had to give PuppetDB 3GB, and even then a puppet agent
> run on a single host (OK, one with thousands of exported resources to
> collect and process) that takes about 70 minutes can still kill it.
> Waiting 70 minutes for it to die just adds insult to injury... Overall
> not great. I'm happy to redo this setup if I'm doing something wrong,
> but it just seems like this is exponential (30-odd nodes: 2 minutes;
> 100-odd nodes: 70 minutes).
>
> Regards,
> Daniel
