As a data point, this exact config (with mongrel_cluster) was working great
under 0.25.x, with fewer, slower CPUs, slower storage (VM image files) and
2 GB of RAM...

I gave puppet-load a try, but it is throwing errors that I don't have time
to dig into today:
debug: reading facts from: puppet.foo.com.yaml
/var/lib/gems/1.8/gems/em-http-request-0.2.15/lib/em-http/request.rb:72:in `send_request': uninitialized constant EventMachine::ConnectionError (NameError)
    from /var/lib/gems/1.8/gems/em-http-request-0.2.15/lib/em-http/request.rb:59:in `setup_request'
    from /var/lib/gems/1.8/gems/em-http-request-0.2.15/lib/em-http/request.rb:49:in `get'
    from ./puppet-load.rb:272:in `spawn_request'
    from ./puppet-load.rb:334:in `spawn'
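
FWIW, that looks like a gem version mismatch: em-http-request 0.2.15
references EventMachine::ConnectionError, which the eventmachine version
installed here apparently doesn't define. Upgrading eventmachine is
probably the real fix, but as an untested sketch, defining the constant
before em-http is loaded (e.g. near the top of puppet-load.rb) might be
enough to get past the NameError:

  require 'rubygems'
  require 'eventmachine'

  module EventMachine
    # no-op on eventmachine versions that already define the constant
    unless const_defined?(:ConnectionError)
      class ConnectionError < RuntimeError; end
    end
  end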

Running about 250 nodes, every 30 minutes.

On Wed, Dec 15, 2010 at 2:43 PM, Brice Figureau
<brice-pup...@daysofwonder.com> wrote:

> On 15/12/10 20:24, Disconnect wrote:
> > On Wed, Dec 15, 2010 at 2:14 PM, Brice Figureau
> > <brice-pup...@daysofwonder.com>
> > wrote:
> >>
> >> Note: we were talking about the puppet master taking 100% CPU, but
> >> you're apparently looking at the puppet agent, which is a different
> >> story.
> >>
> >
> > The agent isn't taking CPU; it is hanging, waiting for the master to do
> > anything. (The run I quoted earlier eventually ended with a timeout.)
> > The master has pegged the CPUs, and it seems to be related to file
> > resources:
>
> Oh, I see.
>
> > $ ps auxw|grep master
> > puppet   31392 74.4  4.7 361720 244348 ?       R    10:42 162:06 Rack:
> > /usr/share/puppet/rack/puppetmasterd
> >
> > puppet   31396 70.0  4.9 369524 250200 ?       R    10:42 152:32 Rack:
> > /usr/share/puppet/rack/puppetmasterd
> >
> > puppet   31398 66.2  3.9 318828 199472 ?       R    10:42 144:10 Rack:
> > /usr/share/puppet/rack/puppetmasterd
> >
> > puppet   31400 66.6  4.9 369992 250588 ?       R    10:42 145:04 Rack:
> > /usr/share/puppet/rack/puppetmasterd
> >
> > puppet   31406 68.6  3.9 318292 200992 ?       R    10:42 149:31 Rack:
> > /usr/share/puppet/rack/puppetmasterd
> >
> > puppet   31414 67.0  2.4 243800 124476 ?       R    10:42 146:00 Rack:
> > /usr/share/puppet/rack/puppetmasterd
>
> Note that they're all running. That means there are none left to serve
> file content if they are all busy for several seconds (around 20 in our
> case) while compiling catalogs.
>
> > Dec 15 13:42:23 puppet puppet-master[31406]: Compiled catalog for
> > puppet.foo.com in environment production in 30.83 seconds
> > Dec 15 13:42:49 puppet puppet-agent[10515]: Caching catalog for
> > puppet.foo.com
> > Dec 15 14:00:18 puppet puppet-agent[10515]: Applying configuration
> > version '1292438512'
> > ...
> > Dec 15 14:14:56 puppet puppet-agent[10515]: Finished catalog run in
> > 882.43 seconds
> > Changes:
> >             Total: 6
> > Events:
> >           Success: 6
> >             Total: 6
> > Resources:
> >           Changed: 6
> >       Out of sync: 6
> >             Total: 287
>
> That's not a big number.
>
> > Time:
> >    Config retrieval: 72.20
>
> This is also suspect.
>
> >              Cron: 0.05
> >              Exec: 32.42
> >              File: 752.33
>
> Indeed.
>
> >        Filebucket: 0.00
> >             Mount: 0.98
> >           Package: 6.13
> >          Schedule: 0.02
> >           Service: 9.09
> >    Ssh authorized key: 0.07
> >            Sysctl: 0.00
> >
> > real    34m56.066s
> > user    1m6.030s
> > sys    0m26.590s
> >
>
> That just means your masters are so busy serving catalogs that they
> barely have time to serve files. One possibility is to use file
> content offloading (see one of my blog posts about this:
> http://www.masterzen.fr/2010/03/21/more-puppet-offloading/).
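>
> As a very rough sketch of that offloading (assuming nginx already sits
> in front of the master and terminates SSL; the location regex and paths
> below are only illustrative, see the post for the real details), the
> idea is to let nginx answer file_content requests straight from disk
> instead of going through the Ruby processes:
>
>   # nginx vhost fragment: serve module file content from disk,
>   # bypassing the puppetmasterd processes entirely
>   location ~ ^/[^/]+/file_content/modules/ {
>       rewrite ^/[^/]+/file_content/modules/([^/]+)/(.*)$ /etc/puppet/modules/$1/files/$2 break;
>       root /;
>   }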
>
> How many nodes are you compiling at the same time? Apparently you have 6
> master processes running at high CPU usage.
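>
> (Roughly, that process count is whatever Passenger's pool allows. For
> example, in the Apache vhost the directives look like this; the
> directive names are Passenger's, the values are only an illustration:
>
>   PassengerHighPerformance on
>   PassengerMaxPoolSize 6
>   PassengerMaxRequests 1000
>
> With a pool size of 6, the six pegged Rack processes you listed are
> simply the whole pool.)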
>
> As I said earlier, I really advise people to try puppet-load (which can
> be found in the ext/ directory of the source tarball since Puppet 2.6)
> to exercise load against a master. This will help you find the
> concurrency your master can actually sustain.
>
> But if it's a bug, then could this be an issue with Passenger?
> --
> Brice Figureau
> My Blog: http://www.masterzen.fr/
