On Fri, Dec 14, 2012 at 11:00 AM, scukao...@gmail.com
<scukao...@gmail.com> wrote:

> hi, all
>
> I have a master with about 3000 clients; runinterval is 20 min.
> The problem is that PuppetDB's MQ backlog is so long that the data in
> Postgres is the OLD version. Any suggestions to improve performance?
>

We'll need some more information in order to get a complete picture of
what's going on. Can you send a screenshot of your puppetdb web console
after you've had it up for a few minutes? In particular, we'd need to see
the rate at which your queue is growing, metrics around command processing
time and enqueue time, as well as information about catalog & resource
duplication rates. Also, what version of Postgres are you using, and have
you tuned any of its settings? Lastly, is the PuppetDB system experiencing
a lot of iowait, or is it otherwise blocked on I/O or system time? (For the
I/O question, `iostat -x 5` or `vmstat 5` will show you %iowait.)
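For reference, on a dedicated PostgreSQL host the settings people most often tune for a write-heavy workload like this look something like the following. These values are purely illustrative starting points, not a recommendation for your hardware:

```
# postgresql.conf -- illustrative values only; size to your own RAM/disks
shared_buffers = 2GB          # often ~25% of RAM on a dedicated DB box
effective_cache_size = 6GB    # planner hint, roughly the OS cache size
checkpoint_segments = 16      # spread out checkpoint I/O bursts
work_mem = 16MB               # per-sort/per-hash memory
```

If you haven't touched postgresql.conf at all, the stock defaults are sized for a very small machine, so this is worth checking before anything else.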

With that many nodes and a 20-minute runinterval, you'll be sending ~2.5
catalogs every second to PuppetDB. That's not insurmountable, but we'll need
to look at the system as a whole to see where the bottleneck is. :)
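Spelling out that arithmetic (the 3000 nodes and 20-minute interval are your numbers):

```python
# Each node submits one catalog per run, so the average submission rate
# is simply node count divided by the run interval.
nodes = 3000
run_interval_s = 20 * 60          # 20-minute runinterval, in seconds

catalogs_per_s = nodes / run_interval_s
print(catalogs_per_s)             # 2.5 catalogs/second on average
```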

The first-order thing I'd look at is your catalog duplication rate. If
that's low, then PuppetDB is doing a lot of reads and writes against the
database for every catalog received. Improving that number lets PuppetDB
short-circuit writing to disk at all, which should improve throughput
considerably.
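To make the effect concrete, here's a toy model of why the duplication rate matters so much. The 90% figure below is purely hypothetical; read your actual rate off the console:

```python
# When an incoming catalog is identical to the one already stored for that
# node, PuppetDB can skip the expensive database write entirely. At a high
# duplication rate, only the small non-duplicate fraction hits the disk.
catalogs_per_s = 2.5              # from 3000 nodes / 20-min runinterval
duplication_rate = 0.90           # hypothetical -- check your own console

full_writes_per_s = catalogs_per_s * (1 - duplication_rate)
print(full_writes_per_s)          # ~0.25 full catalog writes/second
```

In other words, at a 90% duplication rate the database sees a tenth of the write load; at a low rate it sees nearly all 2.5 catalogs/second as full writes.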

You can also come and find us on IRC at #puppet on Freenode; that may make
for a faster back-and-forth.

Thanks!

--
deepak / Puppet Labs

-- 
You received this message because you are subscribed to the Google Groups 
"Puppet Users" group.
To post to this group, send email to puppet-users@googlegroups.com.
To unsubscribe from this group, send email to 
puppet-users+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/puppet-users?hl=en.
