Just out of curiosity, what is your catalog duplication rate?
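(For anyone following along: the duplication figure is shown on the PuppetDB performance dashboard at http://&lt;puppetdb&gt;:8080/dashboard/index.html, and the same value is exposed as JSON via the metrics endpoint. The MBean path below is an assumption — it varies across PuppetDB versions, so check the metrics docs for your release. A sketch of pulling the value out of the response:)

```shell
# Live query would look something like (MBean name is an assumption):
#   resp=$(curl -s http://localhost:8080/metrics/mbean/com.puppetlabs.puppetdb.scf.storage:type=default,name=duplicate-pct)
resp='{"Value":0.87}'   # sample payload, for illustration only
# Extract the numeric value from the JSON with sed
pct=$(echo "$resp" | sed 's/.*"Value":\([0-9.]*\).*/\1/')
echo "catalog duplication fraction: $pct"
```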

On Tuesday, October 29, 2013 3:26:20 AM UTC+1, David Mesler wrote:
>
> I reconfigured postgres based on the recommendations from pgtune and your 
> document. I still had a lot of agent timeouts, and eventually, after running 
> overnight, the command queue on the puppetdb server was over 4,000. Maybe I 
> need a box with traditional RAID and a lot of spindles instead of the SSD. 
> Or maybe I need a cluster of postgres servers (if that's possible), I don't 
> know. The puppetdb docs said a laptop with a consumer grade SSD was enough 
> for 5000 virtual nodes so I was optimistic this would be a simple setup. Oh 
> well. 
>
> On Thursday, October 24, 2013 1:02:55 PM UTC-4, Ken Barber wrote:
>>
>> pgtune is probably a good place to start: 
>> https://github.com/gregs1104/pgtune ... available as an rpm/deb on the 
>> more popular distros I believe. 
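(For a dedicated 128GB box like the one described below, pgtune output typically lands in this neighborhood. Illustrative values only, not a vetted recommendation — run pgtune against your own machine; checkpoint_segments applies to the 9.x-era releases current at the time:)

```
# postgresql.conf -- illustrative pgtune-style values for a dedicated 128GB
# database server (assumptions, not vetted numbers)
shared_buffers = 32GB              # commonly ~1/4 of RAM
effective_cache_size = 96GB        # ~3/4 of RAM; hints the planner, allocates nothing
work_mem = 64MB                    # per-sort/per-hash-operation allocation
maintenance_work_mem = 2GB         # index builds, VACUUM
checkpoint_segments = 64           # pre-9.5 WAL setting; spreads checkpoint I/O
checkpoint_completion_target = 0.9
wal_buffers = 16MB
```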
>>
>> Also, this is probably very premature, but I have a draft doc with 
>> notes for how to tune your DB for PuppetDB: 
>>
>>
>> https://docs.google.com/document/d/1hpFbh2q0WmxAvwfWRlurdaEF70fLc6oZtdktsCq2UFU/edit?usp=sharing
>>  
>>
>> Use at your own risk, as it hasn't been completely vetted. Happy to 
>> get any feedback on this, as I plan on making this part of our 
>> endorsed documentation. 
>>
>> Also ... there is an index that lately has been causing people 
>> problems 'idx_catalog_resources_tags_gin'. You might want to try 
>> dropping it to see if it improves performance (thanks to Erik Dalen 
>> and his colleagues for that one): 
>>
>> DROP INDEX idx_catalog_resources_tags_gin; 
>>
>> It is easily restored if it doesn't help ... but may take some time to 
>> build: 
>>
>> CREATE INDEX idx_catalog_resources_tags_gin 
>>   ON catalog_resources 
>>   USING gin 
>>   (tags COLLATE pg_catalog."default"); 
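(Before dropping it, you can check how often that index is actually scanned, and how much space it occupies, using PostgreSQL's built-in statistics views; run this in the puppetdb database:)

```sql
-- Scan count since the last stats reset, plus on-disk size of the index
SELECT indexrelname,
       idx_scan,
       pg_size_pretty(pg_relation_size(indexrelid)) AS size
FROM pg_stat_user_indexes
WHERE indexrelname = 'idx_catalog_resources_tags_gin';
```

A near-zero idx_scan suggests the index is pure write overhead and a safe candidate to drop.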
>>
>> ken. 
>>
>> On Thu, Oct 24, 2013 at 4:55 PM, David Mesler <david....@gmail.com> 
>> wrote: 
>> > Hello, I'm currently trying to deploy puppetdb to my environment but 
>> > I'm having difficulties and am unsure on how to proceed. 
>> > I have 1300+ nodes checking in at 15-minute intervals (3.7 million 
>> > resources in the population). The load is spread across 6 puppet 
>> > masters. I requisitioned what I thought would be a powerful enough 
>> > machine for the puppetdb/postgres server: a machine with 128GB of RAM, 
>> > 16 physical cpu cores, and a 500GB ssd for the database. I can point 
>> > one or two of my puppet masters at puppetdb with reasonable enough 
>> > performance, but any more and commands start stacking up in the 
>> > puppetdb command queue and agents start timing out. (Actually, even 
>> > with just one puppet master using puppetdb I still have occasional 
>> > agent timeouts.) Is one postgres server not going to cut it? Do I need 
>> > to look into clustering? I'm sure some of you must run puppetdb in 
>> > larger environments than this, any tips? 
>> > 
>> > -- 
>> > You received this message because you are subscribed to the Google 
>> > Groups "Puppet Users" group. 
>> > To unsubscribe from this group and stop receiving emails from it, 
>> > send an email to puppet-users...@googlegroups.com. 
>> > To post to this group, send email to puppet...@googlegroups.com. 
>> > Visit this group at http://groups.google.com/group/puppet-users. 
>> > For more options, visit https://groups.google.com/groups/opt_out. 
>>
>

-- 
To view this discussion on the web visit 
https://groups.google.com/d/msgid/puppet-users/34a832fd-dcbb-4ffe-ae99-3e0ae80f24cc%40googlegroups.com.