Re: [Puppet Users] apt, yum, downloads, and rsync infrastructure improvements

2017-06-30 Thread Daniel Dreier
On Fri, Jun 30, 2017 at 2:14 AM, SCHAER Frederic wrote:
> Hi,
>
> I was (up to now) mirroring the puppetlabs repositories, both to:
> - Make sure I have a local copy in case your repos are down, or our internet link is too weak
> - Not hammer on your infrastructure with our servers

[Puppet Users] external facts script not working, but only when puppet agent is run as a daemon

2017-06-30 Thread Ugo Bellavance
Hi, I'm a beginner with PHP, but I managed to get two scripts working that generate external facts for a puppet module. These scripts work perfectly when I run puppet manually (puppet agent --test), but when the puppet agent is started as a daemon (systemctl start puppet), facts are not generated
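A common cause of "works with --test but fails as a daemon" is the daemon's minimal environment: under systemd, PATH is stripped down, so an external fact script that finds php (or anything else) by bare name can fail silently. A minimal sketch of a wrapper, with hypothetical paths not taken from the thread:

    #!/bin/sh
    # hypothetical wrapper: /etc/puppetlabs/facter/facts.d/phpfacts.sh
    # Use absolute paths: the puppet daemon runs with a reduced PATH,
    # so a bare "php" may not resolve when started via systemctl.
    /usr/bin/php /usr/local/bin/generate_facts.php
    # the wrapper must print key=value pairs on stdout and be executable:
    #   chmod 0755 /etc/puppetlabs/facter/facts.d/phpfacts.sh

Comparing the environment of a manual run against the unit's environment (systemctl show puppet) is a quick way to confirm this class of problem.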

[Puppet Users] Re: PuppetDB - High CPU Large number of KahaDB files and very little work going to postgresql

2017-06-30 Thread Peter Krawetzky
What is the actual definition of store_usage? It's not very specific. Does it limit the number of KahaDB logs? If so, what happens when that limit is reached?

On Wednesday, June 28, 2017 at 12:25:57 PM UTC-4, Peter Krawetzky wrote:
> Last Sunday we hit a wall on our 3.0.2 puppetdb server. T
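For reference, store-usage sits in PuppetDB's [command-processing] settings and caps the disk space (in MB) of the persistent command queue, i.e. the KahaDB store as a whole rather than a count of journal files; my understanding is that when the cap is reached, ActiveMQ's producer flow control blocks incoming commands rather than deleting anything. A hedged config sketch (values illustrative, path may differ per install):

    # /etc/puppetlabs/puppetdb/conf.d/config.ini
    [command-processing]
    # maximum disk space, in MB, for the persistent message store (KahaDB)
    store-usage = 102400
    # maximum disk space, in MB, for temporary messages
    temp-usage = 51200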

[Puppet Users] Re: PuppetDB - High CPU Large number of KahaDB files and very little work going to postgresql

2017-06-30 Thread Peter Krawetzky
Well, we do know where the user list is being generated and which module is pulling the Active Directory list. Out of the thousands of IDs, only 10 are being used, so the module owner is rewriting it. This does provide some confirmation of our theory.

On Wednesday, June 28, 2017 at 12:25:57

Re: [Puppet Users] Re: PuppetDB - High CPU Large number of KahaDB files and very little work going to postgresql

2017-06-30 Thread Wyatt Alt
The userlist is most likely your issue. This seems analogous to FACT-1345 and PDB-2631 for context -- we usually see this with the os, mountpoints, and partitions facts. What's generating the userlist, and what values is it populated with? Do you have code that generates the fact? Ther
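One quick way to see the fact's actual size and contents is PuppetDB's query API; a minimal sketch against a hypothetical local instance (the v4 query endpoint is available in PuppetDB 3.x):

    # fetch the userlist fact for every node reporting it
    curl -s -G http://localhost:8080/pdb/query/v4/facts \
         --data-urlencode 'query=["=", "name", "userlist"]'

Piping the output through wc -c gives a rough idea of the per-fleet payload size.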

[Puppet Users] Re: PuppetDB - High CPU Large number of KahaDB files and very little work going to postgresql

2017-06-30 Thread Peter Krawetzky
So if I'm reading this correctly, the userlist#~(number) represents the value of the userlist fact? If that is the case, the userlist fact is 228k in size on every puppet agent run, across approximately 3300 nodes.

On Wednesday, June 28, 2017 at 12:25:57 PM UTC-4, Peter Krawetzky
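Rough arithmetic on the churn that implies, assuming the default 30-minute agent run interval (an assumption; the thread does not state the interval):

    228 KB x 3300 nodes  ~  752 MB of fact payload per run cycle
    x 48 cycles/day      ~  36 GB/day flowing through the command queue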

[Puppet Users] Re: PuppetDB - High CPU Large number of KahaDB files and very little work going to postgresql

2017-06-30 Thread Peter Krawetzky
Here is a delete query:

< 2017-06-30 07:44:36.739 EDT >LOG: duration: 515.967 ms execute : DELETE FROM fact_values fv WHERE fv.id in ($1,$2,$3,$4,$5,$6,$7,$8,$9,$10,$11,$12,$13,$14,$15,$16,$17,$18,$19,$20,$21,$22,$23,$24,$25,$26,$27,$28,$29,$30,$31,$32,$33,$34,$35,$36,$37,$3
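These "duration: ... ms" lines come from PostgreSQL's statement-duration logging; a hedged sketch of the postgresql.conf settings that would produce output in this shape (the threshold value is illustrative, not from the thread):

    # postgresql.conf
    log_min_duration_statement = 500   # log statements slower than 500 ms
    log_line_prefix = '< %m >'         # "< timestamp with ms >" prefix as seen above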

[Puppet Users] Re: PuppetDB - High CPU Large number of KahaDB files and very little work going to postgresql

2017-06-30 Thread Peter Krawetzky
Some additional messages:

< 2017-06-30 07:47:50.265 EDT >LOG: incomplete message from client
< 2017-06-30 07:47:52.052 EDT >LOG: incomplete message from client
< 2017-06-30 07:47:54.125 EDT >LOG: incomplete message from client
< 2017-06-30 07:47:54.545 EDT >LOG: incomplete message from client
<

[Puppet Users] Re: PuppetDB - High CPU Large number of KahaDB files and very little work going to postgresql

2017-06-30 Thread Peter Krawetzky
Apparently the size of a response is limited and it cut off the rest of my message. Here are some others:

< 2017-06-30 07:32:05.255 EDT >LOG: duration: 1220.789 ms execute S_6773/C_6774: SELECT fs.certname AS certname, env.name AS environment, fp.name AS name, fv.value AS value FROM factsets

RE: [Puppet Users] apt, yum, downloads, and rsync infrastructure improvements

2017-06-30 Thread SCHAER Frederic
Hi,

I was (up to now) mirroring the puppetlabs repositories, both to:
- Make sure I have a local copy in case your repos are down, or our internet link is too weak
- Not hammer on your infrastructure with our servers

Unfortunately, we just noticed our mirroring suddenly got
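For context, a minimal sketch of the kind of rsync mirror job being described; the host names and module paths here are hypothetical, not taken from the thread or from Puppet's documentation:

    #!/bin/sh
    # hypothetical mirror job run from cron; adjust the endpoints to
    # whatever rsync modules the repository host actually exposes
    rsync -avz --delete rsync://yum.puppetlabs.com/packages/yum/ /srv/mirror/puppetlabs/yum/
    rsync -avz --delete rsync://apt.puppetlabs.com/packages/apt/ /srv/mirror/puppetlabs/apt/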