So after a change from the module owner, whose facts were very large, the Java CPU usage has dropped significantly and the server is running much better. However, now that the facts have changed for every single node, the database is doing a significant amount of work to clean things up, and the KahaDB queue is still growing out of control.
At this point it might be a better option to stop the PuppetDB server, shut down PostgreSQL, delete the data directory (after copying pg_hba.conf and postgresql.conf to /tmp), init a new database, copy those two files from /tmp back to their original locations, start PostgreSQL, and start PuppetDB, letting it recreate everything it needs from scratch. Any opinions?

On Wednesday, June 28, 2017 at 12:25:57 PM UTC-4, Peter Krawetzky wrote:
> Last Sunday we hit a wall on our 3.0.2 PuppetDB server. The CPU spiked and the KahaDB logs started to grow, eventually almost filling a filesystem. I stopped the service, removed the mq directory per a troubleshooting guide, and restarted. After several minutes the same symptoms began again, and I have not been able to come up with a PuppetDB or PostgreSQL config to fix this.
>
> We tried turning off storeconfigs in the puppet.conf file on our puppet master servers, but that doesn't appear to have resolved the problem. I also can't find a good explanation of what this parameter really does or does not do, even in the Puppet Server documentation. Does anyone have better insight into this?
>
> Also, is there a way to just turn off PuppetDB?
>
> I've attached a file that is a snapshot of the PuppetDB dashboard.
>
> Has anyone experienced anything like this?
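For what it's worth, the rebuild procedure I'm proposing could be sketched as a script like the one below. This is only a sketch: the service names and the PGDATA path are assumptions (a RHEL-style layout); adjust them for your distro. It defaults to a dry run that just prints each command, so nothing destructive happens until you set DRY_RUN=0.

```shell
#!/bin/sh
# Sketch of the PuppetDB/PostgreSQL rebuild described above.
# ASSUMPTIONS: systemd service names "puppetdb" and "postgresql",
# and PGDATA at /var/lib/pgsql/data -- adjust for your environment.
# DRY_RUN=1 (the default) only prints the commands it would run.
set -e

DRY_RUN=${DRY_RUN:-1}
PGDATA=${PGDATA:-/var/lib/pgsql/data}   # assumed PostgreSQL data directory
CMDS=""

run() {
  echo "+ $*"
  CMDS="$CMDS $*;"
  [ "$DRY_RUN" = "1" ] || "$@"
}

run systemctl stop puppetdb
run systemctl stop postgresql

# Preserve the two config files before wiping the data directory
run cp "$PGDATA/pg_hba.conf" "$PGDATA/postgresql.conf" /tmp/

run rm -rf "$PGDATA"
run sudo -u postgres initdb -D "$PGDATA"

# Restore the saved configuration to its original location
run cp /tmp/pg_hba.conf /tmp/postgresql.conf "$PGDATA/"

run systemctl start postgresql
run systemctl start puppetdb   # PuppetDB recreates its schema on startup
```

You'd also want to recreate the PuppetDB database user and database after initdb if your setup doesn't script that elsewhere; that step is omitted here because it depends on how the instance was originally provisioned.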