Thanks Wyatt, I dropped it a while back after realizing how easy this was.
All is back now, and metrics are returning to normal. The large facts are
now gone and so are our issues. :-) I was wrong about the GC; it's at the
default of one hour. I was thinking of node-ttl, which we set to one week.
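(For anyone finding this thread later: both settings live in PuppetDB's database config. A sketch of the relevant stanza, where the file path and exact values are assumptions — check your own install:

```ini
; [database] section of PuppetDB's config,
; e.g. /etc/puppetlabs/puppetdb/conf.d/database.ini
[database]
; how often (in minutes) PuppetDB runs garbage collection; 60 is the default
gc-interval = 60
; expire nodes not seen for this long; "1w" matches the one-week TTL above
node-ttl = 1w
```

PuppetDB needs a restart to pick up changes to this file.)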
Mike,
If you have no issue dropping and recreating the full database, that's
totally a workaround here (you know your requirements better than I, so
please don't take this as an endorsement of the approach :-) ).
To do this, just stop PuppetDB, drop the puppetdb database, and recreate the
database. Or rather: drop it, and let PuppetDB recreate it, I mean.
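(Spelled out as commands, the workaround might look like the sketch below. The service name, database name, and role name are assumptions — adjust for your install — and note this destroys all stored data:

```shell
# stop PuppetDB so nothing is writing to the database
sudo service puppetdb stop

# drop the old database and recreate an empty one owned by the puppetdb role
sudo -u postgres dropdb puppetdb
sudo -u postgres createdb -O puppetdb puppetdb

# on startup, PuppetDB runs its migrations and rebuilds the schema
sudo service puppetdb start
```

Nodes will repopulate the database as they check in.)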
On Tuesday, April 26, 2016 at 11:50:09 AM UTC-5, Mike Sharpton wrote:
Thanks Wyatt. I see what you mean, this may take too long. What if I got
desperate and decided to just drop the entire PuppetDB. Is there an easy
way to do this? I really don't care about historical data as we use this
basically for monitoring of the environment.
Hey Mike, give this a shot (in a psql session):
begin;
delete from facts where fact_path_id in (select id from fact_paths where
name=any('{"disks", "partitions", "mountpoints"}'));
delete from fact_paths where id not in (select fact_path_id from facts);
delete from fact_values where id not in (select fact_value_id from facts);
commit;
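(If you want to confirm what you're about to delete first, a rough check along these lines may help. This is a sketch assuming the same fact_paths table and name column used in the deletes above:

```sql
-- count fact paths per top-level fact name, biggest offenders first
select name, count(*) as paths
from fact_paths
group by name
order by paths desc
limit 20;
```

If disks, partitions, and mountpoints dominate the output, the deletes above should reclaim most of the space.)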
Wyatt,
We implemented the code to make these facts be nothing. We can see it
working on nodes that are small, and it worked in our test environment.
However, we still have the issue that PuppetDB cannot replace facts; it
still chokes on emptying the facts for the nodes with large facts.
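(For anyone following along, one way to blank these facts on the agent side is a higher-weight custom fact shipped via pluginsync. A minimal sketch — the file name and weight are illustrative, and exactly which empty value "wins" can vary by Facter version:

```ruby
# e.g. <module>/lib/facter/disks_override.rb
# A resolution with a higher weight beats Facter's built-in one, so the
# agent reports an empty hash instead of the huge structured fact.
Facter.add(:disks) do
  has_weight 1000     # outweigh the built-in disks fact
  setcode { {} }      # report an empty value; nil would fall through
                      # to the next (built-in) resolution instead
end
```

The same pattern applies to the partitions and mountpoints facts.)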
On 04/19/2016 10:39 AM, Mike Sharpton wrote:
Again, *thank you* very much. If I could buy you a beer, I would.
The machines in question are a mix of RHEL5/6/7.
Hah, you're very welcome. Thanks for confirming the OS; that means this
isn't just a Solaris issue like that facter ticket suggested.
Wyatt,
Thank you very much for your time and reply. I greatly appreciate it. I
ran your query and your suspicions are correct. Some DB servers lead the
pack with a massive amount of data due to all the disk that is there. We
will probably just make these facts nil on all machines, as we don't need them.
Hey Mike,
The unsatisfying answer is that PuppetDB handles giant facts
(particularly array-valued facts) pretty badly right now, and facter's
disks, partitions, and mountpoints facts can all get pretty huge in
cases such as SANs and similar. Can you try and see if the bulk of those
fact paths on your nodes are coming from these facts?