Hello,

We are having some performance issues with PuppetDB, and I believe our low
catalog duplication rate (currently 7.5%) is partly responsible. I would like
to understand this problem better and ask what the best way is to track down
catalogs that change too often.
My assumption is that the catalog hash (in certname_catalogs, for example)
indicates whether two catalogs are duplicates. Is that correct? In other
words, when Puppet runs and a new hash is associated with a given host in
certname_catalogs, the configuration has changed, the old catalog is flushed,
and all of its resources will be wiped when GC runs.
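
In case it helps to show what I have in mind, here is a rough sketch of how I
was thinking of snapshotting a node's stored catalog over the v3 catalogs
endpoint and hashing a normalised copy of it locally, so two fetches can be
compared. The host/port, the example certname, and the idea of stripping
version and transaction-uuid before hashing are my own assumptions, not
anything official:

# Sketch only: fetch a node's stored catalog from PuppetDB and hash a
# normalised copy of it, so successive fetches can be compared to see
# whether the catalog really changed. Assumes PuppetDB 1.6+ answering
# plain HTTP on localhost:8080 (adjust host/port/SSL to your setup).
import hashlib
import json
import urllib.request

PUPPETDB = "http://localhost:8080"

# Fields that change on every agent run even when the resources themselves
# do not; dropping them keeps the local fingerprint stable for catalogs
# that are otherwise identical.
VOLATILE_KEYS = {"version", "transaction-uuid", "timestamp"}

def strip_volatile(node):
    """Recursively drop volatile keys from the decoded catalog JSON."""
    if isinstance(node, dict):
        return {k: strip_volatile(v) for k, v in node.items()
                if k not in VOLATILE_KEYS}
    if isinstance(node, list):
        return [strip_volatile(v) for v in node]
    return node

def catalog_fingerprint(certname):
    """Fetch the node's stored catalog and return a local SHA-1 of it."""
    url = "{0}/v3/catalogs/{1}".format(PUPPETDB, certname)
    with urllib.request.urlopen(url) as resp:
        catalog = json.load(resp)
    blob = json.dumps(strip_volatile(catalog), sort_keys=True)
    return hashlib.sha1(blob.encode("utf-8")).hexdigest()

if __name__ == "__main__":
    print(catalog_fingerprint("web01.example.com"))

I realise this local hash will not match PuppetDB's own hash; it only needs to
be stable between fetches of identical catalogs so I can spot the churn. (If
resource ordering turns out to vary between fetches, those lists would need
sorting as well.)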
If the above is correct, what is the best way to monitor what changed in the
catalog when the hash changes? Our agents send reports to Foreman, and I
thought it would be enough to look for reports where the number of applied
resources is consistently greater than zero. However, I have found hosts where
the applied count is 0, the skipped count is greater than zero, and the
catalog hash still changes after these runs. Does that mean that skipped steps
can also count as a catalog change?
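
For the "what actually changed" part, this is roughly the comparison I was
planning to script against two saved copies of a node's catalog (field names
taken from the catalog wire format as I understand it, so please correct me if
they are wrong):

# Sketch: diff two saved catalog snapshots (e.g. fetched with the snippet
# above on consecutive runs) to see which resources were added, removed,
# or had their parameters change.
import json

def index_resources(catalog):
    """Map (type, title) -> parameters for every resource in the catalog."""
    data = catalog.get("data", catalog)  # some versions wrap the catalog in "data"
    return {(r["type"], r["title"]): r.get("parameters", {})
            for r in data["resources"]}

def diff_catalogs(old, new):
    """Print resources that were added, removed, or changed between runs."""
    before, after = index_resources(old), index_resources(new)
    for key in sorted(set(before) | set(after)):
        if key not in before:
            print("added:   {0}[{1}]".format(*key))
        elif key not in after:
            print("removed: {0}[{1}]".format(*key))
        elif before[key] != after[key]:
            print("changed: {0}[{1}]".format(*key))

if __name__ == "__main__":
    # catalog-run1.json / catalog-run2.json: two saved copies of the same
    # node's catalog from consecutive agent runs (hypothetical file names).
    with open("catalog-run1.json") as a, open("catalog-run2.json") as b:
        diff_catalogs(json.load(a), json.load(b))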
Can PuppetDB's experimental report feature be used to easily track down 
these changes?
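If it can, something like the following is roughly what I was hoping to do
with it. I am only guessing at the endpoint paths (experimental/reports,
experimental/events) and field names from the docs, so again please correct me
if the API looks different:

# Sketch: pull the stored reports for a node from PuppetDB's experimental
# reports endpoint, then list the per-resource events (including skipped
# ones) for each report. Host/port and the example certname are assumptions.
import json
import urllib.parse
import urllib.request

PUPPETDB = "http://localhost:8080"

def pdb_query(endpoint, query):
    """Run a PuppetDB query against the given endpoint and decode the JSON."""
    url = "{0}/{1}?{2}".format(
        PUPPETDB, endpoint,
        urllib.parse.urlencode({"query": json.dumps(query)}))
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

def events_for_node(certname):
    """List per-resource events, including skipped ones, for each report."""
    reports = pdb_query("experimental/reports", ["=", "certname", certname])
    for report in reports:
        print("report {0} ended {1}".format(report["hash"], report["end-time"]))
        events = pdb_query("experimental/events", ["=", "report", report["hash"]])
        for ev in events:
            print("  {0:8} {1}[{2}]".format(
                ev["status"], ev["resource-type"], ev["resource-title"]))

if __name__ == "__main__":
    events_for_node("web01.example.com")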
