On Wed, Nov 29, 2017 at 11:17 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:
> The thing that makes me uncomfortable about this is that we used to
> have a catcache size limitation mechanism, and ripped it out because
> it had too much overhead (see commit 8b9bc234a).  I'm not sure how we
> can avoid that problem within a fresh implementation.
At the risk of beating a dead horse, I still think that the amount of
wall clock time that has elapsed since an entry was last accessed is
very relevant.  The problem with a fixed maximum size is that you can
hit it arbitrarily frequently; time-based expiration solves that
problem.  It allows backends that are actively using a lot of stuff to
hold on to as many cache entries as they need, while forcing backends
that have moved on to a different set of tables -- or that are
completely idle -- to let go of cache entries that are no longer being
actively used.  I think that's what we want.  Nobody wants to keep the
cache size small when a big cache is necessary for good performance,
but what people do want to avoid is having long-running backends
eventually accumulate huge numbers of cache entries, most of which
haven't been touched in hours or even weeks.

To put that another way, we should only hang on to a cache entry for
as long as the bytes of memory that it consumes are more valuable than
some other possible use of those bytes of memory.  That is very likely
to be true when we've accessed those bytes recently, but progressively
less likely to be true the more time has passed.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
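P.S. For concreteness, here's a toy illustration in plain C of the
kind of expiration I have in mind.  It is not based on the actual
catcache code, and every name in it (cache_entry, cache_touch,
cache_sweep, CACHE_TTL_SECONDS) is made up: each lookup stamps the
entry with the current wall-clock time, and a periodic sweep frees
anything whose stamp has aged past the TTL, so a busy backend keeps
whatever it's actively using while an idle one gradually gives the
memory back.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

#define CACHE_TTL_SECONDS 300		/* evict after 5 idle minutes */

typedef struct cache_entry
{
	struct cache_entry *next;
	time_t		last_access;	/* wall-clock time of last lookup */
	char		key[64];
} cache_entry;

static cache_entry *cache_head = NULL;

/* Find or create the entry for "key", stamping its access time. */
static cache_entry *
cache_touch(const char *key)
{
	cache_entry *e;

	for (e = cache_head; e != NULL; e = e->next)
	{
		if (strcmp(e->key, key) == 0)
		{
			e->last_access = time(NULL);
			return e;
		}
	}

	e = malloc(sizeof(cache_entry));
	if (e == NULL)
		exit(1);
	snprintf(e->key, sizeof(e->key), "%s", key);
	e->last_access = time(NULL);
	e->next = cache_head;
	cache_head = e;
	return e;
}

/* Free every entry not touched within the last CACHE_TTL_SECONDS. */
static void
cache_sweep(void)
{
	time_t		cutoff = time(NULL) - CACHE_TTL_SECONDS;
	cache_entry **prev = &cache_head;

	while (*prev != NULL)
	{
		cache_entry *e = *prev;

		if (e->last_access < cutoff)
		{
			*prev = e->next;	/* unlink and free the stale entry */
			free(e);
		}
		else
			prev = &e->next;
	}
}

int
main(void)
{
	cache_touch("pg_class");
	cache_touch("pg_attribute");
	cache_sweep();				/* nothing has gone stale yet */

	for (cache_entry *e = cache_head; e != NULL; e = e->next)
		printf("cached: %s\n", e->key);
	return 0;
}

Note there is no size cap anywhere in that sketch: a backend under
heavy load never evicts what it keeps touching, which is exactly the
property a fixed maximum size can't give you.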