On Sun, Jan 13, 2019 at 11:41 AM Tom Lane <t...@sss.pgh.pa.us> wrote:
> Putting a limit on the size of the syscaches doesn't accomplish anything
> except to add cycles if your cache working set is below the limit, or
> make performance fall off a cliff if it's above the limit.
If you're running on a Turing machine, sure. But real machines have finite
memory, or at least all the ones I use do. Horiguchi-san is right that this
is a real, not theoretical problem. It is one of the most frequent
operational concerns that EnterpriseDB customers have. I'm not against
solving specific cases with more targeted fixes, but I really believe we
need something more. Andres mentioned one problem case: connection poolers
that eventually end up with a cache entry for every object in the system.
Another case is that of people who keep idle connections open for long
periods of time; those connections can gobble up large amounts of memory
even though they're not going to use any of their cache entries any time
soon.

The flaw in your thinking, as it seems to me, is that in your concern for
"the likelihood that cache flushes will simply remove entries we'll soon
have to rebuild," you're apparently unwilling to consider the possibility
of workloads where cache flushes will remove entries we *won't* soon have
to rebuild. Every time that issue gets raised, you seem to blow it off as
if it were not a thing that really happens. I can't make sense of that
position. Is it really so hard to imagine a connection pooler that switches
the same connection back and forth between two applications with different
working sets? Or a system that keeps persistent connections open even when
they are idle? Do you really believe that a connection that has not
accessed a cache entry in 10 minutes still derives more benefit from that
cache entry than it would from freeing up some memory?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
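
[Editor's note: to make the "not accessed in 10 minutes" argument concrete,
here is a minimal, self-contained C sketch of age-based cache pruning. This
is not the patch under discussion and does not use PostgreSQL's catcache
internals; every name in it (cache_entry, cache_lookup, cache_insert,
cache_prune, prune_min_age) is invented purely for illustration.]

/*
 * Hypothetical sketch: a cache whose entries remember when they were last
 * accessed, plus a prune pass that frees anything left untouched for longer
 * than a configurable age.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

typedef struct cache_entry
{
    struct cache_entry *next;
    time_t      last_access;    /* refreshed on every lookup hit */
    char        key[64];
    char        value[64];
} cache_entry;

static cache_entry *cache_head = NULL;

/* Look up a key; on a hit, refresh its last-access time. */
static cache_entry *
cache_lookup(const char *key)
{
    cache_entry *e;

    for (e = cache_head; e != NULL; e = e->next)
    {
        if (strcmp(e->key, key) == 0)
        {
            e->last_access = time(NULL);
            return e;
        }
    }
    return NULL;
}

/* Insert a new entry (no duplicate checking, for brevity). */
static void
cache_insert(const char *key, const char *value)
{
    cache_entry *e = malloc(sizeof(cache_entry));

    if (e == NULL)
        abort();
    snprintf(e->key, sizeof(e->key), "%s", key);
    snprintf(e->value, sizeof(e->value), "%s", value);
    e->last_access = time(NULL);
    e->next = cache_head;
    cache_head = e;
}

/* Free every entry not accessed within the last prune_min_age seconds. */
static void
cache_prune(time_t prune_min_age)
{
    time_t      now = time(NULL);
    cache_entry **prev = &cache_head;

    while (*prev != NULL)
    {
        cache_entry *e = *prev;

        if (now - e->last_access > prune_min_age)
        {
            *prev = e->next;    /* unlink and free the stale entry */
            free(e);
        }
        else
            prev = &e->next;
    }
}

int
main(void)
{
    cache_insert("pg_class:foo", "relation foo");
    cache_insert("pg_class:bar", "relation bar");

    /* Pretend "bar" was last used 15 minutes ago. */
    cache_lookup("pg_class:bar")->last_access -= 15 * 60;

    cache_prune(10 * 60);       /* free anything idle longer than 10 minutes */

    printf("foo %s, bar %s\n",
           cache_lookup("pg_class:foo") ? "kept" : "gone",
           cache_lookup("pg_class:bar") ? "kept" : "gone");
    return 0;
}

A run prints "foo kept, bar gone": the entry that was used recently stays,
while the one idle past the threshold is freed, which is the trade-off the
paragraph above is arguing about.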