On Mon, Feb 25, 2019 at 3:50 AM Tsunakawa, Takayuki
<tsunakawa.ta...@jp.fujitsu.com> wrote:
> How can I make sure that this context won't exceed, say, 10 MB to avoid OOM?
As Tom has said before and will probably say again, I don't think you actually want that. We know that PostgreSQL gets roughly 100x slower with the system caches disabled - try running with CLOBBER_CACHE_ALWAYS. If you are accessing the same system cache entries repeatedly in a loop - which is not at all an unlikely scenario; just run the same query or sequence of queries in a loop - and the total size of those entries exceeds 10MB even, or perhaps especially, by just a tiny bit, you are going to see a massive performance hit. Maybe it won't be 100x, because some more-commonly-used entries will always stay cached, but it's going to be really big, I think.

Now you could say - well, it's still better than running out of memory. However, memory usage is quite unpredictable. It depends on how many backends are active and how many copies of work_mem and/or maintenance_work_mem are in use, among other things. I don't think we can say that just imposing a limit on the size of the system caches is going to be enough to reliably prevent an out-of-memory condition unless the machine's other memory use happens to be extremely stable.

So I think what's going to happen if you try to impose a hard limit on the size of the system caches is that you will cause some workloads to slow down by 3x or more without actually preventing out-of-memory conditions. What you need to do is accept that system caches need to grow as big as they need to grow, and if that causes you to run out of memory, either buy more memory or reduce the number of concurrent sessions you allow. It would be fine to limit the cache memory instead if those cache entries had only a mild effect on performance, but I don't think that's the case.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
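[Editor's illustration, not part of the original mail.] For anyone who wants to reproduce the "roughly 100x slower" behaviour mentioned above, here is a minimal sketch of one way to do it. The exact build incantation is an assumption on my part rather than something stated in the mail; CLOBBER_CACHE_ALWAYS is a compile-time symbol, so the server has to be rebuilt with it defined and the workload rerun against those binaries.

    /*
     * Sketch: enable cache clobbering by defining the symbol at build time,
     * e.g.  ./configure CPPFLAGS=-DCLOBBER_CACHE_ALWAYS ... && make
     * (or by adding the define to src/include/pg_config_manual.h, which is
     * included everywhere).  With it defined, the backend throws away its
     * catalog caches at every opportunity, so almost every lookup has to
     * rebuild its entry from the system catalogs.
     */
    #define CLOBBER_CACHE_ALWAYS

Running the same workload against a clobber build and a normal build gives a rough upper bound on how much the system caches are buying you.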
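[Editor's illustration, not part of the original mail.] To put rough numbers on the "memory usage is quite unpredictable" point, here is a back-of-the-envelope calculation in C. Every value in it (max_connections, work_mem, the number of sort/hash nodes per plan) is invented for the example, not a figure from this thread.

    #include <stdio.h>

    int main(void)
    {
        /* All assumed values, chosen only for illustration. */
        long max_connections = 100;   /* assumed number of allowed backends */
        long work_mem_mb     = 64;    /* assumed work_mem setting, in MB */
        long nodes_per_query = 4;     /* assumed sort/hash nodes per plan that
                                       * can each claim their own work_mem */

        /* Peak demand if every backend happens to run such a plan at once... */
        long peak_mb = max_connections * nodes_per_query * work_mem_mb;

        /* ...versus essentially nothing while the sessions sit idle. */
        printf("peak executor memory: %ld MB\n", peak_mb);  /* 25600 MB */
        printf("idle executor memory: ~0 MB\n");
        return 0;
    }

The point is only the spread: executor memory can legitimately swing from near zero to tens of gigabytes depending on what the sessions happen to be doing, so a 10MB cap on the catalog caches does not by itself decide whether the machine hits OOM.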