On Fri, Nov 17, 2017 at 8:09 AM, Vladimir Rusinov <vrusi...@google.com> wrote:
> FWIW, lack of per-connection and/or global memory limit for work_mem is major
> PITA when running shared and/or large-scale setup.
>
> Currently we are doing a poor job with the work_mem parameter because we
> don't have a good way to let our customers increase it without also giving
> them ability to shoot themselves in a foot.
> Even a simple param limiting global total number of work_mem buffers would
> help here.
I suspect that we can do better here just by allocating memory more
sensibly, in a very simple way (something like my hash_mem proposal).
The relationship between aggregate memory usage and aggregate
throughput is very non-linear. One can imagine giving more memory to
hash joins, making each hash join much faster, with the overall effect
of *reducing* aggregate memory usage. The DBA can be more generous with
memory per operation while actually decreasing aggregate memory usage.
This is at least possible with work_mem-consuming operations that
involve hashing, like hash join and hash aggregate (a rough
illustration appears below).

Simple benchmarking tools like pgbench reinforce the idea that meeting
throughput requirements is the most important thing, but in reality
workloads are usually very bursty. It is often more important to be
able to stay on a smaller instance size while maintaining
less-than-excellent (but still acceptable) performance. Again, it's
about the economics.

--
Peter Geoghegan
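
To make the intuition concrete, a minimal psql sketch of the kind of
thing I have in mind. Everything here is hypothetical: "orders" is a
made-up table, hash_mem_multiplier stands in for whatever hash-specific
memory knob the hash_mem proposal turns into, and the numbers are
arbitrary.

-- Sketch only: assumes a server that has a hash-specific memory GUC
-- (hash_mem_multiplier is used as the stand-in name here).

-- With hash-based nodes capped at work_mem alone, a large hash
-- aggregate either spills or the planner avoids hashing entirely:
SET work_mem = '4MB';
EXPLAIN (ANALYZE, BUFFERS)
SELECT customer_id, count(*)
FROM orders
GROUP BY customer_id;

-- Let hash-based nodes use a multiple of work_mem, without raising
-- work_mem for every sort in every backend. Each hash aggregate/join
-- completes faster and gives its memory back sooner, so peak memory
-- use across the whole workload can go *down* even though the
-- per-node budget went up:
SET hash_mem_multiplier = 4.0;
EXPLAIN (ANALYZE, BUFFERS)
SELECT customer_id, count(*)
FROM orders
GROUP BY customer_id;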