work_mem can be allocated many times per connection, since it applies per
sort, hash, or similar operation, and as mentioned it can be multiplied
again when a query is handled by parallel workers. Judging from
shared_buffers and effective_cache_size, I am guessing the server has 16GB
of memory total, so a more reasonable work_mem setting might be on the
order of 32-64MB.
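As a rough sketch, the settings implied above might look like this in
postgresql.conf (the specific values here are illustrative assumptions for
a ~16GB machine, not measured recommendations):

```
# postgresql.conf -- hypothetical values for a server with ~16GB RAM
shared_buffers = 4GB           # typically ~25% of RAM
effective_cache_size = 12GB    # planner hint, roughly RAM minus other uses
work_mem = 48MB                # per sort/hash node, per backend,
                               # multiplied again by parallel workers
```

The key point is that peak memory can approach
connections x operations-per-query x parallel workers x work_mem, which is
why a large work_mem with a high max_connections is risky.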

Depending on the type of work being done and how quickly the application
releases the db connection once it is done, I would expect max_connections
on the order of 4-20x the number of cores. If more simultaneous users need
to be serviced, a connection pooler like pgbouncer or pgpool will allow
those connections to be re-used quickly.
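For pgbouncer, a minimal transaction-pooling setup along those lines might
look like the following (database name, addresses, and pool sizes are
placeholder assumptions; size default_pool_size to your core count, not to
your client count):

```
; pgbouncer.ini -- minimal sketch with hypothetical values
[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
pool_mode = transaction     ; connection returns to the pool per transaction
max_client_conn = 500       ; many app clients...
default_pool_size = 40      ; ...share a small set of server connections
```

With transaction pooling, hundreds of application clients can share a few
dozen actual server backends, keeping work_mem multiplication under
control.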

These numbers are generalizations based on my experience. Others with more
experience may have different configurations to recommend.
