On Fri, Feb 7, 2020 at 1:56 PM Sam Gendler wrote:
> Benchmarks, at the time, showed that performance started to fall off due
> to contention if the number of processes got much larger. I imagine that
> the speed of storage today would maybe make 3 or 4x core count a pretty
> reasonable place to
On Fri, Feb 7, 2020 at 5:36 AM Steve Atkins wrote:
> What's a good number of active connections to aim for? It probably depends
> on whether they tend to be CPU-bound or IO-bound, but I've seen the rule of
> thumb of "around twice the number of CPU cores" tossed around, and it's
> probably a dece
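The two multipliers mentioned in the thread (roughly 2x cores as the common rule of thumb, up to 3-4x with fast storage) can be sketched as a quick calculation. This is a heuristic starting point, not a measured recommendation; the core count and multipliers below are illustrative.

```python
# Rule-of-thumb sizing for *active* connections, per the thread:
# ~2x CPU cores as a baseline, up to 3-4x if fast storage keeps
# I/O waits short. These are heuristics, not measurements.

def suggested_active_connections(cpu_cores: int, multiplier: float = 2.0) -> int:
    """Return a starting point for the number of active connections."""
    return int(cpu_cores * multiplier)

# Example: a 16-core machine.
print(suggested_active_connections(16))       # baseline: 2x cores -> 32
print(suggested_active_connections(16, 4.0))  # upper end with fast storage -> 64
```

Note this bounds *active* connections; max_connections itself can be higher if most sessions are idle, which is where a pooler usually comes in.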
On Fri, Feb 7, 2020 at 6:29 AM Justin wrote:
> work_mem is the biggest consumer of resources. Let's say it's set to 5 MB
> per connection: at 1000 connections, that's 5,000 MB that can be allocated.
>
Clarification - work_mem is used per operation (sort, hash, etc.) and could
be allocated many times within a single query.
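The 5 MB x 1000 connections arithmetic above, combined with the per-operation clarification, can be put into a quick worst-case estimate. The operation count per query below is an illustrative assumption, not a figure from the thread:

```python
# Worst-case memory sketch for work_mem: it is a per-operation limit,
# not a per-connection one, so one query with several sort/hash nodes
# can allocate several multiples of it. Numbers are illustrative.

def worst_case_work_mem_mb(connections: int,
                           work_mem_mb: int,
                           ops_per_query: int) -> int:
    """Upper bound on work_mem allocations if every connection runs a
    query with ops_per_query concurrent sort/hash operations."""
    return connections * work_mem_mb * ops_per_query

# 1000 connections, work_mem = 5 MB:
print(worst_case_work_mem_mb(1000, 5, 1))  # the naive 5,000 MB figure
print(worst_case_work_mem_mb(1000, 5, 3))  # 15,000 MB once operations multiply
```

In practice not every connection hits the worst case at once, but this is why a high max_connections plus a generous work_mem can exhaust RAM.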
On 07/02/2020 13:18, Chris Withers wrote:
On 07/02/2020 12:49, Chris Ellis wrote:
What's "too much" for max_connections? What happens when you set it too
high? What factors affect that number?
When sizing max_connections you need to trade off how many connections
your application will use at peak vs how much RAM and CPU you have.
Hi Chris Withers

As stated, each connection uses X amount of resources, and it's very easy to
configure PostgreSQL so that even a small number of connections will eat up
all the RAM.

work_mem is the biggest consumer of resources. Let's say it's set to 5 MB
per connection: at 1000 connections, that's 5,000 MB that can be allocated.
On 07/02/2020 12:49, Chris Ellis wrote:
What's "too much" for max_connections? What happens when you set it to
high? What factors affect that number?
When sizing max_connections you need to trade off how many connections
your application will use at peak vs how much RAM and CPU you have.
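That trade-off can also be inverted: given the RAM left over after shared_buffers and the OS, estimate how many backends fit. The machine size and the per-backend average below are assumed figures for illustration, not PostgreSQL constants:

```python
# Inverting the sizing trade-off: how many connections fit in a given
# memory budget? The ~10 MB average per backend (work_mem actually used
# plus process overhead) is an assumption for illustration.

def max_connections_for_ram(ram_budget_mb: int, per_backend_mb: int) -> int:
    """How many backend processes fit in the given memory budget."""
    return ram_budget_mb // per_backend_mb

# A 32 GB machine with ~8 GB reserved for shared_buffers and the OS
# leaves a 24 GB budget; at ~10 MB per backend:
print(max_connections_for_ram(24 * 1024, 10))
```

A real answer also has to account for CPU contention, not just RAM, which is where the cores-based rules of thumb elsewhere in the thread apply.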
Hi Chris
On Fri, 7 Feb 2020, 08:36 Chris Withers, wrote:
> Hi All,
>
> What's a sensible way to pick the number to use for max_connections?
>
Sensible in this context is somewhat variable. Each connection in
PostgreSQL will be allocated a backend process. These are not the lightest
weight of
Hi All,
What's a sensible way to pick the number to use for max_connections?
I'm looking after a reasonably sized multi-tenant cluster, where the
master handles all the load and there's a slave in case of hardware
failure in the master.
The machine is used to host what I suspect are mainly djan