Can anyone tell me whether, if max_connections is set above 100, the server
will use pooling instead?

For all participants in this particular discussion: what is a reasonable
value for max_connections that won't do any harm to a Postgres 9.0
server?

I am a novice Postgres user, so any advice is always welcome.
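
For context, I assume something like the following (run in psql; as far as I
understand, pg_stat_activity has one row per open connection) would show how
the current usage compares to the configured limit -- please correct me if
that is the wrong thing to look at:

    SHOW max_connections;                   -- the configured limit
    SELECT count(*) FROM pg_stat_activity;  -- connections open right now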

Thanks,

On Wed, May 25, 2011 at 10:58 PM, Craig Ringer
<cr...@postnewspapers.com.au> wrote:

> There might be a very cheap and simple way to help reduce the number of
> people running into problems because they set massive max_connections values
> that their server cannot cope with instead of using pooling.
>
> In the default postgresql.conf, change:
>
> max_connections = 100                   # (change requires restart)
> # Note:  Increasing max_connections costs ~400 bytes of shared memory
> # per connection slot, plus lock space (see max_locks_per_transaction).
>
> to:
>
> max_connections = 100                   # (change requires restart)
> # WARNING: If you're about to increase max_connections above 100, you
> # should probably be using a connection pool instead. See:
> #     http://wiki.postgresql.org/max_connections
> #
> # Note:  Increasing max_connections costs ~400 bytes of shared memory
> # per connection slot, plus lock space (see max_locks_per_transaction).
> #
>
>
> ... where wiki.postgresql.org/max_connections (which doesn't yet exist)
> explains the throughput costs of too many backends and the advantages of
> configuring a connection pool instead.
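>
> As a purely illustrative sketch of the sort of thing such a page could show,
> a minimal PgBouncer setup in transaction pooling mode might look roughly like
> this (paths, names and sizes are placeholders, not recommendations):
>
>     [databases]
>     ; example database entry; adjust host/port/dbname to taste
>     mydb = host=127.0.0.1 port=5432 dbname=mydb
>
>     [pgbouncer]
>     ; clients connect to the pooler on 6432 instead of Postgres on 5432
>     listen_addr = 127.0.0.1
>     listen_port = 6432
>     auth_type = md5
>     auth_file = /etc/pgbouncer/userlist.txt
>     ; reuse a small set of server connections, one per transaction
>     pool_mode = transaction
>     ; accept many client connections, multiplexed onto ~20 server backends
>     max_client_conn = 500
>     default_pool_size = 20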
>
> Sure, this somewhat contravenes the "users don't read - ever" principle,
> but we can hope that _some_ people will read a comment immediately beside
> the directive they're modifying.
>
> --
> Craig Ringer
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>



-- 
Edison
