Thanks! I will run these suggestions by the App team.

On Fri, May 20, 2022 at 4:01 PM Laurenz Albe <laurenz.a...@cybertec.at>
wrote:

> On Fri, 2022-05-20 at 12:15 +0200, Andreas Kretschmer wrote:
> > On 20 May 2022 10:27:50 CEST, aditya desai <admad...@gmail.com> wrote:
> > > One of our applications needs 3000 max_connections to the database.
> > > A connection pooler like pgbouncer or pgpool is not certified within
> > > the organization yet, so they are looking at setting up hardware with
> > > enough CPU and memory. Can someone advise how much memory and CPU
> > > they will need if they want max_connections = 3000?
> >
> > Pgbouncer would be the best solution. CPU: number of concurrent
> > connections. RAM: shared_buffers + max_connections * work_mem +
> > maintenance_work_mem + operating system + ...
>
> Right.  And then hope and pray that a) the database doesn't get overloaded
> and b) you don't hit any of the database-internal bottlenecks caused by
> many connections.
>
> I also got the feeling that the Linux kernel's memory accounting somehow
> lags.  I have seen cases where every snapshot of "pg_stat_activity" I took
> showed only a few active connections (but each time different ones), yet
> the amount of allocated memory exceeded what the currently active sessions
> could consume.  I may have made a mistake, and I have no reproducer, but I
> would be curious to know if there is an explanation for that.
> (I am aware that "top" shows shared buffers multiple times.)
>
> Yours,
> Laurenz Albe
>
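For reference, here is that RAM formula worked out as a rough back-of-the-envelope
estimate, using hypothetical values (8GB shared_buffers, 4MB work_mem, 512MB
maintenance_work_mem) that would have to be replaced with the real settings:

  SELECT pg_size_pretty(
           8::bigint * 1024 * 1024 * 1024     -- shared_buffers (assumed 8GB)
         + 3000::bigint * 4 * 1024 * 1024     -- max_connections * work_mem (assumed 4MB)
         + 512::bigint * 1024 * 1024          -- maintenance_work_mem (assumed 512MB)
         ) AS rough_ram_estimate;             -- OS and filesystem cache come on top

Keep in mind that a single query can allocate work_mem more than once (once per
sort or hash node), so treat the result as a lower bound.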

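And a minimal sketch of the kind of pg_stat_activity snapshot Laurenz mentions,
in case anyone wants to repeat the observation (the columns and grouping here
are just one possible choice):

  SELECT now() AS sample_time, state, count(*) AS sessions
  FROM pg_stat_activity
  GROUP BY state
  ORDER BY state;

Running that periodically alongside the OS memory numbers should show whether
the active-session count really accounts for the allocated memory.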