Hi

On Thu, Jun 1, 2023 at 8:53 AM Laurenz Albe <laurenz.a...@cybertec.at>
wrote:

> On Thu, 2023-06-01 at 03:36 +0000, James Pang (chaolpan) wrote:
> > On PG v14.8-1, with clients using the PostgreSQL JDBC driver, we found
> > 40MB of process memory per backend; the operating system and the
> > memory context dump's "Grand total:" both matched. But looking at the
> > details, almost all entries belong to "CacheMemoryContext". Its own
> > line reads
> > CacheMemoryContext: 8737352 total in 42 blocks; 1021944 free (215 chunks); 7715408 used
> > but there are thousands of lines for its children, and the sum of
> > their blocks is much more than the "8737352 total in 42 blocks".
> >
> > Our application uses the PostgreSQL JDBC driver with default
> > parameters (max prepared statements 256). There are many triggers and
> > functions in this database, and a few functions run SQL through the
> > pg_background extension. We have thousands of connections, and our big
> > concern is why there are thousands of entries of cached SQL; that
> > consumes huge amounts of memory. Is there any way to limit the cached
> > plan entries to reduce memory consumption? Or does it look like
> > abnormal behavior, or a bug, to see so many cached plan lines?
>
> If you have thousands of connections, that's your problem.  You need
> effective connection pooling.  Then 40MB per backend won't be a problem
> at all.  Having thousands of connections will cause other, worse,
> problems for you.
>
> See for example
> https://www.cybertec-postgresql.com/en/tuning-max_connections-in-postgresql/
>
> If you want to use functions, but don't want to benefit from plan
> caching, you can set the configuration parameter "plan_cache_mode" to
> "force_custom_plan".
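
For illustration, force_custom_plan can be set per session right after
connecting. A minimal JDBC sketch (the URL and credentials are
placeholders):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class ForceCustomPlan {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost:5432/mydb", "user", "secret");
                 Statement st = conn.createStatement()) {
                // Disable caching of generic plans for this session only;
                // the parameter can also be set per role or per database.
                st.execute("SET plan_cache_mode = force_custom_plan");
                // ... application work ...
            }
        }
    }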
>

The problem of overly large cached metadata can also be caused by overly
long sessions.

In that case it is good to discard the session (and reconnect) after one
hour, or maybe less; a connection pool can do this automatically, as
sketched below.
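
If the application uses a connection pool such as HikariCP (an assumption;
the thread only mentions the JDBC driver), this recycling can be automated
with the pool's maxLifetime setting:

    import com.zaxxer.hikari.HikariConfig;
    import com.zaxxer.hikari.HikariDataSource;
    import java.util.concurrent.TimeUnit;

    public class PooledDataSource {
        public static HikariDataSource create() {
            HikariConfig cfg = new HikariConfig();
            // Placeholder URL and credentials.
            cfg.setJdbcUrl("jdbc:postgresql://localhost:5432/mydb");
            cfg.setUsername("user");
            cfg.setPassword("secret");
            // A small pool instead of thousands of backends.
            cfg.setMaximumPoolSize(20);
            // Retire each connection after one hour so the backend's
            // accumulated cache memory is released.
            cfg.setMaxLifetime(TimeUnit.HOURS.toMillis(1));
            return new HikariDataSource(cfg);
        }
    }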

Regards

Pavel


>
> Yours,
> Laurenz Albe
