We have a customer project where Postgres is using too many file handles during 
peak times (around 150,000).
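
For context, a rough way to arrive at such a number on Linux is to sum the 
open descriptors of all the cluster's processes via /proc, something like:

    # sum open file descriptors across the cluster's processes
    # (run as root or the postgres user; assumes all of the
    # cluster's processes show up with the name "postgres")
    for pid in $(pgrep -x postgres); do
        ls /proc/$pid/fd | wc -l
    done | awk '{sum += $1} END {print sum}'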

Apart from re-configuring the operating system (CentOS), this could also be 
mitigated by lowering max_files_per_process.
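
For what it's worth, the change itself would be a single postgresql.conf 
line (the parameter can only be set at server start):

    # postgresql.conf -- the default is 1000
    max_files_per_process = 500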

I wonder what performance implications lowering it would have on a server with 
around 50-100 active connections (through pgBouncer).

One of the reasons (we think) that Postgres needs so many file handles is that 
the schema is quite large (in terms of tables and indexes) and the sessions 
touch many tables during their lifetime.
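
As a rough gauge of the schema size, something like the following counts the 
relations involved; note that every table or index is at least one file on 
disk, large relations are split into 1GB segment files, and tables carry 
additional free-space-map and visibility-map forks:

    # count ordinary tables ('r') and indexes ('i')
    psql -c "SELECT relkind, count(*) FROM pg_class
             WHERE relkind IN ('r', 'i') GROUP BY relkind"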

My understanding of the documentation is that Postgres will work just fine if 
we lower the limit: it simply releases cached file handles once the limit is 
reached. But I have no idea how expensive opening a file handle is on Linux.
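
To get a feel for that cost, a crude micro-benchmark could look like the 
following (shell loop overhead is included, so it only bounds the order of 
magnitude; /tmp/fdtest is just a scratch file):

    # time 100,000 open/close pairs on an already-cached file;
    # ': < file' makes bash open and close it for the redirection
    touch /tmp/fdtest
    time bash -c 'for ((i = 0; i < 100000; i++)); do : < /tmp/fdtest; done'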

So assuming the sessions (and thus the queries) actually do need that many file 
handles, what kind of performance impact (if any) should we expect from 
lowering that value for Postgres to, e.g., 500? (With 50-100 backends, that 
would cap the server at roughly 25,000-50,000 handles in total.)

Regards
Thomas
