Jelte Fennema-Nio <postg...@jeltef.nl> writes:
> The default open file limit of 1024 is extremely low, given modern
> resources and kernel architectures. The reason that this hasn't
> changed is that doing so would break legacy programs that use the
> select(2) system call in hard-to-debug ways. So instead, programs that
> want to opt in to a higher open file limit are expected to bump their
> soft limit to their hard limit on startup. Details on this are very
> well explained in a blog post by the systemd author[1].
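
For concreteness, the opt-in dance being described boils down to
something like the following (a minimal standalone sketch, not the
patch under discussion):

#include <stdio.h>
#include <sys/resource.h>

/*
 * Raise the soft RLIMIT_NOFILE limit to whatever the hard limit
 * allows; this is the "bump soft to hard on startup" idiom.  A real
 * implementation would also need to cope with platforms where the
 * hard limit is reported as RLIM_INFINITY.
 */
static void
raise_open_file_limit(void)
{
    struct rlimit rl;

    if (getrlimit(RLIMIT_NOFILE, &rl) != 0)
    {
        perror("getrlimit");
        return;
    }
    rl.rlim_cur = rl.rlim_max;          /* soft = hard */
    if (setrlimit(RLIMIT_NOFILE, &rl) != 0)
        perror("setrlimit");
}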

On a handy Linux machine (running RHEL9):

$ ulimit -n
1024
$ ulimit -n -H
524288

I'm okay with believing that 1024 is unreasonably small, but that
doesn't mean I think half a million is a safe value.  (Remember that
that's *per backend*.)  Postgres has run OSes out of FDs in the past,
and I see no reason to believe it couldn't do so again.
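
(For a rough sense of scale, with completely made-up numbers: a server
running 500 backends, each entitled to open 524288 files, has a
theoretical ceiling of 500 * 524288 = 262,144,000 descriptors on a
single host.)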

Also, the argument you cite is completely recent-Linux-centric
and does not consider the likely effects on other platforms.
To take one example, on current macOS:

$ ulimit -n
4864
$ ulimit -n -H
unlimited

(Hm, so Apple wasn't impressed by the "let's not break select(2)"
argument.  But I digress.)

I'm afraid this patch will replace "you need to tune ulimit -n
to get best performance" with "you need to tune ulimit -n to
avoid crashing your system".  Does not sound like an improvement.

Maybe a sanity limit on how high we'll try to raise the ulimit
would help.
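
Something along those lines could be as simple as clamping the target
before calling setrlimit(2).  A sketch only: the 64k cap here is an
arbitrary number picked for illustration, not a proposal.

#include <limits.h>             /* OPEN_MAX, where it exists */
#include <stdio.h>
#include <sys/resource.h>

#define NOFILE_SANITY_CAP 65536 /* arbitrary illustrative cap */

static void
raise_open_file_limit_capped(void)
{
    struct rlimit rl;
    rlim_t      target;

    if (getrlimit(RLIMIT_NOFILE, &rl) != 0)
    {
        perror("getrlimit");
        return;
    }
    target = rl.rlim_max;
    if (target == RLIM_INFINITY || target > NOFILE_SANITY_CAP)
        target = NOFILE_SANITY_CAP;
#ifdef OPEN_MAX
    /* macOS, for one, rejects values above OPEN_MAX even though the
     * hard limit is reported as unlimited */
    if (target > OPEN_MAX)
        target = OPEN_MAX;
#endif
    if (target > rl.rlim_cur)
    {
        rl.rlim_cur = target;
        if (setrlimit(RLIMIT_NOFILE, &rl) != 0)
            perror("setrlimit");
    }
}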

                        regards, tom lane

