On Sat, Jan 04, 2020 at 06:56:48AM +0530, Amit Kapila wrote:
> On Sat, Jan 4, 2020 at 6:19 AM Tom Lane <t...@sss.pgh.pa.us> wrote:
> > Mikael Kjellström <mikael.kjellst...@mksoft.nu> writes:
> > > I tried starting it from cron and then I got:
> > > max_safe_fds = 981, usable_fds = 1000, already_open = 9
> >
> > Oh! There we have it then.
>
> Right.
>
> > I wonder if that's a cron bug (neglecting
> > to close its own FDs before forking children) or intentional (maybe
> > it uses those FDs to keep tabs on the children?).
>
> So, where do we go from here? Shall we try to identify why cron is
> keeping extra FDs or we assume that we can't predict how many
> pre-opened files there will be?
The latter. If it helps, you could add a regress.c function
leak_fd_until_max_fd_is(integer) so the main part of the test starts from a
known FD consumption state.

> In the latter case, we either want to
> (a) tweak the test to raise the value of max_files_per_process, (b)
> remove the test entirely.

I generally favor keeping the test, but feel free to decide it's too hard.
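
For concreteness, here is a rough sketch of what such a regress.c helper
could look like, using the usual V1 fmgr conventions that file already
follows. The name leak_fd_until_max_fd_is(), the dup(0)-based approach, and
the exact meaning of the argument are just one possible reading of the idea,
not a settled design:

#include "postgres.h"

#include <unistd.h>

#include "fmgr.h"

/*
 * Leak file descriptors by dup()'ing fd 0 until the highest-numbered FD in
 * this backend reaches the requested value, so that the rest of the test
 * starts from a known FD consumption state.  Assumes the target is not
 * below the highest FD already open.
 */
PG_FUNCTION_INFO_V1(leak_fd_until_max_fd_is);

Datum
leak_fd_until_max_fd_is(PG_FUNCTION_ARGS)
{
	int32		target = PG_GETARG_INT32(0);

	for (;;)
	{
		int			fd = dup(0);

		if (fd < 0)
			ereport(ERROR,
					(errcode_for_file_access(),
					 errmsg("could not duplicate file descriptor: %m")));
		if (fd >= target)
		{
			/* overshot: something above the target was already open */
			if (fd > target)
				close(fd);
			break;
		}
		/* fd < target: leave it open, i.e. deliberately leaked */
	}

	PG_RETURN_VOID();
}

The test would presumably also need a matching CREATE FUNCTION ... LANGUAGE C
declaration pointing at the regress module, and would call the function before
the part that depends on the FD count.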