On Wed, Sep 6, 2017 at 7:54 AM, Jonny Grant <j...@jguk.org> wrote:
> On 15/08/17 12:45, Dmitry V. Levin wrote:
>> On Tue, Aug 15, 2017 at 08:19:13AM +0100, Jonny Grant wrote:
>>> On 15/08/17 00:50, Paul Eggert wrote:
>>>> Jonny Grant wrote:
>>>>> do you know which kernel API has this limitation?
>>>>
>>>> All kernels have a limitation there to some extent, except perhaps
>>>> the Hurd. Sorry, I don't know what the limits are.
>>>
>>> OK, thank you.
>>>
>>> I imagine kernels just need a dynamic API, so it doesn't need to be
>>> a fixed buffer.
>>
>> It's a security limit rather than a fixed buffer; see e.g.
>> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=da029c11e6b12f321f36dac8771e833b65cec962
>
> Thank you for your reply.
>
> My Ubuntu 16.04 limit seems to be 2MB:
> $ getconf ARG_MAX
> 2097152
>
> This laptop has 16GB of RAM, so it is a shame the limit isn't much
> bigger, or dynamic so it can be expanded when needed. Those mapped
> pages of RAM wouldn't be wasted, as they are just virtual memory,
> right?
>
> I imagine a lot of people have 60,000 files in a directory these
> days, as I do. I read that the latest Linux kernel just added support
> for billions of files per directory.
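For what it's worth, a program that builds large argument lists doesn't
have to hard-code the limit; POSIX exposes it through sysconf(3). A
minimal sketch (my own example, not from the thread above):

    #include <errno.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* Query the exec argument-list limit at runtime. On Linux this
         * tracks the stack rlimit (roughly a quarter of it, plus the
         * cap added by the commit Dmitry linked), so the value can
         * differ between sessions. */
        errno = 0;
        long arg_max = sysconf(_SC_ARG_MAX);
        if (arg_max == -1 && errno != 0)
            perror("sysconf");
        else if (arg_max == -1)
            puts("ARG_MAX is indeterminate on this system");
        else
            printf("ARG_MAX = %ld bytes\n", arg_max);
        return 0;
    }

For the 60,000-files case, the usual workaround is to let xargs split
the list across several execve() calls (e.g. find . -maxdepth 1 -print0
| xargs -0 ...), or to stick to shell built-ins, which don't go through
execve() and so aren't subject to ARG_MAX at all.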
If this is a strict requirement, you could switch to the Hurd. I checked
with a Hurd developer(?) some time ago, and one of their design
philosophies is to have no artificial limits. Sadly, I wasn't able to
find an actual citation for this behavior.

R0b0t1