Hi Lukas,

On Thu, Jul 11, 2024 at 12:17:53PM +0200, Lukas Tribus wrote:
> Hi,
>
> I will get back to this for further research and discussion in about a
> week.
OK! In the meantime I'll revert the pending patches from 3.0 so that we
can issue 3.0.3 without them.

> In the meantime, do we agree that the environment we are developing the
> fix for is the following:
>
> the hard limit is always set to the maximum available in the kernel,
> which on amd64 is one billion with a B, whether the system has 128M or
> 2T of memory is irrelevant.
>
> You agree that this is the environment systemd sets us up with, right?

I'm having a doubt by not being certain I'm parsing the question
correctly :-)

Just to rephrase the goal here, it's to make sure that when the service
is started with extreme (and absurd) limits, we don't use all of what is
offered but only a smaller subset that matches what was commonly
encountered till now. So if we start with 1B FDs regardless of the
amount of RAM, we want to limit ourselves to a lower value so as not to
OOM for no reason.

One FD takes 64 bytes. Starting haproxy with 1M FDs here takes ~80 MB,
which I consider not that extreme by today's sizes, and which in
practice few people have reported problems with. Of course, establishing
connections on all of these will use way more memory, but that already
depends on the traffic pattern and configuration (SSL front/back, etc).

My take on this limit is that most users should not care. Those dealing
with high loads have to do their homework and are used to doing this,
and those deploying in extremely small environments are also used to
adjusting limits (even sometimes rebuilding with specific options), and
I'm fine with leaving a bit of work for both extremities.

Willy
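
P.S.: for illustration only, here is a rough standalone sketch of the
clamping idea described above, i.e. read the inherited hard limit and
plan for at most a fixed ceiling rather than for the full 1B. This is
not haproxy's actual code; the 1M cap is just an example value and the
64 bytes/FD figure is the one mentioned above.

/* sketch: clamp the usable FD count to a sane ceiling instead of
 * sizing internal tables for an absurdly large inherited hard limit.
 */
#include <stdio.h>
#include <sys/resource.h>

#define DFLT_MAXFD_CAP 1000000ULL  /* example ceiling: ~1M FDs, ~64 MB of fdtab */

int main(void)
{
    struct rlimit lim;
    unsigned long long hard, usable;

    if (getrlimit(RLIMIT_NOFILE, &lim) != 0) {
        perror("getrlimit");
        return 1;
    }

    hard = lim.rlim_max;
    usable = (hard > DFLT_MAXFD_CAP) ? DFLT_MAXFD_CAP : hard;

    printf("hard limit: %llu FDs, planning for %llu FDs (~%llu MB of fdtab)\n",
           hard, usable, usable * 64 / (1024 * 1024));
    return 0;
}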