On Fri, Aug 21, 2015 at 6:14 PM, Daryl King <allnatives.onl...@gmail.com>
wrote:

> Thanks Ryan. Strangely when running "ulimit -n" it returns 65536 in a ssh
> session, but 1024 in webmin? Which one would be correct?
>

Limits set by the ulimit command (and the setrlimit() syscall) are correct if
they are high enough to let a correctly functioning program do its job. They
are incorrect if they are too low for the needs of a correctly functioning
program, or so high that a malfunctioning program can adversely affect other
processes. So the answer to your question is: it depends.

Having said that, it is very unusual these days for "ulimit -n" to be set
too high. Supporting thousands of open files in a single process is
normally pretty cheap in terms of kernel memory, CPU cycles, etc. So if you
have reason to think your program (e.g., httpd) legitimately needs more
than 1024 files open simultaneously, go ahead and increase "ulimit -n"
(which is the setrlimit() RLIMIT_NOFILE parameter) to a higher value.
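
To make that concrete, here is a minimal C sketch of what "ulimit -n"
controls, i.e., the getrlimit()/setrlimit() RLIMIT_NOFILE pair. It only
raises the soft limit up to the existing hard limit; the specific numbers
you see are whatever your shell or login session inherited:

    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
        struct rlimit rl;

        /* Read the current open-file limits for this process. */
        if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
            perror("getrlimit");
            return 1;
        }
        printf("soft limit: %llu, hard limit: %llu\n",
               (unsigned long long)rl.rlim_cur,
               (unsigned long long)rl.rlim_max);

        /* Raise the soft limit to the hard limit. Raising the hard
         * limit itself normally requires root (CAP_SYS_RESOURCE). */
        rl.rlim_cur = rl.rlim_max;
        if (setrlimit(RLIMIT_NOFILE, &rl) != 0) {
            perror("setrlimit");
            return 1;
        }
        return 0;
    }

Note that the webmin versus ssh difference you saw is just two sessions
inheriting different limits; each process gets whatever its parent set.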

However, in my experience it is unusual for too low a limit on the number
of open files to result in a segmentation fault, especially in a well
written program like Apache HTTPD. A well written program checks whether
open() (or any syscall that returns a file descriptor) failed and refuses
to use the -1 return value as if it were a valid file descriptor number.
So I would be surprised if increasing that limit resolved the segmentation
fault.
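
For illustration, a minimal sketch of the kind of check I mean (the helper
name open_or_complain is made up, not anything from the HTTPD source):

    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>

    int open_or_complain(const char *path)
    {
        int fd = open(path, O_RDONLY);
        if (fd == -1) {
            /* EMFILE is the errno you get when RLIMIT_NOFILE is
             * exhausted; report it rather than using -1 as a fd. */
            fprintf(stderr, "open(%s) failed: %s\n",
                    path, strerror(errno));
            return -1;
        }
        return fd;
    }

A program that skips that check and passes -1 to read(), mmap(), etc. will
get further errors, but even then it usually fails cleanly rather than
crashing with SIGSEGV.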

-- 
Kurtis Rader
Caretaker of the exceptional canines Junior and Hank
