> I'm not familiar with BSD, but Linux has similar kernel options. The kernel
> options might be *global* flags to set the total upper limit of open file
> descriptors for the entire system, not for a single process.
>
> Also, on Linux "ulimit" doesn't display the fd limit. You have to use
> "ulimit -n".
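(Aside: the same per-process fd limit can also be read from inside the process
itself with the stdlib resource module. A minimal sketch, assuming a POSIX
platform; this soft/hard pair is what new fds are checked against:)

    import resource

    # (soft, hard) RLIMIT_NOFILE as this process sees it -- the limit
    # that new fds, sockets and regular files alike, count against.
    print(resource.getrlimit(resource.RLIMIT_NOFILE))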
This is a dedicated machine doing nothing else .. I'm monitoring global FD
usage (sysctl kern.openfiles) and it's way beyond the configured limit:

$ ulimit -n
200000

> Why do you need more than 32k file descriptors anyway? It's an insanely high

It's not for files: this is a network service .. I tested it with up to 50k
TCP connections .. however at this point, when the service tries to open a
file, it'll bail out. Sockets and files both contribute to open FDs; I need
50k sockets + 100 files.

Thus, this is even more strange: Python (a Twisted service) will happily
accept 50k sockets, but as soon as you open() a file, it'll bail out.

--
http://mail.python.org/mailman/listinfo/python-list
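One thing worth trying before the reactor starts accepting connections: raise
the soft limit toward the hard limit from within the service itself. A minimal
sketch using the stdlib resource module; the 50000 + 100 figure just mirrors
the numbers above, and whether setrlimit succeeds still depends on the hard
limit and the process's privileges:

    import resource

    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)

    # 50k sockets plus a margin for ordinary files -- illustrative numbers.
    wanted = 50000 + 100

    # Sockets and files draw from the same per-process fd pool, so the
    # soft limit has to cover both.  Cap at the hard limit unless it is
    # unlimited.
    if hard == resource.RLIM_INFINITY:
        new_soft = wanted
    else:
        new_soft = min(wanted, hard)

    resource.setrlimit(resource.RLIMIT_NOFILE, (new_soft, hard))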