On 2012/03/28 12:55, Kevin wrote:
> The only issue I seem to be having is a *ton* (tens of thousands) of
> random instances where the logfile repeatedly records 'too many open
> files' errors for several minutes on end.
Haven't seen this myself; I would recommend fstat(1) to start with, and
maybe ktrace(1) if it's not obvious from that. Between the network
sockets to clients and the network or unix domain sockets to
fastcgi/other backends, you will bump into the default file descriptor
limits more quickly than when running apache with mod_php.

> Then, it stops as suddenly as it starts, only to return again in a
> couple of hours.
>
> Curiously, when this happens, relayd is happy as a clam, as are our
> server monitors. Repeated manual checking of the sites shows nothing
> wrong either. Fast, complete page loads, no broken images, nothing.

There are separate file descriptor limits for different things:
openfiles-cur and openfiles-max in login.conf(5), as well as the kernel
limits. "Too many open files" applies specifically to a single process,
not the whole system; see EMFILE vs. ENFILE in errno(2). Note that
relayd in particular raises its own file descriptor limit up to
openfiles-max.
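
For what it's worth, here is a small C sketch of the two per-process
limits involved. It is not relayd's actual code, just an illustration:
getrlimit(2) reads the soft (openfiles-cur) and hard (openfiles-max)
values, and raising the soft limit to the hard limit with setrlimit(2)
is roughly what relayd does at startup. A process that hits the soft
limit is what gets EMFILE.

	/*
	 * Sketch only, not relayd's code: print the per-process soft
	 * (openfiles-cur) and hard (openfiles-max) descriptor limits,
	 * then raise the soft limit up to the hard limit.
	 */
	#include <sys/resource.h>

	#include <err.h>
	#include <stdio.h>

	int
	main(void)
	{
		struct rlimit rl;

		if (getrlimit(RLIMIT_NOFILE, &rl) == -1)
			err(1, "getrlimit");

		printf("soft (openfiles-cur): %llu\n",
		    (unsigned long long)rl.rlim_cur);
		printf("hard (openfiles-max): %llu\n",
		    (unsigned long long)rl.rlim_max);

		/*
		 * Hitting rl.rlim_cur is what produces EMFILE,
		 * i.e. "Too many open files", for this process only.
		 */
		rl.rlim_cur = rl.rlim_max;
		if (setrlimit(RLIMIT_NOFILE, &rl) == -1)
			err(1, "setrlimit");

		return 0;
	}

Bumping the class limits in login.conf(5) (and making sure the daemon
actually starts under that login class) is the usual way to give nginx
more headroom, since unlike relayd it does not raise the limit itself.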