Hi,

1) How many fds does the process actually have open? That is, is the question "why can't we use all of the 4096 fds configured", or is it "where do the 4096 fds used by my process come from"? (A small sketch for checking this from inside the JVM follows below the list.)

2) CLOSE_WAIT means the remote side closed the connection and the local side hasn't closed it yet. What is your remote side with respect to TCP? Is it browsers, a load balancer, or something like that?

3) Are you using keep-alive? (Not implying that's the cause of your problems, but keep-alive makes the connection life cycle much more complicated from the container's point of view.)
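
Re 1), here is a minimal sketch for checking the fd usage from inside the
running JVM. It assumes a Sun/Oracle-style JDK on a Unix system, since
UnixOperatingSystemMXBean lives in com.sun.management; the class name
FdCount is just made up for the example:

import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;
import com.sun.management.UnixOperatingSystemMXBean;

public class FdCount {
    public static void main(String[] args) {
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        if (os instanceof UnixOperatingSystemMXBean) {
            UnixOperatingSystemMXBean unix = (UnixOperatingSystemMXBean) os;
            // Current number of open file descriptors vs. the limit the
            // process actually got (useful to verify the ulimit change
            // really reached the Tomcat JVM).
            System.out.println("open fds: " + unix.getOpenFileDescriptorCount());
            System.out.println("fd limit: " + unix.getMaxFileDescriptorCount());
        } else {
            System.out.println("Not a Unix JVM; cannot read fd counts this way.");
        }
    }
}

Dropping the same two calls into a small JSP or servlet would let you watch
the count climb while the CLOSE_WAIT sockets pile up.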

Regards,
Rainer


Tobias Schulz-Hess wrote:
Hi there,

we use the current Tomcat 6.0 on 2 machines. The hardware is brand new and 
really fast. We get lots of traffic, which is usually handled well by the 
Tomcats, and the load on those machines is between 1 and 6 (when we have lots 
of traffic).
The machines run Debian 4.1/64 as the OS.

However, sometimes (especially if we have lots of traffic) we get the following 
exception:
INFO   | jvm 1    | 2008/01/23 15:28:18 | java.net.SocketException: Too many open files
INFO   | jvm 1    | 2008/01/23 15:28:18 |       at java.net.PlainSocketImpl.socketAccept(Native Method)
INFO   | jvm 1    | 2008/01/23 15:28:18 |       at java.net.PlainSocketImpl.accept(PlainSocketImpl.java:384)
INFO   | jvm 1    | 2008/01/23 15:28:18 |       at java.net.ServerSocket.implAccept(ServerSocket.java:453)
INFO   | jvm 1    | 2008/01/23 15:28:18 |       at java.net.ServerSocket.accept(ServerSocket.java:421)
INFO   | jvm 1    | 2008/01/23 15:28:18 |       at org.apache.tomcat.util.net.DefaultServerSocketFactory.acceptSocket(DefaultServerSocketFactory.java:61)
INFO   | jvm 1    | 2008/01/23 15:28:18 |       at org.apache.tomcat.util.net.JIoEndpoint$Acceptor.run(JIoEndpoint.java:310)
INFO   | jvm 1    | 2008/01/23 15:28:18 |       at java.lang.Thread.run(Thread.java:619)

We have already raised the ulimit from 1024 (the default) to 4096 (which 
should also prove: yes, I have used Google and read almost everything about 
this exception).

We also looked at the open files: about 95% of them are connections from or 
to the Tomcat port 8080. (The other 5% are open JARs, connections to memcached 
and MySQL, and SSL sockets.)

Most of the connections to port 8080 are in the CLOSE_WAIT state.

I have the strong feeling that something (Tomcat, the JVM, whatever) relies on 
the JVM garbage collection to close those open connections. However, under 
heavy load the garbage collection is suspended and then the connections pile 
up. But this is just a guess.
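
For context, the pattern that avoids relying on garbage collection for socket
cleanup is to close explicitly in a finally block. A minimal sketch, assuming
application code that opens outbound sockets itself; the class and method
names are made up:

import java.io.IOException;
import java.io.OutputStream;
import java.net.Socket;

public class BackendCall {

    // Hypothetical helper that writes a payload to some backend service.
    static void send(String host, int port, byte[] payload) throws IOException {
        Socket socket = new Socket(host, port);
        try {
            OutputStream out = socket.getOutputStream();
            out.write(payload);
            out.flush();
        } finally {
            // Without this close() the descriptor stays open until the object
            // is finalized. If the remote end has already closed, the local
            // socket sits in CLOSE_WAIT and still counts against the ulimit.
            socket.close();
        }
    }
}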

How can this problem be solved?

Thank you and kind regards,

Tobias.

-----------------------------------------------------------
Tobias Schulz-Hess
