Tobias,

You probably need to tune some kernel parameters. I had issues with our
application getting "stuck" at some point, so that we needed to restart
everything. And since you said it is a brand-new server, you might still
have the default values set in there.

What does "uname -a" say?

The kernel parameter controlling that changes from one UNIX flavor to
the next; it is generally named NFILES, MAXFILES or NINODE. I usually
tune these parameters for our Progress databases.
On Linux, this can be done dynamically by launching (from the OS
prompt):

 echo "16384" >/proc/sys/fs/file-max

Regards,

Bruno
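
P.S. Note that /proc/sys/fs/file-max is only the system-wide ceiling;
the per-process limit is still the one you raised with ulimit. To make
that stick across logins, the usual place is /etc/security/limits.conf
(a sketch, assuming Tomcat runs as a user named "tomcat" and pam_limits
is enabled):

 tomcat  soft  nofile  4096
 tomcat  hard  nofile  8192

And to watch the CLOSE_WAIT sockets you mention pile up, something
like:

 # count sockets per TCP state on port 8080
 netstat -tan | grep ':8080' | awk '{print $6}' | sort | uniq -c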

On Jan 24, 2008 10:26 PM, Tobias Schulz-Hess
<[EMAIL PROTECTED]> wrote:
> Hi there,
>
> we use the current Tomcat 6.0 on 2 machines. The hardware is brand new and
> really fast. We get lots of traffic, which is usually handled well by the
> Tomcats, and the load on those machines is between 1 and 6 (when we have
> lots of traffic).
> The machines have Debian 4.1/64 as OS.
>
> However, sometimes (especially if we have lots of traffic) we get the 
> following exception:
> INFO   | jvm 1    | 2008/01/23 15:28:18 | java.net.SocketException: Too many open files
> INFO   | jvm 1    | 2008/01/23 15:28:18 |       at java.net.PlainSocketImpl.socketAccept(Native Method)
> INFO   | jvm 1    | 2008/01/23 15:28:18 |       at java.net.PlainSocketImpl.accept(PlainSocketImpl.java:384)
> INFO   | jvm 1    | 2008/01/23 15:28:18 |       at java.net.ServerSocket.implAccept(ServerSocket.java:453)
> INFO   | jvm 1    | 2008/01/23 15:28:18 |       at java.net.ServerSocket.accept(ServerSocket.java:421)
> INFO   | jvm 1    | 2008/01/23 15:28:18 |       at org.apache.tomcat.util.net.DefaultServerSocketFactory.acceptSocket(DefaultServerSocketFactory.java:61)
> INFO   | jvm 1    | 2008/01/23 15:28:18 |       at org.apache.tomcat.util.net.JIoEndpoint$Acceptor.run(JIoEndpoint.java:310)
> INFO   | jvm 1    | 2008/01/23 15:28:18 |       at java.lang.Thread.run(Thread.java:619)
>
> We have already raised the ulimit from 1024 (default) to 4096 (thereby
> proving: yes, I have used Google and read almost everything about that
> exception).
>
> We also looked into the open files, and 95% of them are from or to the
> Tomcat port 8080. (The other 5% are open JARs and connections to memcached,
> MySQL and an SSL socket.)
>
> Most of the connections to port 8080 are in the CLOSE_WAIT state.
>
> I have the strong feeling that something (Tomcat, the JVM, whatever) relies
> on the JVM garbage collection to close those open connections. However, when
> we have heavy load, the garbage collection falls behind and the connections
> pile up. But this is just a guess.
>
> How can this problem be solved?
>
> Thank you and kind regards,
>
> Tobias.
>
> -----------------------------------------------------------
> Tobias Schulz-Hess
>
> ICS - Internet Consumer Services GmbH
> Mittelweg 162
> 20148 Hamburg
>
> Tel:    +49 (0) 40 238 49 141
> Fax:    +49 (0) 40 415 457 14
> E-Mail: [EMAIL PROTECTED]
> Web:    www.internetconsumerservices.com
>
> Projects
> www.dealjaeger.de
> www.verwandt.de
>
> ICS Internet Consumer Services GmbH
> Managing directors: Dipl.-Kfm. Daniel Grözinger, Dipl.-Kfm. Sven Schmidt
> Commercial register: Amtsgericht Hamburg HRB 95149
