Andreas,
I believe that what several people here are trying to tell you is that
you are apparently starting a massive number (*) of threads, on a 32-bit
machine where the JVM cannot address more than 2 GB or so anyway.
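To put a rough, purely illustrative number on that: every Java thread needs
its own native stack outside the heap, commonly somewhere in the region of
320 KB to 1 MB on a 32-bit Windows JVM unless you override it. With the
maxima you quote further down, that is roughly

  3450 threads x ~320 KB per stack  ~=  1.1 GB of address space

on top of the heap and permgen you have already reserved, which leaves very
little of the ~2 GB for anything else. That is exactly the situation in which
"unable to create new native thread" tends to appear.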
In addition, you have a problem with a webapp leaving connections
"half-open", thus hanging on to more RAM than it should.
And you are also using a rather large value for keepAliveTimeout, which
may explain why a lot of threads are hanging around doing nothing, yet
are unable to handle new requests.
I believe that you may be wrongly focused on increasing the number of
threads or the memory available to them, rather than trying to see why
you need so many in the first place.
So maybe you are trying to cure the symptom, rather than the cause.
The basic question is: how long does it take, on average, to process
one request? And how many separate requests are you receiving in the
same period of time?
That will tell you, roughly, how many threads you really need to process
these requests. Do you really need 3000?
And if you do, then you probably need a bigger boat.
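Just as an illustration (made-up numbers, not yours): if an average request
takes 200 ms to process and you receive 100 requests per second, then on
average only about

  100 requests/s x 0.2 s  =  20 requests

are in flight at any given moment, so a pool of a few dozen threads would
already be comfortable. You would need sustained concurrency in the
thousands before 3000 threads start to make sense.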
connectionTimeout
The number of milliseconds this Connector will wait, after accepting a
connection, for the request URI line to be presented. The default value
is infinite (i.e. no timeout).
keepAliveTimeout
The number of milliseconds this Connector will wait for another AJP
request before closing the connection. The default value is to use the
value that has been set for the connectionTimeout attribute.
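Both of these are set on the AJP <Connector> element in server.xml. Purely
as a sketch, with illustrative values (pick timeouts that match your real
traffic, not these):

  <Connector port="8009" protocol="AJP/1.3"
             maxThreads="200"
             connectionTimeout="600000"
             keepAliveTimeout="60000" />

With keepAliveTimeout set explicitly (in milliseconds), an idle AJP
connection is closed after that interval instead of keeping a worker thread
tied up indefinitely.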
(*) (by which they mean "probably much too many")
andreas müller wrote:
-------- Original Message --------
Date: Wed, 19 May 2010 13:42:28 +0100
From: Peter Crowther <peter.crowt...@melandra.com>
To: Tomcat Users List <users@tomcat.apache.org>
Subject: Re: Tomcat 6.0.20 "unable to create new native thread"
On 19 May 2010 13:26, <tom...@habmalnefrage.de> wrote:
java.lang.OutOfMemoryError: unable to create new native thread
First: thank you all for your fast responses.
OK, so one possibility is that the Windows thread table is full.
Is there any way to calculate the max thread count our system can handle?
Or is there any way to expand the thread table so that Windows can handle
more threads?
- maxThreads for HTTP: 450
- maxThreads for jk: 3000
That's a huge number of threads for one process.
- maxThreads for HTTP: 800
- maxThreads for jk: 450
That's merely massive.
- all connections shown by netstat -an (not filtered): 4595
- connections in state CLOSE_WAIT: 3152
That has nothing to do with threads.
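(If you want to keep an eye on that count without counting lines by hand,
something along these lines in a Windows command prompt should do it:

  netstat -an | find /c "CLOSE_WAIT"

find /c simply counts the matching lines. It tells you about sockets,
though, not about threads.)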
On the other hand: Shouldn't Windows start to swap if the RAM is full?
Yes. But that's not the error you're getting.
In which memory area does Windows handle the memory that is used for
the threads? Is it shown in the Task Manager?
Task manager, Processes tab, View, Select Columns..., tick "Thread Count".
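If you prefer a command line, something like this in PowerShell should show
the same figure, assuming the Tomcat JVM runs as tomcat6.exe (adjust the
process name to whatever yours is called):

  (Get-Process tomcat6).Threads.Count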
Can the OS take the memory which is still unused by the JVM
(memMax-memTotal) for handling threads, or is it reserved for the JVM after
starting Tomcat?
You are reserving heap and permgen memory when the JVM starts. Thread
memory, and kernel resources for threads, are outside of this total.
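If address space for thread stacks turns out to be the limiting factor, the
per-thread native stack size can be lowered with the JVM's -Xss option, for
example (only a sketch; the variable name depends on how you start Tomcat):

  set CATALINA_OPTS=%CATALINA_OPTS% -Xss256k

Make it too small and you trade this error for StackOverflowErrors in deep
call chains, so test before rolling it out.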
OK, I will analyse the max memory usage of both Tomcats and will decrease
the JVM max memory a bit. Hopefully the OS then has more memory for thread
handling.
Due to problems with one of our webapps, which sometimes does not close
the threads completely (they get stuck in CLOSE_WAIT state), we increased
the max threads of Windows:
http://publib.boulder.ibm.com/infocenter/pvcvoice/51x/index.jsp?topic=/com.ibm.websphere.wvs.doc/wvs/tun_conwin.html
That has nothing to do with threads (and CLOSE_WAIT is a feature of
TCP, not of threads). That link contains no information about
increasing the maximum number of threads in Windows. It links to
TCP/IP tuning. Wrong link, or wrong assumption?
maxUserPorts has been set to about 30k, if I remember correctly.
That has nothing to do with threads.
Sorry, I meant we increased the max count of available ports, not threads.
But I thought every connection from Apache to Tomcat results in a thread
within Tomcat (jk connector). I don't know yet whether the threads of Tomcat
itself and of the connectors are still alive when the connection is in
CLOSE_WAIT, or whether the threads are destroyed.
Does anyone have an idea how to get rid of the exception?
*Decrease* the number of threads in use in Tomcat 1. If you set the
HTTP and JK maxima to the same as Tomcat 2 (which is still very
large), what happens? Why?
It should be safe to decrease the maxThreads for HTTP (maybe to 10 or so,
because it is only used for administration and monitoring).
After the last downtime all 800 jk threads were in use, that's why I
increased it. But I could try to go down to 1200 or 1500. I can also try to
decrease the max threads of Tomcat 2.
Question about the maxThreads defined for the Tomcat connectors:
Is the virtual memory for all defined threads allocated when starting Tomcat?
The Task Manager shows 514 threads for Tomcat 1, while in total 3450 threads
for connections are defined as the maximum in its configuration.
But I will change the config today as described above. Within the next days
we will see whether the problem is solved.
Many thanks again,
Andreas