DO NOT REPLY TO THIS EMAIL, BUT PLEASE POST YOUR BUG 
RELATED COMMENTS THROUGH THE WEB INTERFACE AVAILABLE AT
<http://nagoya.apache.org/bugzilla/show_bug.cgi?id=5735>.
ANY REPLY MADE TO THIS MESSAGE WILL NOT BE COLLECTED AND 
INSERTED IN THE BUG DATABASE.

http://nagoya.apache.org/bugzilla/show_bug.cgi?id=5735

HTTP connector running out of processors under heavy load

------- Additional Comments From [EMAIL PROTECTED]  2002-02-17 03:50 -------
When trying to put a new site into production today I ran into a similar
problem at least four times using mod_jk and Ajp13.  Each time I had to shut
down and restart Tomcat.

Everything would work fine for several hours.  Tomcat would end up with ~50
Ajp13Processors (maxProcessors was 75).  Then something would change.  All of
a sudden Tomcat would start creating additional Ajp13 processors one after
another, hitting the max of 75 within a few minutes, and then it would start
rejecting connections.  Given the repeatable nature of this, and after
reviewing the logs, this does not appear to be due to a sudden increase in
traffic to the site.
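
For reference, the processor pool in question is owned by the AJP 1.3
connector on the Tomcat side.  A minimal sketch of the relevant server.xml
entry, assuming the stock Tomcat 4 Ajp13Connector and the maxProcessors
value mentioned above (the other attribute values are just illustrative
defaults, not our actual settings):

    <!-- AJP 1.3 connector in conf/server.xml; each concurrent mod_jk
         connection is handled by one Ajp13Processor, up to maxProcessors -->
    <Connector className="org.apache.ajp.tomcat4.Ajp13Connector"
               port="8009"
               minProcessors="5"
               maxProcessors="75"
               acceptCount="10"
               debug="0"/>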


The Apache server has the following config:

Solaris 7, Apache 1.3.22, mod_jk built from cvs ~ 1 week ago.

Tomcat is running on a different server.

Solaris 8, Tomcat 4.1-dev built from cvs ~ 3-4 weeks ago, jk and ajp jars
built from cvs ~ 1 week ago, running on the Java(TM) 2 Runtime Environment,
Standard Edition (build 1.3.1_02-b02).
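
For completeness, the mod_jk side of a setup like this is typically wired up
with a workers.properties entry pointing at the remote Tomcat box plus a few
Jk* directives in httpd.conf.  A rough sketch, with the host name, paths, and
mount point made up purely for illustration:

    # workers.properties -- one ajp13 worker pointing at the Tomcat box
    # (the host name here is hypothetical)
    worker.list=ajp13
    worker.ajp13.type=ajp13
    worker.ajp13.host=tomcat.example.com
    worker.ajp13.port=8009

    # httpd.conf -- load the worker definitions and mount a context
    # (the /app/* mount point is likewise just an example)
    JkWorkersFile /usr/local/apache/conf/workers.properties
    JkLogFile     /usr/local/apache/logs/mod_jk.log
    JkLogLevel    info
    JkMount       /app/* ajp13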

I reviewed all the logs, including the Apache mod_jk.log, and could not
find anything obvious that might have triggered this behaviour.

FYI, we have another site running with an identical Apache/Tomcat config,
but with much lower volume.  It has been running fine for over a week.
