Hi everyone,
I'm having some issues with an Apache + Tomcat setup behaving strangely.

Our Tomcat (6.0.16) servers have many AJP threads that are stuck
executing the native sendbb method of the class
org.apache.tomcat.jni.Socket.
[You can find an example stack trace at the end of this message.]

Apparently, those threads have already finished processing the
original requests (which may have taken a while), and they are now
trying to flush the output buffer.
This happens frequently when our servers are under heavy load: Tomcat
keeps executing sendbb for a long time, and we usually find many
threads stuck in that state.

Every time this happens, the Tomcat process starts consuming a lot of
CPU time, and it keeps doing so for a few hours, unless it crashes
first.

The traffic is routed to the Tomcat servers (belonging to a cluster of
4 nodes) by two Apache web servers (2.0.52) with mod_jk (1.2.19).
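
For completeness, the load-balancer side of our workers.properties
looks roughly like this (written from memory, so the lb worker name
and exact values are approximate; only applprod01 is shown in full
further below):

# Load balancer worker (approximate, from memory)
worker.list=loadbalancer
worker.loadbalancer.type=lb
worker.loadbalancer.balance_workers=applprod01,applprod02,applprod03,applprod04
worker.loadbalancer.sticky_session=true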

We're having a hard time figuring out the cause of this behavior: it
may depend on the interaction between mod_jk and Tomcat, but I
couldn't find a definitive explanation by looking at the
documentation and this list's archives.
Any advice would be welcome :)

Below you'll find the mod_jk worker properties (note the timeouts:
are they too low? A few tentative alternatives are sketched right
after the properties) and a sample stack trace for one of the stuck
threads.

Thank you all.

Regards,
Alessandro Bahgat

*************************************************************

Our configuration is:
OS: Red Hat Enterprise Linux AS release 4 (Nahant Update 4)
JVM: Sun 1.6.0_10 32 bit
Apache: 2.0.52
mod_jk: 1.2.19
Tomcat: 6.0.16
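
For reference, the AJP connector in server.xml is declared more or
less as follows (attribute values here are approximate, quoted from
memory; we use the APR/native connector, as the AjpAprProcessor in
the stack trace shows):

<!-- AJP connector (approximate; actual attribute values may differ) -->
<Connector port="8009" protocol="AJP/1.3"
           maxThreads="400"
           connectionTimeout="600000" />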

*************************************************************

mod_jk properties:

# Define Node1 (applprod01)
worker.applprod01.port=8009
worker.applprod01.host=###.###.###.###
worker.applprod01.type=ajp13
worker.applprod01.lbfactor=1
worker.applprod01.connection_pool_size=1
worker.applprod01.socket_keepalive=true
worker.applprod01.socket_timeout=5
worker.applprod01.connection_pool_timeout=5
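
In case it helps the discussion, this is roughly what we were thinking
of trying next; these values are untested guesses on our part, not
something we have validated:

# Tentative changes (untested guesses, for discussion only)
# maybe leave connection_pool_size at its default instead of forcing it to 1?
worker.applprod01.connection_pool_timeout=600
worker.applprod01.connect_timeout=10000
worker.applprod01.prepost_timeout=10000
# drop socket_timeout=5, which looks very aggressive to us

As far as we understand, connection_pool_timeout (in seconds) should
be paired with a matching connectionTimeout (in milliseconds) on the
Tomcat AJP connector, e.g. 600 here and 600000 in server.xml.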

*************************************************************

Sample stack trace for one of the stuck threads:

"ajp-8009-300" - Thread t...@962
java.lang.Thread.State: RUNNABLE
at org.apache.tomcat.jni.Socket.sendbb(Native Method)
at org.apache.coyote.ajp.AjpAprProcessor.flush(AjpAprProcessor.java:1181)
at org.apache.coyote.ajp.AjpAprProcessor$SocketOutputBuffer.doWrite(AjpAprProcessor.java:1268)
at org.apache.coyote.Response.doWrite(Response.java:560)
at org.apache.catalina.connector.OutputBuffer.realWriteBytes(OutputBuffer.java:353)
at org.apache.tomcat.util.buf.ByteChunk.flushBuffer(ByteChunk.java:434)
at org.apache.tomcat.util.buf.ByteChunk.append(ByteChunk.java:349)
at org.apache.tomcat.util.buf.IntermediateOutputStream.write(C2BConverter.java:242)
at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:202)
at sun.nio.cs.StreamEncoder.implWrite(StreamEncoder.java:263)
at sun.nio.cs.StreamEncoder.write(StreamEncoder.java:106)
- locked org.apache.tomcat.util.buf.writeconver...@1d7d7b0
at java.io.OutputStreamWriter.write(OutputStreamWriter.java:190)
at org.apache.tomcat.util.buf.WriteConvertor.write(C2BConverter.java:196)
at org.apache.tomcat.util.buf.C2BConverter.convert(C2BConverter.java:81)
at org.apache.catalina.connector.OutputBuffer.write(OutputBuffer.java:438)
at org.apache.catalina.connector.CoyoteWriter.write(CoyoteWriter.java:143)
at org.apache.jasper.runtime.JspWriterImpl.write(JspWriterImpl.java:277)
at java.io.PrintWriter.write(PrintWriter.java:382)
- locked org.apache.jasper.runtime.jspwriteri...@1919913
at org.apache.jasper.runtime.JspWriterImpl.flushBuffer(JspWriterImpl.java:119)
at org.apache.jasper.runtime.JspWriterImpl.write(JspWriterImpl.java:326)
at org.apache.jasper.runtime.JspWriterImpl.write(JspWriterImpl.java:342)
