On 07/02/2014 17:26, Jesse Barnum wrote:
> On Feb 7, 2014, at 2:38 AM, Mark Thomas <ma...@apache.org> wrote:
>
>> Jesse Barnum <jsb_tom...@360works.com> wrote:
>>
>> Thanks for such a well written question. All the relevant information
>> is available and presented clearly and logically.
>
> Glad I could help. I get error reports from my users all the time like
> "I installed the update and now it doesn't work", so I know how
> frustrating that can be :-)
>
>>> Problem summary:
>>> My nio polling threads are using too much CPU time.
>>
>> Are you sure that is the real problem? It sounds like the occasional
>> slowness is the problem and the polling threads are the suspected -
>> emphasis on the suspected - root cause.
>
> That's true. I've tried running it with a profiler (YourKit) during
> peak time, and I think I've found the real culprit - reading from the
> Coyote InputStream is blocking.
>
> Out of 98,515 milliseconds spent in javax.servlet.http.HttpServlet.service:
> * 95,801 milliseconds are spent calling
>   org.apache.catalina.connector.CoyoteInputStream.read( byte[] ).
>   Of this time, 81,707 ms are spent waiting for KeyAttachment.awaitLatch.
> * 2,698 milliseconds are spent on business logic and MySQL calls
> * 16 milliseconds are spent closing the input stream and setting HTTP
>   headers
>
> The last Tomcat code I see is
> org.apache.tomcat.util.net.NioEndpoint$KeyAttachment.awaitLatch(
> CountDownLatch, long, TimeUnit ). I don't really know what that means
> or does, but it seems to be the holdup.
>
> If you want me to send you the raw profiler information, let me know
> how to send it and I'll be happy to (although I'm running an old
> version of YourKit, 9.5.6). I can also send screen shots if that's
> helpful / easier.
>
> Here is a typical thread dump while the request is being serviced:
>> Thread name: ajp-nio-8009-exec-19 / ID: 55 / state: TIMED_WAITING
>> Milliseconds spent waiting: 47,475,371 / blocked: 2,389
>> sun.misc.Unsafe.$$YJP$$park(Native Method)
>> sun.misc.Unsafe.park(Unsafe.java)
>> java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
>> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1033)
>> java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1326)
>> java.util.concurrent.CountDownLatch.await(CountDownLatch.java:282)
>> org.apache.tomcat.util.net.NioEndpoint$KeyAttachment.awaitLatch(NioEndpoint.java:1571)
>> org.apache.tomcat.util.net.NioEndpoint$KeyAttachment.awaitReadLatch(NioEndpoint.java:1573)
>> org.apache.tomcat.util.net.NioBlockingSelector.read(NioBlockingSelector.java:172)
>> org.apache.tomcat.util.net.NioSelectorPool.read(NioSelectorPool.java:246)
>> org.apache.tomcat.util.net.NioSelectorPool.read(NioSelectorPool.java:227)
>> org.apache.coyote.ajp.AjpNioProcessor.readSocket(AjpNioProcessor.java:342)
>> org.apache.coyote.ajp.AjpNioProcessor.read(AjpNioProcessor.java:314)
>> org.apache.coyote.ajp.AjpNioProcessor.readMessage(AjpNioProcessor.java:406)
>> org.apache.coyote.ajp.AjpNioProcessor.receive(AjpNioProcessor.java:375)
>> org.apache.coyote.ajp.AbstractAjpProcessor.refillReadBuffer(AbstractAjpProcessor.java:616)
>> org.apache.coyote.ajp.AbstractAjpProcessor$SocketInputBuffer.doRead(AbstractAjpProcessor.java:1070)
>> org.apache.coyote.Request.doRead(Request.java:422)
>> org.apache.catalina.connector.InputBuffer.realReadBytes(InputBuffer.java:290)
>> org.apache.tomcat.util.buf.ByteChunk.substract(ByteChunk.java:431)
>> org.apache.catalina.connector.InputBuffer.read(InputBuffer.java:315)
>> org.apache.catalina.connector.CoyoteInputStream.read(CoyoteInputStream.java:167)
>> com.prosc.io.IOUtils.writeInputToOutput(IOUtils.java:49)
>> com.prosc.io.IOUtils.inputStreamAsBytes(IOUtils.java:116)
>> com.prosc.io.IOUtils.inputStreamAsString(IOUtils.java:136)
>> com.prosc.io.IOUtils.inputStreamAsString(IOUtils.java:127)
>> com.prosc.licensecheck.LicenseCheck.doPost(LicenseCheck.java:169)
>> javax.servlet.http.HttpServlet.service(HttpServlet.java:647)
>> javax.servlet.http.HttpServlet.service(HttpServlet.java:728)
>> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:305)
>> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
>> com.prosc.infrastructure.LogFilter.doFilter(LogFilter.java:22)
>> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243)
>> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
>> com.prosc.infrastructure.CharacterEncodingFilter.doFilter(CharacterEncodingFilter.java:38)
>> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243)
>> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
>> org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:222)
>> org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:123)
>> org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:472)
>> org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:171)
>> org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:99)
>> org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118)
>> org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:407)
>> org.apache.coyote.ajp.AjpNioProcessor.process(AjpNioProcessor.java:184)
>> org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:589)
>> org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:1680)
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>> java.lang.Thread.run(Thread.java:722)
>
> So it seems that the worker thread is blocking on something while it's
> trying to read the POST data…? If my guess is correct, is this a
> problem that can be solved by reconfiguring?
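The com.prosc.io.IOUtils frames sitting directly above CoyoteInputStream.read
in the trace are the shape of a standard buffered copy loop. A minimal sketch
of that pattern is below; the method name echoes the trace, but the body and
buffer size are assumptions for illustration, not the actual 360Works code:

    import java.io.IOException;
    import java.io.InputStream;
    import java.io.OutputStream;

    public class IOUtils {
        // Drains the request body into out. Each in.read() call can park
        // the Tomcat worker thread (via NioBlockingSelector and the read
        // latch seen in the trace) until the client sends more bytes.
        public static void writeInputToOutput(InputStream in, OutputStream out)
                throws IOException {
            byte[] buffer = new byte[8192]; // buffer size is a guess
            int count;
            while ((count = in.read(buffer)) != -1) { // blocking read
                out.write(buffer, 0, count);
            }
        }
    }

Every pass through such a loop can leave the worker in the TIMED_WAITING
state shown above while it waits for the next chunk of the POST body.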
Correct. Even the NIO HTTP connector has to use blocking IO to read the
request body and write the response.

> This is a single core box (sorry, should have mentioned that in the
> configuration details). Would you still expect increasing the worker
> thread count to help?

Yes. I'd return it to the default of 200 and let Tomcat manage the pool.
It will increase/decrease the thread pool size as necessary.

Depending on how long some clients take to send the data, you might need
to increase the thread pool beyond 200.

Mark
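For anyone wanting to apply this: the pool Mark refers to is sized by the
maxThreads attribute on the <Connector> element in conf/server.xml, and
Tomcat's documented default is 200 when the attribute is omitted. A minimal
sketch matching the connector in this thread (port and protocol taken from
the ajp-nio-8009 thread name in the dump; treat the exact values as
illustrative, not a copy of the poster's config):

    <!-- conf/server.xml: explicit maxThreads, same as the default -->
    <Connector port="8009"
               protocol="org.apache.coyote.ajp.AjpNioProtocol"
               maxThreads="200" />

Raising maxThreads only trades memory for concurrency; it helps here because
the threads are parked waiting on slow clients rather than burning CPU.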