On 27.08.20 at 19:35, Christopher Schultz wrote:
> David,
>
> On 8/27/20 10:48, David wrote:
> > In the last two weeks I've had two occurrences where a single
> > CentOS 7 production server hosting a public webpage has become
> > unresponsive. The first time, all 300 available
> > "https-jsse-nio-8443" threads were consumed, with the max age
> > being around 45 minutes, and all in an "S" status. This time all
> > 300 were consumed in "S" status, with the oldest being around
> > ~16 minutes. A restart of Tomcat on both occasions freed these
> > threads and the website became responsive again. The requests are
> > POST/GET, which shouldn't take very long at all.
> >
> > CPU/MEM/JVM all appear to be within normal operating limits. I've
> > not had much luck searching for articles on this behavior or
> > finding remedies. The default timeout values are used both in
> > Tomcat and in the applications that run within it, as far as I
> > can tell. Hopefully someone will have some insight into why this
> > behavior could be occurring: why isn't Tomcat killing the
> > connections? Even in an RST/ACK state, shouldn't Tomcat terminate
> > the connection without an ACK from the client after the default
> > timeout?
>
> Can you please post:
>
> 1. Complete Tomcat version
> 2. Connector configuration (possibly redacted)
>
> > Is there a graceful way to script the termination of threads in
> > case Tomcat isn't able to for whatever reason?
>
> Not really.
(First, have a look at Mark's response on determining the root cause.)

Well, there might be a way (whether it is sane, I don't know). You can
configure a valve to look for seemingly stuck threads and try to
interrupt them:

http://tomcat.apache.org/tomcat-9.0-doc/config/valve.html#Stuck_Thread_Detection_Valve

There are a few caveats. The valve only works when both conditions are
true:

* the servlets are synchronous
* the stuck thread can be "freed" with an interrupt

(A rough sketch of the configuration is appended at the end of this
message.)

But really, if your threads are stuck for more than 15 minutes, you
have ample time to take a thread dump and hopefully find the root
cause, so that you don't need this valve.

Felix

> > My research into killing threads turns up system threads or
> > application threads, not Tomcat Connector connection threads, so
> > I'm not sure if this is even viable. I'm also looking into ways
> > to terminate these aged sessions via the F5. At this time I'm
> > open to any suggestions that would be able to automate a
> > resolution to keep the system from experiencing downtime, or for
> > any insight on where to look for a root cause. Thanks in advance
> > for any guidance you can lend.
>
> It might actually be the F5 itself, especially if it opens up a
> large number of connections to Tomcat and then tries to open
> additional ones for some reason. If it opens 300 connections (which
> are then, e.g., leaked by the F5 internally) but the 301st is
> refused, then your server is essentially inert from that point
> forward.
>
> NIO connectors default to a maximum of 10k connections, so that's
> not likely the actual problem here, but it could be for some
> configurations.
>
> Do you have a single F5 or a group of them?
>
> -chris
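Here is a rough sketch of what the Stuck Thread Detection Valve
configuration could look like in server.xml, nested inside a Host
element (it can also go in an Engine or Context). The threshold values
below are illustrative assumptions only, chosen to be well under the
15-45 minute stuck times described above, not recommendations:

    <Host name="localhost" appBase="webapps">

      <!-- Logs a WARN message (including a stack trace) for any
           request thread that has been processing the same request
           for more than 'threshold' seconds. -->
      <!-- 'interruptThreadThreshold' additionally calls
           Thread.interrupt() on threads stuck longer than that many
           seconds; it must be greater than or equal to 'threshold',
           and it is disabled by default. -->
      <Valve className="org.apache.catalina.valves.StuckThreadDetectionValve"
             threshold="600"
             interruptThreadThreshold="900" />

    </Host>

Note that the interrupt only frees threads sitting in interruptible
calls (sleep(), wait(), interruptible NIO channels, and the like); a
thread blocked on plain socket I/O will usually not react to it. And,
as noted above, a thread dump taken while the threads are stuck
(jstack <pid>, or kill -3 <pid>, which writes the dump to
catalina.out) is the better first step for finding the root cause.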