Hello, we've been performing some load tests (using Apache JMeter) and noticed that one HTTP Server child process has become effectively blocked (accepting = no) with 67 async connections stuck in the closing state (see the server-status output below).
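For anyone who wants to watch for the same thing, here is a minimal sketch that polls the mod_status summary counters; it assumes the status page is reachable at http://localhost/server-status?auto (adjust the URL and any auth to your setup):

    # Minimal sketch: poll the machine-readable mod_status output and print the
    # summed worker / async connection counters. Assumes mod_status is enabled
    # and reachable at STATUS_URL below.
    import time
    import urllib.request

    STATUS_URL = "http://localhost/server-status?auto"  # assumed location of the status page

    WANTED = ("BusyWorkers", "IdleWorkers",
              "ConnsAsyncWriting", "ConnsAsyncKeepAlive", "ConnsAsyncClosing")

    def fetch_counters(url=STATUS_URL):
        """Fetch the ?auto status page and return the selected counters."""
        with urllib.request.urlopen(url, timeout=5) as resp:
            text = resp.read().decode("utf-8", errors="replace")
        counters = {}
        for line in text.splitlines():
            key, sep, value = line.partition(":")
            if sep and key.strip() in WANTED:
                counters[key.strip()] = int(value.strip())
        return counters

    if __name__ == "__main__":
        # Print the counters once a minute; a stuck child shows up as a
        # ConnsAsyncClosing value that never drops back to zero.
        while True:
            print(time.strftime("%Y-%m-%d %H:%M:%S"), fetch_counters())
            time.sleep(60)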
It's been like that for several hours already and hasn't cleared yet (I intentionally left it in that state to see whether it would eventually clear). At the OS level there are no TCP connections to this web server visible at all; everything has been fully closed. Does anyone have an idea what may cause this, whether it's an HTTP Server bug or some odd misconfiguration? The OS is CentOS 8, fully updated, with httpd installed from the standard repository.

Server status:

Server Version: Apache/2.4.37 (centos)
Server MPM: event
Server Built: Dec 23 2019 20:45:34
Current Time: Wednesday, 29-Apr-2020 19:24:53 CEST
Restart Time: Wednesday, 29-Apr-2020 01:01:20 CEST
Parent Server Config. Generation: 1
Parent Server MPM Generation: 0
Server uptime: 18 hours 23 minutes 32 seconds
Server load: 0.04 0.26 0.18
1 requests currently being processed, 999 idle workers

Slot  PID    Stopping  Connections       Threads     Async connections
                       total  accepting  busy  idle  writing  keep-alive  closing
0     10301  no        0      yes        0     200   0        0           0
1     10805  no        0      yes        0     200   0        0           0
2     25559  no        67     no         0     200   0        0           67
3     26063  no        1      yes        1     199   0        0           0
4     10806  no        0      yes        0     200   0        0           0
Sum   5      0         68                1     999   0        0           67

Some related config:

MaxKeepAliveRequests     1000
KeepAliveTimeout         5
ThreadLimit              200
ServerLimit              5
StartServers             1
MaxRequestWorkers        1000
MinSpareThreads          40
MaxSpareThreads          240
ThreadsPerChild          200
MaxConnectionsPerChild   0
AsyncRequestWorkerFactor 1

(I had to lower AsyncRequestWorkerFactor from its default of 2 because the server was closing some keep-alive connections prematurely under highly variable workloads, which caused problems for Java clients using Apache HttpClient, which doesn't seem to handle this well.)

Best regards,
Petr
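P.S. For context on the AsyncRequestWorkerFactor change: if I read the event MPM documentation correctly, the per-process connection limit is roughly ThreadsPerChild + (AsyncRequestWorkerFactor * number of idle workers), so with ThreadsPerChild 200 each fully idle child now accepts at most 200 + 1*200 = 400 connections instead of up to 200 + 2*200 = 600 with the default factor of 2. Please correct me if I'm misreading that formula.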