Ayub,
On 11/11/20 16:16, Ayub Khan wrote:
> I was load testing using the EC2 load balancer DNS. I have increased the
> connector timeout to 6000 and also gave 32 gig to the JVM of tomcat. I am
> not seeing connection timeouts in the nginx logs now, no errors in kernel.log,
> and no errors in tomcat's catalina.out.
The timeouts are most likely related to the connection timeout (and
therefore keepalive) setting. If you are proxying connections from nginx
and they should be staying open, you should really never be experiencing
a timeout between nginx and Tomcat.
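For reference, here is a minimal sketch of what a keepalive-friendly setup
looks like on both sides. I'm assuming nginx proxies to a single Tomcat on
localhost:8080, and the timeout values are only illustrative:

    # nginx: keep a pool of idle connections open to Tomcat
    upstream tomcat {
        server 127.0.0.1:8080;
        keepalive 32;                       # idle upstream connections to keep open
    }

    server {
        location / {
            proxy_pass http://tomcat;
            proxy_http_version 1.1;         # HTTP/1.1 is required for upstream keepalive
            proxy_set_header Connection ""; # don't forward "Connection: close"
        }
    }

and on the Tomcat side, make sure the connector doesn't close those idle
connections sooner than nginx expects:

    <Connector port="8080" protocol="HTTP/1.1"
               connectionTimeout="60000"
               keepAliveTimeout="120000"
               maxKeepAliveRequests="-1" />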
> During regular operation, when the request count is between 4k and 6k
> requests per minute, the open-file count for the tomcat process is between
> 200 and 350. Responses from tomcat are within 5 seconds.
Good.
> If the request count goes beyond 6.5k, open files slowly climb to 2300
> to 3000 and the responses from tomcat become slow.
This is pretty important, here. You are measuring two things:
1. Rise in file descriptor count
2. Application slowness
You are assuming that #1 is causing #2. It's entirely possible that #2
is causing #1.
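If you want to put a number on #1 from inside the JVM itself, so that it lines
up timestamp-for-timestamp with your application logs, the JDK will report the
process's fd count on Linux. A quick sketch (the cast only works on a Unix JVM):

    import java.lang.management.ManagementFactory;
    import com.sun.management.UnixOperatingSystemMXBean;

    public class FdCount {
        // Same number you would get by counting entries in /proc/<pid>/fd
        public static long openFds() {
            UnixOperatingSystemMXBean os = (UnixOperatingSystemMXBean)
                    ManagementFactory.getOperatingSystemMXBean();
            return os.getOpenFileDescriptorCount();
        }

        public static void main(String[] args) {
            System.out.println("open fds: " + openFds());
        }
    }

Logging that value alongside your response times will tell you whether the two
actually move together.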
The real question is "why is the application slowing down". Do you see
CPU spikes? If not, check your db connections.
If your db connection pool is fully utilized (no more connections available),
then you may have lots of request-processing threads sitting there waiting on
db connections. You'd see a rise in incoming connections (waiting) which
aren't making any progress, the application seems to "slow down", and there is
a snowball effect: more requests mean more waiting, and therefore more
slowness. This would manifest as slow response times without any CPU spike.
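One cheap way to confirm or rule that out is to log the pool's counters on a
schedule. Here's a sketch assuming Tomcat's JDBC pool
(org.apache.tomcat.jdbc.pool); DBCP2 and HikariCP expose similar counters
under different names:

    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;
    import org.apache.tomcat.jdbc.pool.DataSource;

    public class PoolMonitor {
        // Log pool utilization every 5 seconds so an exhausted pool shows up
        // in the logs at the same moment the response times degrade.
        public static void start(DataSource ds) {
            Executors.newSingleThreadScheduledExecutor().scheduleAtFixedRate(() ->
                System.out.printf("pool active=%d idle=%d waiting=%d max=%d%n",
                        ds.getActive(),      // connections handed out to request threads
                        ds.getIdle(),        // connections sitting unused in the pool
                        ds.getWaitCount(),   // threads blocked waiting for a connection
                        ds.getMaxActive()),
                0, 5, TimeUnit.SECONDS);
        }
    }

If "waiting" is non-zero and "active" is pinned at "max" exactly when things
get slow, the pool (or whatever the pool's connections are waiting on) is your
bottleneck.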
You could also have a slow database and/or some other resource such as a
downstream web service.
I would investigate those options before trying to prove that fds don't
scale on JVM or Linux (because they likely DO scale quite well).
> I am not concerned about the high open-file count as I do not see any errors
> related to open files. The only side effect of open files going above 700 is
> that the response from tomcat is slow. I checked whether this is caused by
> Elasticsearch; AWS CloudWatch shows Elasticsearch responses are within 5
> milliseconds.
>
> What might be the reason that, when the open files go beyond 600, the
> response time from tomcat slows down? I tried with tomcat 9 and it's the
> same behavior.
You might want to add some debug logging to your application when
getting ready to contact e.g. a database or remote service. Something like:
[timestamp] [thread-id] DEBUG Making call to X
[timestamp] [thread-id] DEBUG Completed call to X
or
[timestamp] [thread-id] DEBUG Call to X took [duration]ms
Then have a look at all those logs when the application slows down and see if
you can observe a significant jump in the time-to-complete for those
operations.
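In code, that's just a timestamp on either side of the call. Something like
this, assuming SLF4J is available and with callX() standing in for your real
database / Elasticsearch / web-service call:

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    public class TimedCall {
        private static final Logger log = LoggerFactory.getLogger(TimedCall.class);

        // Placeholder for the real downstream call
        private String callX() throws InterruptedException {
            Thread.sleep(50);
            return "response";
        }

        public String fetchFromX() throws InterruptedException {
            long start = System.nanoTime();
            try {
                return callX();
            } finally {
                long tookMs = (System.nanoTime() - start) / 1_000_000;
                // the thread name comes along for free via the logging pattern
                log.debug("Call to X took {}ms", tookMs);
            }
        }
    }

If the per-call durations stay flat while the overall response times climb,
then the time is being lost somewhere else, e.g. waiting for a pooled db
connection or for a free request-processing thread.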
Hope that helps,
-chris
---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org