JNeuhoff wrote:
Yes, it starts out with a much smaller memory footprint, around 10 to 15 MB
or so, even after a few initial connections to Tomcat from one user session.


I see. Any idea how the delta 70-15=55 MB relates to connections? If you run stress tests with real parallelism (e.g. 20, 50, 100, or 200 concurrent clients), how does memory behave after you stop the load?
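
For example, something like this (only a sketch; it assumes ApacheBench is available and that /yourapp/ stands in for a representative URL of your application):

ab -n 2000 -c 100 http://localhost/yourapp/

Running it with -c set to 20, 50, 100 and 200 gives you exactly those parallelism levels; then watch the Apache process size after the run has finished.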

I just downloaded the file (of the same name) from another mirror, and this
time it really is version 1.2.20. It looks like the other mirror was
corrupted (same file name, old version).

No idea why; it's good that you noticed it.

Anyway, I made another interesting observation with both mod_jk 1.2.10 and
1.2.20: when I hammer my website with rapid requests, by clicking too
quickly on links within the same site in my web browser, mod_jk eventually
becomes unable to receive responses from Tomcat, causing Apache2 to answer
with standard Error 503 replies. I was using a workers.properties with
this:

# Define 1 real worker using ajp13
worker.list=ajp13
# Set properties for worker ajp13
worker.ajp13.type=ajp13
worker.ajp13.host=localhost
worker.ajp13.port=8009
worker.ajp13.connection_pool_timeout=600
worker.ajp13.connection_pool_minsize=10
worker.ajp13.connect_timeout=15000
worker.ajp13.prepost_timeout=10000
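
One side note on this config: when you set connection_pool_timeout on the mod_jk side, the usual recommendation is to set a matching connectionTimeout (in milliseconds) on the AJP connector in Tomcat's server.xml, so that both sides drop idle connections at about the same time. A sketch, assuming a Tomcat 5.x server.xml:

<Connector port="8009" protocol="AJP/1.3"
           connectionTimeout="600000" />

(600 seconds on the mod_jk side correspond to 600000 milliseconds on the Tomcat side.)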

And only mod_jk.log had some error messages; at first, in increasing
numbers, something like this:

....
[Thu Jan 18 17:10:20 2007] [3336:0792] [info]  jk_ajp_common.c (1410):
Writing to client aborted or client network problems
[Thu Jan 18 17:10:20 2007] [3336:0792] [info]  jk_ajp_common.c (1795):
(ajp13) request failed, because of client write error without recovery in
send loop attempt=0
....

These are OK: they are info level, and they only tell you that when mod_jk tried to send back the response, it found that the browser had already closed the connection. That is expected, because every reload while the browser is waiting for a response resets the browser's connection to the web server. mod_jk only detects this when it next uses the connection, either to read parts of the request (which normally already succeeded before you hit reload) or to write back parts of the result.

and finally, more and more of these:

....
[Thu Jan 18 17:10:59 2007] [3336:0188] [info]  mod_jk.c (2063): Service
error=0 for worker=ajp13
[Thu Jan 18 17:11:00 2007] [3336:1368] [error] jk_ajp_common.c (1015):
(ajp13) can't receive the response message from tomcat, network problems or
tomcat (127.0.0.1:8009) is down -53
[Thu Jan 18 17:11:00 2007] [3336:1368] [error] jk_ajp_common.c (1536):
(ajp13) Tomcat is down or refused connection. No response has been sent to
the client (yet)
[Thu Jan 18 17:11:00 2007] [3336:1368] [info]  jk_ajp_common.c (1828):
(ajp13) receiving from tomcat failed, recoverable operation attempt=0
[Thu Jan 18 17:11:00 2007] [3336:1368] [info]  jk_ajp_common.c (1867):
(ajp13) sending request to tomcat failed,  recoverable operation attempt=1
[Thu Jan 18 17:11:00 2007] [3336:0216] [error] jk_ajp_common.c (947):
(ajp13) can't receive the response message from tomcat, network problems or
tomcat is down (127.0.0.1:8009), err=-53
[Thu Jan 18 17:11:00 2007] [3336:0216] [error] jk_ajp_common.c (1562):
(ajp13) Tomcat is down or network problems. Part of the response has already
been sent to the client
[Thu Jan 18 17:11:00 2007] [3336:0216] [info]  jk_ajp_common.c (1828):
(ajp13) receiving from tomcat failed, recoverable operation attempt=1
[Thu Jan 18 17:11:00 2007] [3336:0216] [info]  jk_ajp_common.c (1867):
(ajp13) sending request to tomcat failed,  recoverable operation attempt=2
[Thu Jan 18 17:11:00 2007] [3336:0216] [error] jk_ajp_common.c (1879):
(ajp13) Connecting to tomcat failed. Tomcat is probably not started or is
listening on the wrong port
....


Those err=-53 entries are bad: error 53 on Windows means the connection was aborted. You should find corresponding messages in the Tomcat logs.

Only a server reboot cleared up the connections (there were only 2 of them
established).
Is it possible that mod_jk can't cope with queueing too many rapidly
incoming requests? Admittedly, this doesn't happen with normal users, but a
malicious person would be able to bring down the web service this way. Are
there preventive settings for this scenario?

It's not really a problem of requests arriving rapidly; rather, you are demanding more throughput than the system can provide. If you click 100 times a second (not waiting for replies) and your system can only deliver 50 responses per second, then you will queue up unfinished requests (50 after 1 second, 100 after 2, 150 after 3, and so on). Each unfinished request needs a separate thread in Apache and in Tomcat, so since you only provide 250 of them, you can saturate the system quite fast (again assuming your throughput is lower than the request injection rate).
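
In numbers, with the rates from that example:

backlog(t) = (injection rate - service rate) * t = (100 - 50) req/s * t

so the backlog grows by 50 unfinished requests per second, and a pool of 250 threads is completely occupied after 250 / 50 = 5 seconds of sustained load.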

If you want to prevent denial of service, you need to do that at a low protocol level. There are a couple of Apache modules for this, but more likely you'd use some sort of security appliance. That would be better discussed in a separate mail thread, though.
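
Just as one illustration (the module name is my example, not something discussed above, and the thresholds are made-up values you would have to tune): with mod_evasive under Apache 2, a per-client rate limit could look like

<IfModule mod_evasive20.c>
    # max requests for the same page per page interval (seconds)
    DOSPageCount      5
    DOSPageInterval   1
    # max requests for the whole site per site interval (seconds)
    DOSSiteCount      50
    DOSSiteInterval   1
    # for how many seconds a blocked client keeps getting 403s
    DOSBlockingPeriod 10
</IfModule>

This only throttles per-client request rates inside Apache; a real flood is better stopped in front of the web server.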

Your statement that only 2 connections were left but a server reboot was nevertheless needed is unclear to me. I would expect that at most a Tomcat restart would be needed. It could be that you'll find an OutOfMemoryError in your Tomcat logs (because of the huge number of parallel unfinished requests), which might have left Tomcat in an undefined state.
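
If you want to check for that, one option (assuming a Sun JDK 5.0 update 7 or newer) is to add

-XX:+HeapDumpOnOutOfMemoryError

to CATALINA_OPTS; the JVM then writes a heap dump when the OutOfMemoryError happens, and you can inspect afterwards what filled the heap.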


Regards

J.Neuhoff


Regards,

Rainer
