> > Yes, it starts out with a much smaller memory, around 10 to 15 MB or so,
> > even after a few initial connections to Tomcat from one user session.
>
> I see, any idea how the delta 70-15=55MB relates to connections (if you
> do stress tests with real parallelity, e.g. 20, 50, 100, 200, how does
> memory behave after stopping the load)?
It seems to grow linearly with the increasing number of connections. I'd have
to run different setups in the stress test to see whether the amount of data
traffic over a single connection has a similar effect.

> These ones are OK, they are info level and they only tell you, that when
> mod_jk tried to send back answers, it found out, that the browser
> already closed the connection. That's true, because every reload during
> waiting for response in the browser will reset the browser connection to
> the web server. This will only be detected, when mod_jk tries to use the
> connection to either read parts of the request (this normally already
> succeeded when you did the reload) or it tries to write back parts of
> the result.

Is this also the reason for an occasional rapid increase in the number of
HTTPSessions? I noticed that at times Tomcat reports 3 to 4 times more
active sessions than there were users (as identified by their IP address
and web browser's user agent string).

> Those err=-53 are bad, 53 on Windows is connection aborted. You should
> find something about them in the tomcat logs.

I checked all the usual log files (stdout*.log, stderr*.log, localhost*.log,
mod_jk.log) from Tomcat as well as the access.log from Apache. The only one
with error messages was mod_jk.log. There was no excessive number of
connections, in fact just 2 of them! I will run some more stress tests to
find out more (the load command I plan to use is sketched in the P.S. below).
Would a socket_timeout help flush and/or release a blocked connection? I am
also thinking about setting the connection_pool_minsize to zero (the settings
I have in mind are in the P.P.S.).

> Your statement about 2 connections left but nevertheless a server reboot
> was needed is unclear to me. I would expect, that at most a tomcat
> restart would be needed. Could be, that you'll find an OutOfMemoryError
> in your tomcat logs (because of the huge amount of parallel unfinished
> requests), which might have sent tomcat to an undefined state.

There was no OutOfMemoryError in Tomcat; in fact its memory usage never
exceeded 150MB, and its JRE is configured with a maximum of 512MB. After the
traffic surge was over, Apache/mod_jk/Tomcat never recovered, not even after
the HTTPSession idle timeout (set to 15 minutes). I'll try to reproduce the
same error condition later today.

Regards,
Juergen Neuhoff
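
P.S. For the next round of stress tests I'll most likely drive the load with
ApacheBench, roughly along these lines (the URL and the request count below
are only placeholders, not our real setup):

    ab -n 10000 -c 100 http://ourserver/ourapp/

repeated with -c 20, 50, 100 and 200, so I can watch how mod_jk's memory
behaves after the load stops, as you suggested.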
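
P.P.S. This is roughly the workers.properties change I have in mind (the
worker name "worker1" and the 300 second timeout are just placeholders for
our actual configuration):

    worker.worker1.socket_timeout=300
    worker.worker1.connection_pool_minsize=0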