On 18.02.2009 19:14, Anthony J. Biacco wrote:
1. I'm running with ThreadsPerChild at 25 and MaxClients at 500. So, I
understand this to mean my connection pool minimum size will be a total
of 13 ((25+1)/2).

Yes, per Apache httpd process!
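More precisely (assuming mod_jk's documented defaults): with a threaded MPM, connection_pool_size defaults to ThreadsPerChild, and connection_pool_minsize defaults to (connection_pool_size+1)/2, so with ThreadsPerChild 25 you get (25+1)/2 = 13 per httpd process and per backend. Spelled out as explicit properties this would be (purely illustrative, these are just the implicit defaults):

# illustrative only: the implicit defaults with ThreadsPerChild 25
worker.template.connection_pool_size=25
worker.template.connection_pool_minsize=13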

And then that this will be divided between my 4 tomcat
backends fairly equally.

It's a pool configuration, not a usage number. Each backend has a pool in each httpd process. Not sure what you mean by "this will be divided".

And as I look at my jkstatus page, I see the 13
for "Max busy", but for each backend it lists a "Max" of 3,4,5, and 5
respectively.

"Max Busy" is shown in the LB worker, "Max" in the AJP or LB member workers.

For an AJP worker (or an LB member)

Busy means:

- how many requests are handled in parallel by this worker at one point in time (the moment you look at the status page). Requests handled in parallel means requests that have been sent to the backend but for which no answer has been received yet. This number cannot be bigger than the number of established connections in the pool, but it can be smaller in case some connections are idle.

Max means:

- the maximum of Busy since the last reset of the JK statistics. So in your case, since the last statistics reset, there were never more than 3, 4, 5 and 5 requests running in parallel on the respective backends, at least not requests coming from this Apache instance.

Max Busy for an LB is the maximum of the sum of all member Busy values. So "Max Busy" (LB) is always less than or equal to the sum of "Max" over all its members, and very likely it is smaller, because the members usually will not have reached their individual Max at the same moment.
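In your concrete numbers: Max Busy is 13, which is indeed less than 3 + 4 + 5 + 5 = 17, the sum of the member Max values.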

That the number 13 shows up both as Max Busy and as the minimum pool size is pure coincidence; the two values are completely independent.

My question is this.. for the min size of "13" why is it
labeled in jkstatus as "Max Busy" (and the distributed connection pool
for each worker under "Max"). If this is actually the connection MIN
size, isn't the label inaccurate?

See above.

If it is really the Max size instead of the Min size, does that mean
it's the Max per httpd process or across all processes? I would THINK
per process, since the ThreadsPerChild are per process. So, for my total
of 13 above, if I have 4 httpd processes running, is my pool total for
all backends 13*4 (52), or 13?

The pool is per process; the jkstatus runtime statistics are per instance, i.e. accumulated over all httpd processes (using shared memory).

I was hoping that my max connections per backend were my (MaxClients -
number of threads in reserve for apache static)/number of tomcat
backends. So if I wanted 100 threads for apache static and had 4
backends, I'd have (500 - 100)/4 = 400/4 = 100 max pooled connections
per backend. Is this not a viable way to run things? My maxThreads in
the tomcat backends are set to 400.

The maximum number of connections per backend is equal to MaxClients. You will only reach that maximum when things get slow or load gets unexpectedly high.

Concurrency (connections) = Throughput * Response Time
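This is Little's Law. A purely illustrative example: at 100 requests per second and an average response time of 0.2 seconds you need about 100 * 0.2 = 20 connections busy on average; at that request rate you would only approach MaxClients = 500 if response times grew to roughly 5 seconds.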

2. In Tomcat, it seems that my threads keep increasing, though they're
not used. I thought that the connectionTimeout in tomcat and the
connect_timeout/ping_timeout in mod_jk would stop this by closing idle
threads, but it does not. Eventually the threads in tomcat will reach

Those timeouts close idle connections, thus freeing threads and returning them to the thread pool. New connections should then reuse those threads.

the max of 400 and stay there until tomcat is restarted. Is there a way
to resolve this? And more importantly, should I resolve it? Are there any
major memory/CPU implications to it keeping its threads at the max?

Do a thread dump ("kill -QUIT"). It goes to catalina.out and will tell you what all those 400 threads are doing. Maybe they are stuck working on old requests nobody is waiting for.
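On a Unix-like system that looks roughly like this (the process id is just a placeholder for your Tomcat java process):

kill -QUIT <tomcat-java-pid>

Then inspect the thread names and stack traces appended to catalina.out.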

3. What are the benefits of configuring mod_jk with --enable-EAPI and in
what circumstances should this be used?

Extended API: used for Apache httpd 1.3 when building it with mod_ssl. Then the httpd API is different and mod_jk needs to adapt to it.

Not used at all in combination with httpd 2.x.

Relevant configurations follow..
Thanx a lot.

APACHE:
<IfModule mpm_worker_module>
     ServerLimit         20
     StartServers        3
     MaxClients          500
     MinSpareThreads     50
     MaxSpareThreads     125
     ThreadsPerChild     25
     MaxRequestsPerChild 200000
</IfModule>
JkWatchdogInterval 60
JkOptions +ForwardKeySize +ForwardURICompat -ForwardDirectories


Please drop +ForwardURICompat if you are not absolutely sure that you need it.
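The JkOptions line would then simply become:

JkOptions +ForwardKeySize -ForwardDirectories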

MOD_JK:
worker.template.port=8009
worker.template.type=ajp13
worker.template.lbfactor=1
worker.template.connection_pool_timeout=120

Choose 60 here, to match the connectionTimeout of 60000 (milliseconds) in the Tomcat connector below.
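The matching pair then looks like this (connection_pool_timeout is in seconds, connectionTimeout in milliseconds):

worker.template.connection_pool_timeout=60

and in server.xml: connectionTimeout="60000"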

worker.template.reply_timeout=20000

When using such an ambitious reply_timeout, also use max_reply_timeouts.
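For example (the value 10 is only illustrative; note that max_reply_timeouts is set on the lb worker, while reply_timeout is set on the members):

worker.template.reply_timeout=20000
worker.loadbalancer.max_reply_timeouts=10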

worker.template.socket_timeout=20

I'm not in favour of socket_timeout, but others are :)

worker.template.socket_connect_timeout=5000
worker.template.ping_mode=A
worker.template.ping_timeout=25000

Could be somewhat shorter, like 5 or 10 seconds instead of 25, except when you experience very long garbage collections.
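For example, for 10 seconds (ping_timeout is in milliseconds):

worker.template.ping_timeout=10000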

worker.loadbalancer.sticky_session=1
worker.loadbalancer.type=lb
worker.loadbalancer.balance_workers=tr1,tr2,tr3,tr4

TOMCAT:
     <Connector port="8009" protocol="AJP/1.3"
         maxThreads="400" backlog="25" maxPostSize="4194304"
         enableLookups="false" connectionTimeout="60000"
         redirectPort="8443" />

Try maxSpareThreads and minSpareThreads.
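A sketch of what that could look like, with purely illustrative numbers (tune them to your load, and keep your other attributes such as backlog and maxPostSize as they are):

     <Connector port="8009" protocol="AJP/1.3"
         maxThreads="400" minSpareThreads="25" maxSpareThreads="100"
         enableLookups="false" connectionTimeout="60000"
         redirectPort="8443" />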
