Hi Bill,

On 11.05.2009 23:15, Bill Davidson wrote:
> I'm trying to understand mpm_worker MaxClients and its relationship
> with mod_jk connection_pool_size.
> 
> Here's what I've got at the moment:
> 
> OS: Red Hat 5.2 Server
> httpd: 2.2.11
> tomcat-connector: 1.2.28
> tomcat: 6.0.18
> Java: 1.6.0_13

Hey great, someone using recent versions :)

> httpd-mpm.conf:
> 
> ListenBacklog      2048

Wow.

> <IfModule mpm_worker_module>
>    StartServers          2
>    MaxClients          256
>    MinSpareThreads      25
>    MaxSpareThreads      75
>    ThreadsPerChild      32

Usually making MinSpareThreads and MaxSpareThreads multiples of
ThreadsPerChild makes it easier to understand what the numbers mean.
Scaling up and down is always done in increments of whole processes,
each having ThreadsPerChild threads.

MaxClients is the maximum number of concurrent connections allowed,
which is the same as the maximum number of threads used (for the worker
mpm).
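
As an illustration only (the numbers are placeholders, not a
recommendation), a worker MPM block where the spare-thread limits are
multiples of ThreadsPerChild could look like this:

    <IfModule mpm_worker_module>
       ThreadsPerChild      32
       StartServers          2
       # 1 x ThreadsPerChild: keep at least one idle process worth of threads
       MinSpareThreads      32
       # 3 x ThreadsPerChild: allow up to three idle processes before reaping
       MaxSpareThreads      96
       # 8 x ThreadsPerChild: at most 8 processes = 256 concurrent connections
       MaxClients          256
       MaxRequestsPerChild   0
    </IfModule>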

>    MaxRequestsPerChild   0
> </IfModule>
> 
> workers.properties:
> 
> worker.tomcat1.type=ajp13
> worker.tomcat1.host=127.0.0.2
> worker.tomcat1.port=8009
> worker.tomcat1.connection_pool_size=150

Delete the connection_pool_size. Connection pools in mod_jk are local to
httpd processes. Each process can only use as many connections as it has
threads to process concurrent requests, which in your configuration is
32. mod_jk asks httpd at startup for this number and automatically sets
its pool size to the number of threads per process. You'll never need
more. Only if you have very good reasons to lower it should you
configure a smaller number; usually you don't want to do that.

> worker.tomcat1.connection_pool_timeout=600

You need to set connectionTimeout on the Tomcat connector to 600000 then.

You might also want to set a minimum pool size, i.e. the smallest number
of connections the pool is allowed to shrink to when it is idle. I would
suggest "0".

Please also have a look at the docs page on timeouts for mod_jk.
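
Roughly, the two sides would line up like this (a sketch only;
connection_pool_minsize is the attribute I mean by "minimum pool size",
and connectionTimeout on the Tomcat side is in milliseconds):

    # workers.properties: idle connections dropped after 600 seconds
    worker.tomcat1.connection_pool_timeout=600
    worker.tomcat1.connection_pool_minsize=0

    <!-- server.xml: match the pool timeout (600 s) in milliseconds -->
    <Connector port="8009"
               protocol="AJP/1.3"
               address="127.0.0.2"
               connectionTimeout="600000"
               maxThreads="150" />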

> worker.tomcat1.socket_keepalive=1
> 
> server.xml:
> 
> <Connector port="8009"
>               protocol="AJP/1.3"
>               address="127.0.0.2"
>               redirectPort="443"
>               maxThreads="150" />

The 150 threads do not fit well with your MaxClients of 256. If your
Apache is mainly forwarding requests to Tomcat, then it doesn't make
much sense to allow 256 parallel connections to Apache but only 150 on
the backend. That will result in some Apache processes being fully
connected (32 connections) and others not being able to grow their
connection pool to the full size, because they get errors when trying to
connect.
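
To make the numbers concrete: 256 MaxClients / 32 ThreadsPerChild = up
to 8 httpd processes, each of which may open a pool of up to 32 AJP
connections, i.e. up to 8 * 32 = 256 backend connections, while the
connector only accepts maxThreads="150". Either lower MaxClients or
raise maxThreads so the two sides match.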

> Also, I've added these to /etc/sysctl.conf
> 
> # increase the maximum number of TCP connections.
> net.core.somaxconn = 2048
> net.core.netdev_max_backlog = 1024
> 
> I've got three separate boxes running their own httpd/Tomcat, load
> balanced with LVS so total connections is actually 3x of what is
> indicated above.

Are the Apaches connected to each Tomcat, or only to "their" Tomcat? If
you separate your design into 3 disjoint Apache/Tomcat pairs, then you
need to educate your LVS about correct session stickiness. If you think
you can't manage that, then add a load balancer worker to each Apache,
let each of them connect to all Tomcats, set the "distance" for the
local Tomcat to "0" and to "1" for the other two Tomcats (see the sketch
below). Finally allow a min pool size of "0" and add some spare threads
above MaxClients to the Tomcats, because they need to handle connections
from all three Apaches.

> I'm feeling like MaxClients is a bit low, but I can't seem to
> satisfactorily
> articulate why.

Expected concurrency = LoadInRequestsPerSecond * AverageResponseTime
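
For example (numbers made up purely for illustration): 500
requests/second across the cluster with an average response time of 0.2
seconds gives 500 * 0.2 = 100 requests in flight, i.e. roughly 33
concurrent requests per Apache/Tomcat pair if LVS spreads the load
evenly.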

> I'm going to be hit with a traffic storm (many thousands
> of simultaneous connection attempts in a few minutes) in a few days, and
> I'm thinking I should make sure I've got this right.

You need to do stress testing in order to find out what the correct
sizing is. If your application can stand the load and is very
fast/lightweight, then you could manage more than 1000 requests/second
with three Tomcats without ever reaching 256 MaxClients per Apache. If
your application gets slow, then you might not be able to serve 50
requests/second. Play around with the above formula.

> I went through it
> maybe
> 8-10 months ago but that was long before I put these systems into
> production.  I probably should have gone through it again a month ago but
> didn't think to.
> 
> I'm also contemplating increasing connection_pool_size & maxThreads
> (I'm pretty sure those have to be equal) since my database can handle
> up to 1000 simultaneous connections and as it is, I've got a maximum
> of 3*150=450 Tomcat threads that can access it at any given time.

More likely 3*256 = 768 (once maxThreads matches MaxClients, as
suggested above), which is already close to the DB maximum.

> Any useful advice on this would be appreciated.

It is more likely that increasing the allowed concurrency will make
things worse. Quite often the first bottleneck is not the allowed
concurrency, but things like database I/O (missing indexes, full table
scans), bad locking in the application, long response times from some
other backends, excessive logging, bad memory/GC configuration etc. Once
things get slow, you will obviously fill all the available concurrency
with new requests very quickly. If the rate of incoming requests is
higher than what you can actually handle, you are (your app is) dead.
You can't heal that by allowing even more concurrency. Excess
concurrency can only help with response time increases that last only a
couple of seconds.

Regards,

Rainer
