You should try 1.2.18 and, depending on your time frame, update to
1.2.19 once it is released this month.
We improved the load balancing code, and with 1.2.19 also the
observability of what's happening.
Try the alternative method B (Busyness) for the load balancer in 1.2.18.
The default method (Request) tries to equalize the number of requests
each worker handles; Busyness instead prefers the worker with the fewest
requests currently in flight.
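For reference, a minimal workers.properties sketch of switching to
method B (the worker names, host, and port are illustrative, not from
this thread):

    # workers.properties -- load balancer using the Busyness method (1.2.18)
    worker.list=loadbalancer
    worker.loadbalancer.type=lb
    worker.loadbalancer.method=B
    worker.loadbalancer.balance_workers=tomcat1,tomcat2,tomcat3,tomcat4

    # one ajp13 worker per Tomcat instance; tomcat2-4 look the same
    worker.tomcat1.type=ajp13
    worker.tomcat1.host=localhost
    worker.tomcat1.port=8009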
It is a mod_jk issue: it uses permanent connections, that is how it was
designed. Setting MaxRequestsPerChild to 1 will kill the child, and
hence kill the mod_jk connection; this way you can have maxProcessors
lower than MaxClients.
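A sketch of that workaround on the httpd side, assuming the prefork
MPM (the MaxClients value is a placeholder):

    # httpd.conf (prefork) -- recycle every child after a single request,
    # which also closes its persistent mod_jk connection to Tomcat
    <IfModule prefork.c>
        MaxClients          150
        MaxRequestsPerChild 1
    </IfModule>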
Filip
Edoardo Causarano wrote:
Using mpm_worker gave less impressive results; I'd say about half the
throughput, a much worse load average (way more than 5), and lots of
swap. It seems prefork works better on Linux, which surprises me.
Anyway, assuming I got maxProcessors wrong, I should have seen queues
building up @ 150*4 instances.
Since you are using prefork, you must set cachesize=1 in your
workers.properties file.
However, you have 4096 MaxClients; in order to serve this up in Tomcat,
your JK connector should have maxProcessors="4096".
An alternative, and safe, solution, although with much less performance,
is to set MaxRequestsPerChild to 1.
Hello List,
scenario:
- 4 node tc 5.0.28 vertical cluster ( :-| same server... still
testing, but it could have been 8) listening on AJP:
      protocol="AJP/1.3"
      protocolHandlerClassName="org.apache.jk.server.JkCoyoteHandler"
      redirectPort="8443">
- 1 httpd 2.0.52 with mod_jk 1.2.15 and ...
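To make the scenario concrete, a hedged sketch of how mod_jk is
typically wired into httpd 2.0 for such a setup (file paths and the
mount point are assumptions, not from this message):

    # httpd.conf -- load mod_jk and route everything to the balancer
    LoadModule jk_module modules/mod_jk.so
    JkWorkersFile conf/workers.properties
    JkLogFile     logs/mod_jk.log
    JkLogLevel    info
    JkMount /* loadbalancer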