Rainer Jung wrote:
On 12.06.2009 02:57, Filip Hanik - Dev Lists wrote:
Rainer Jung wrote:
On 10.06.2009 22:57, Filip Hanik - Dev Lists wrote:
This is because Apache a2 only has routes for td201 and td202, but not td101, so it doesn't know how to handle td101.
Why don't you set up all four routes on both a1 and a2, then use the mod_proxy_balancer lbset parameter to set a preferred route? That would solve the problem.
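As a rough sketch of what that could look like on a1 (a2 would mirror it with the lbset values swapped), assuming AJP on port 8009 and made-up host names; only the route names td101/td102/td201/td202 come from your setup:

# httpd.conf on a1: all four Tomcats are configured, the local pair is preferred
<Proxy balancer://tccluster>
    # lbset=0 members are tried first
    BalancerMember ajp://td101.example.com:8009 route=td101 lbset=0
    BalancerMember ajp://td102.example.com:8009 route=td102 lbset=0
    # lbset=1 members are only used when no lbset=0 member is usable
    BalancerMember ajp://td201.example.com:8009 route=td201 lbset=1
    BalancerMember ajp://td202.example.com:8009 route=td202 lbset=1
</Proxy>

ProxyPass /myapp balancer://tccluster/myapp stickysession=JSESSIONID|jsessionid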
Although this will work, it will need more connections
More connections, true, but they time out if idle.
and thus threads,
Not really, since idle connections will be in a polling state if you are
using APR or NIO.
And if you are using BIO, then you set a timeout, and the threads will be
released after the timeout.
because the routes use different connection pools.
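To make the timeout point concrete, here is a minimal server.xml sketch of the two connector flavours (use one or the other; the port and timeout values are placeholders, not taken from this thread):

<!-- NIO connector: idle keep-alive connections are parked in the poller and do not hold a thread -->
<Connector port="8080" protocol="org.apache.coyote.http11.Http11NioProtocol"
           connectionTimeout="20000" keepAliveTimeout="60000" />

<!-- BIO connector: an idle connection keeps its thread until keepAliveTimeout expires -->
<Connector port="8080" protocol="org.apache.coyote.http11.Http11Protocol"
           connectionTimeout="20000" keepAliveTimeout="60000" />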
You could also rewrite the jvmRoute in the session id:
http://tomcat.apache.org/tomcat-6.0-doc/cluster-howto.html#Bind%20session%20after%20crash%20to%20failover%20node
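For reference, the piece of server.xml that howto describes looks roughly like this (the jvmRoute value td101 is just an example):

<Engine name="Catalina" defaultHost="localhost" jvmRoute="td101">
  <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster">
    <!-- Rewrites the jvmRoute part of the session id after a failover,
         so the balancer sticks to the node that took the session over -->
    <Valve className="org.apache.catalina.ha.session.JvmRouteBinderValve"/>
  </Cluster>
  <!-- Host, Realm, etc. as usual -->
</Engine>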
Sounds like a mod_jk feature; they are using mod_proxy_balancer. Is it
available there?
I thought it's only based on cluster, jvmRoute and session IDs. How
should mod_jk come into play here?
Just unclear documentation. Also, lbset wouldn't yield more connections
either, since lbset is a priority: from what I understand, a member with
lbset=1 won't even open a connection as long as an lbset=0 member is
available (when you have the minimum number of connections set to 0).
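To show what that reading corresponds to in the configuration, here are the backup members from the earlier sketch with the connection pool keys added (min, smax and ttl are standard mod_proxy worker parameters; the values are made up):

    # Backup members keep an empty pool until they are actually used;
    # idle connections above smax are closed after ttl seconds
    BalancerMember ajp://td201.example.com:8009 route=td201 lbset=1 min=0 smax=0 ttl=60
    BalancerMember ajp://td202.example.com:8009 route=td202 lbset=1 min=0 smax=0 ttl=60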
Yes, they have to set the jvmRoute, and mod_proxy_balancer has to use
it. But apart from that?
You're right.
Regards,
Rainer
---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org