Rainer Jung wrote:
On 10.06.2009 22:57, Filip Hanik - Dev Lists wrote:
This is because Apache ad2 only has routes for td201 and td202, but
not td101; therefore it doesn't know how to handle td101.
Why don't you set up all four routes on both ad1 and ad2,
then use the mod_proxy_balancer lbset parameter to set a preferred route,
and the problem will be solved.
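A sketch of what that could look like in ad1's httpd.conf, using the real mod_proxy_balancer route and lbset parameters (hostnames, ports, and paths here are assumptions; ad2 would mirror this with lbset=0 on the td2xx members instead):

```apache
# Sketch for ad1: all four routes defined, local members preferred.
<Proxy balancer://tccluster>
    # Local data center d1: preferred members (lbset=0 is tried first)
    BalancerMember ajp://td101:8009 route=td101 lbset=0
    BalancerMember ajp://td102:8009 route=td102 lbset=0
    # Remote data center d2: failover only (lbset=1 used when lbset=0 is down)
    BalancerMember ajp://td201:8009 route=td201 lbset=1
    BalancerMember ajp://td202:8009 route=td202 lbset=1
</Proxy>
ProxyPass /app balancer://tccluster stickysession=JSESSIONID
```

With all four routes known on both balancers, a client carrying route td101 is still sent to td101 even when arriving at ad2, instead of bouncing between td201 and td202.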

Although this will work, it will need more connections
More connections, true, but they time out if idle.
 and thus threads,
Not really: idle connections will be in a polling state if you are using APR or NIO. And with BIO, you set a timeout, so the threads are released after it expires.
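A sketch of the two connector cases mentioned above, as server.xml fragments for Tomcat 6 (port and timeout values are illustrative assumptions, not recommendations):

```xml
<!-- NIO: idle keep-alive connections sit in the poller; no thread is held -->
<Connector port="8080"
           protocol="org.apache.coyote.http11.Http11NioProtocol" />

<!-- BIO: a thread is tied to each open connection, so bound the idle time;
     the thread returns to the pool once the keep-alive window expires -->
<Connector port="8081"
           protocol="HTTP/1.1"
           connectionTimeout="20000"
           keepAliveTimeout="15000" />
```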

because the routes use different connection pools.

You could also rewrite the jvmRoute in the session id:

http://tomcat.apache.org/tomcat-6.0-doc/cluster-howto.html#Bind%20session%20after%20crash%20to%20failover%20node
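Per the how-to linked above, this rewrite is done on the Tomcat side by the JvmRouteBinderValve; a sketch of the server.xml entry inside the cluster configuration:

```xml
<!-- From the cluster how-to: rewrites the jvmRoute suffix of the session id
     so the client sticks to the failover node after a crash -->
<Valve className="org.apache.catalina.ha.session.JvmRouteBinderValve"/>
```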
That sounds like a mod_jk feature; they are using mod_proxy_balancer. Is it available there?
Regards,

Rainer

Theparanoidone Theparanoidone wrote:
Greetings~

I would like to reuse the JSESSIONID route between clustered
Tomcats... and I'm wondering if there are negative consequences to
doing so.

We have the following setup:
2 data centers   (d1 / d2)
2 Apache mod_proxy_balancer instances   (ad1, ad2)
4 Tomcat servers  (td101, td102, td201, td202)  --- sessions are
replicated between both data centers (fortunately our application is
light enough that this should be okay for our needs)

            d1                      d2
            |                       |
           ad1                     ad2
          /    \                  /    \
      td101    td102          td201    td202

Our clients are "stuck" to a particular tomcat server and data center
upon logging in; however, if we need to perform maintenance... we
switch everyone over to an "up" data center while we do maintenance on
the "down" center.

So, in normal operation... a client will always reconnect to td101. If
we flip to maintenance mode... they'll be redirected to d2. While at
d2... they will ping/pong between td201 and td202 (this is because
Apache ad2 only has routes for td201 and td202... but not td101...
therefore it doesn't know how to handle td101).

Our application still works... it's just messy flip-flopping between 2
tomcat servers for every request.

I'm considering relabeling the routes as follows (td1, td2, td1, td2)

            d1                      d2
            |                       |
           ad1                     ad2
          /    \                  /    \
       td1     td2             td1     td2
Are there any weird route collisions or problems in doing this?
Do routes really have to be unique if our application controls which
physical data center a customer connects to?

Thanks!

---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org
