Hello Mladen,

I have two use cases for Multi-Cluster Routing:

Use Case 1: Better cluster scaling
=========
By default Tomcat replicates a session to all Tomcat nodes in a cluster.
This replication strategy does not scale very well, but when we split
the Tomcat nodes into several domains and the lb knows about them, the
system scales better.


Apache 1
        domain and cluster 1
                worker w1.1 for T1.1
                worker w1.2 for T1.2
                worker w1.3 for T1.3

        domain and cluster 2
                worker w2.1 for T2.1
                worker w2.2 for T2.2
                worker w2.3 for T2.3

When worker w1.1 in domain 1 fails, the balancer makes the next tries
with w1.2 and w1.3; only if those workers fail as well does the balancer
give the w2.x workers a chance. OK, the client loses the session, but
then you really have hardware problems...
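
Roughly, I picture the failover rule like this. Only a sketch in C with
made-up structures (worker_t and find_failover are my names, not real
mod_jk code):

#include <stddef.h>
#include <string.h>

typedef struct {
    const char *name;    /* e.g. "w1.1" */
    const char *domain;  /* e.g. "1" */
    int in_error;        /* worker currently failed? */
} worker_t;

/* Return a usable worker, preferring the session's own domain. */
static worker_t *find_failover(worker_t *w, size_t n,
                               const char *session_domain)
{
    size_t i;
    /* first pass: same domain, so the replicated session survives */
    for (i = 0; i < n; i++)
        if (!w[i].in_error && strcmp(w[i].domain, session_domain) == 0)
            return &w[i];
    /* second pass: any live worker; here the session is lost */
    for (i = 0; i < n; i++)
        if (!w[i].in_error)
            return &w[i];
    return NULL; /* all workers down */
}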



Use Case 2: Switch smoothly to the next software generation (preferred domain)
========
Release new software at runtime without dropping the current user sessions.
Start completely new Tomcat instances with the new application release.
Tell the load balancer that all new sessions go to the domain 2 workers,
but when you lose a worker in domain 1 (old release), fail over to a worker
in the same domain (see the sketch after the diagram below).


Apache 1
        domain and cluster 1 (Software Release 1)
                worker w1.1 for T1.1
                worker w1.2 for T1.2
                worker w1.3 for T1.3

        domain and cluster 2 (Software Release 2) - preferred
                worker w2.1 for T2.1
                worker w2.2 for T2.2
                worker w2.3 for T2.3
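
The routing rule for this case could look like the following (again only
my illustration, reusing the worker_t sketch from use case 1;
route_request and preferred_domain are made-up names):

/* New sessions go to the preferred domain (new release); existing
   sessions stay in their own domain as long as it has live workers. */
static worker_t *route_request(worker_t *w, size_t n,
                               const char *session_domain, /* NULL = new */
                               const char *preferred_domain)
{
    const char *want = session_domain ? session_domain : preferred_domain;
    size_t i;
    /* try the wanted domain first ... */
    for (i = 0; i < n; i++)
        if (!w[i].in_error && strcmp(w[i].domain, want) == 0)
            return &w[i];
    /* ... then fall back to any live worker (session may be lost) */
    for (i = 0; i < n; i++)
        if (!w[i].in_error)
            return &w[i];
    return NULL;
}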

Both use cases work perfectly with the mod_jk2 level concept, with a little
patch. Only the level limit is a real problem for scaling well...


---
I also checked the newest jk_lb_worker and it works fine for me...
The increment technique is simple and powerful :-)

Some examples:

w1 + w2: lb_factor 1, lb_value 0

Values after calculation:

request | w1  w2 | comment
      1 | 0*  2  | w1 gets the session
      2 | 0   0* | w2 gets the session
      3 | 0   0* | w2 gets the session, w1 is in error state


w1: lb_factor 3, lb_value 0
w2: lb_factor 1, lb_value 0

Values after calculation:

request | w1   w2 | comment
      1 | -1*  1  | w1 gets the session
      2 | -2*  2  | w1 gets the session
      3 | -3   3  | w1 gets the session, w1 is the first lb worker
      4 |  0   0* | w2 gets the session
      5 | -1*  1  | w1 gets the session
      6 | -1   0* | w2 gets the session, w1 is in error
      7 | -1   0* | w2 gets the session, w1 is in error
      8 | -1*  1  | w1 gets the session, w1 recovers
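
For illustration, here is a tiny C simulation of an increment-style
selection as I understand the idea (my own sketch; the real jk_lb_worker
code differs in details, so the values need not match the tables above
exactly):

#include <stdio.h>

#define NW 2

int main(void)
{
    int lb_factor[NW] = { 3, 1 };  /* w1 gets three times the share of w2 */
    int lb_value[NW]  = { 0, 0 };
    int in_error[NW]  = { 0, 0 };
    int total = 0, i, r, best;

    for (i = 0; i < NW; i++)
        total += lb_factor[i];

    for (r = 1; r <= 8; r++) {
        best = -1;
        for (i = 0; i < NW; i++) {
            if (in_error[i])
                continue;
            lb_value[i] += lb_factor[i];      /* the increment step */
            if (best < 0 || lb_value[i] > lb_value[best])
                best = i;                     /* ties go to the first worker */
        }
        if (best < 0) {
            printf("request %d: all workers in error\n", r);
            continue;
        }
        lb_value[best] -= total;              /* chosen worker pays the total */
        printf("request %d -> w%d (values: %d %d)\n",
               r, best + 1, lb_value[0], lb_value[1]);
    }
    return 0;
}

Over any 4 requests this selects w1 three times and w2 once, i.e. the
3:1 lb_factor ratio.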


Great.


Regards, Peter


Mladen Turk wrote:

Rainer Jung wrote:

I include my original posting.


Hi Rainer,

First of all, thank you for the ideas.
They are great!


1) Limiting new application sessions if load is too high.


There is a problem with that. I made an implementation counting the number of busy children/threads from the scoreboard (it took me an entire day), but again, we should count the number of connections to Tomcat, because Apache might be serving static content. Anyhow, the idea is great and I'll implement it in the new mod_proxy for Apache 2.2, where we have extra slots in the scoreboard. Sadly, we cannot do that inside mod_jk unless we implement our own shared memory, which was proven to be bogus in jk2.



2) Multi-Cluster-Routing


Can you write some use cases for that, and perhaps a simple algorithm too? What about sticky sessions and forcing failover if we do not have session replication?


3) Idle connection disconnect


Use the worker MPM. We just cannot create a maintainer thread for non-threaded MPMs like Apache 1.2 or prefork.



4) Open Problem

I didn't check your new code, but at least before there was the problem that a recovered worker that was offline for a long time (in terms of load)


This should work now with the latest patches.


Best regards, Mladen.





