Hi Brian,
[EMAIL PROTECTED] wrote:
All,
Good day!
For a little background, I'm working on implementing "sub-clusters" with
the new connector. That is, I will cluster my Tomcats in pairs to keep
network traffic/memory usage down. The connector would preferentially
fail over between sub-cluster Tomcat pairs. But if both elements of a
Tomcat cluster go down, I'd like the connector to fail over to another
sub-cluster (with session loss).
I realize that the connector "Domain" directive was meant for this. My
reading of the docs, however, would imply that since the jvmRoute is the
domain (with sticky sessions), the requests going into a domain would be
load-balanced between domain members.
jvmRoute -> worker.xxx.route. The domain influences the failover
decision, as you want it.
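For concreteness, a minimal sketch of that mapping (worker name, port
and route here are illustrative): the route on the JK side must match
the jvmRoute that Tomcat appends to the session ID.

# workers.properties (JK side): route must equal Tomcat's jvmRoute
worker.w1.type=ajp13
worker.w1.host=localhost
worker.w1.port=8031
worker.w1.route=tc1
# matching server.xml (Tomcat side), for reference:
#   <Engine name="Catalina" defaultHost="localhost" jvmRoute="tc1">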
This would in turn imply (for those of us who are paranoid) that we'd
have to do synchronous session replication at the Tomcat level to
*ensure* that under heavy load sessions would be replicated before a
user is balanced between cluster members. This would then, potentially,
slow responses to the client, since sending the response would be
delayed until session replication is complete. (Perhaps I should try
everything to confirm all this, since the docs are a wee bit vague...)
So, I thought I'd be sneaky and try other directives. Hence:
You can set jvmRoute to the worker name (or, as you suggest, the worker
route), use session stickiness, and relate different workers by using
the same domain name.
Then, if all workers are up, stickiness will be effective; if a worker
dies, JK will choose another worker in the same domain, and only if the
whole domain is dead will it choose any other worker.
I'm pretty sure you can simulate this somehow with "redirect", but the
domain concept also works once you increase the domain size. In the end
both concepts are very similar.
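As a minimal sketch of the domain variant (the domain names tc12/tc34
are illustrative, matching your pairing):

# Pair w1/w2 shares one domain, pair w3/w4 another.
worker.w1.domain=tc12
worker.w2.domain=tc12
worker.w3.domain=tc34
worker.w4.domain=tc34
# With sticky sessions, requests for a failed worker go first to the
# other member of the same domain; only if the whole domain is down
# does JK pick a worker from the other domain.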
I am trying to use JK 1.2.25 to access Tomcat 6. I am setting up 4
servers paired in two different clusters at the Tomcat level. In the
workers.properties I have:
- 4 workers: w1,w2,w3,w4
- All 4 workers are sub-workers of a load balancer called "router"
- The 4 workers are configured as two "pairs" where the worker pairs
preferentially fail over to each other using the "redirect" directive.
- In order to prepare for some more exotic configuration, I have used
the "route" directive to de-couple the worker name from the
route/jvmRoute, which I use for sticky load balancing.
- The exotic config will be to define multiple load balancers so I can
switch them out at runtime via the url=>worker mapping file (since one
can't reload the workers.properties file at runtime, which is a
bummer!).
You can do "apachectl graceful". Switching via uriworkermap.properties
should be fine though.
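For example, assuming the default mount file reload behaviour
(JkMountFileReload, 60 seconds by default) and a second balancer named,
say, router2, a uriworkermap.properties like this can simply be edited
to move an application between balancers without a restart:

# uriworkermap.properties
/myapp=router
/myapp/*=router
# To switch, change both targets to router2 and save; mod_jk
# rechecks the file periodically and picks up the change.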
I have found that the "redirect" directive is not working all the time.
If w1 and w2 are set to preferentially fail over to each other, it does
not always happen. For example, if I kill the Tomcat w1 is pointing to,
the connector will at times fail over to the Tomcat w4 is pointing to.
If I remove the "route" directive from all workers and ensure the worker
names match the jvmRoute in Tomcat, all is well: I don't get the
incorrect fail-overs. It seems then that the "redirect" pairs always
fail over to each other.
Note that the problem seems to appear after I do a failover that behaves
properly once, restart the failed servers, and try another failover
test.
Is my config incorrect? Do the "route" and "redirect" directives not
play well with each other?
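One thing worth testing, though this is only an assumption since the
docs are not explicit: if JK matches the redirect value against the
members' routes rather than their worker names, then once you set
route=tc1..tc4 a redirect of w2 no longer names any route, which would
explain failovers landing on arbitrary workers. In that case pointing
redirect at the route should restore the pairing:

# Hypothetical fix, untested: redirect by route, not worker name
worker.w1.redirect=tc2
worker.w2.redirect=tc1
worker.w3.redirect=tc4
worker.w4.redirect=tc3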
I've tried this on Windoze (see config below) with all Tomcats on my
desktop. I've also tried it on Linux with the Tomcats on 4 different
servers. Same behaviour.
Thanks very much in advance!
My workers.properties is as follows:
### Global worker maintenance interval in seconds
worker.maintain=30
###
### The list of all workers
###
worker.list=router
###
### The real workers
###
# Set w1 properties
worker.w1.socket_keepalive=1
worker.w1.socket_timeout=20
worker.w1.reply_timeout=20000
worker.w1.retries=2
worker.w1.connection_pool_timeout=60
worker.w1.type=ajp13
worker.w1.host=localhost
worker.w1.port=8031
worker.w1.lbfactor=1
worker.w1.redirect=w2
Or better, instead of the redirect: worker.w1.domain=tc12
worker.w1.route=tc1
# Set w2 properties
worker.w2.socket_keepalive=1
worker.w2.socket_timeout=20
worker.w2.reply_timeout=20000
worker.w2.retries=2
worker.w2.connection_pool_timeout=60
worker.w2.type=ajp13
worker.w2.host=localhost
worker.w2.port=8032
worker.w2.lbfactor=1
worker.w2.redirect=w1
Or better, instead of the redirect: worker.w2.domain=tc12
worker.w2.route=tc2
# Set w3 properties
worker.w3.socket_keepalive=1
worker.w3.socket_timeout=20
worker.w3.reply_timeout=20000
worker.w3.retries=2
worker.w3.connection_pool_timeout=60
worker.w3.type=ajp13
worker.w3.host=localhost
worker.w3.port=8033
worker.w3.lbfactor=1
worker.w3.redirect=w4
Or better, instead of the redirect: worker.w3.domain=tc34
worker.w3.route=tc3
# Set w4 properties
worker.w4.socket_keepalive=1
worker.w4.socket_timeout=20
worker.w4.reply_timeout=20000
worker.w4.retries=2
worker.w4.connection_pool_timeout=60
worker.w4.type=ajp13
worker.w4.host=localhost
worker.w4.port=8034
worker.w4.lbfactor=1
worker.w4.redirect=w3
Or better, instead of the redirect: worker.w4.domain=tc34
worker.w4.route=tc4
# router is a load balancer
worker.router.type=lb
worker.router.balance_workers=w1,w2,w3,w4
worker.router.sticky_session=True
worker.router.sticky_session_force=False
worker.router.method=S
worker.router.recover_time=30
You can choose other names as domain names, as long as the correct pairs
share the same name.
###
### The status worker
###
worker.list=jkstatus
worker.jkstatus.type=status
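Note that the status worker is only reachable once it is mapped in the
web server; a minimal sketch (the URL path is illustrative):

# In uriworkermap.properties (or an equivalent JkMount line):
/jkstatus=jkstatus

The status page then lets you inspect the balancer and change member
settings such as activation, route, redirect, and domain at runtime,
which partly offsets not being able to reload workers.properties.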
Brian D. Horblit
Senior Principal Engineer
Thomson Healthcare
(303) 486-6697
(800) 525-9083 x 6697
www.thomsonhealthcare.com
[EMAIL PROTECTED]
Regards,
Rainer