Hello Chris,

> No, the webapp is selected first, then the path is trimmed (if
> necessary) and then the longest-match wins when matching against
> url-patterns configured in that webapp's web.xml.
...
> Sorry, longest match wins for URI matching once the webapp has been
> selected.

Makes sense.  I wanted to make sure I was following the logic correctly.


> There was a recent bugfix to TC 7[1] to fix something
> related, but that was during redeployment and I suspect that if the
> webapp is stopped you'd get some other behavior.

I think I read about that bug fix in TC7.  I believe it changed the behavior
so that threads pause while a context is reloading/restarting, instead of
sending a 404 immediately, if I recall correctly.

> I tend not to stop
> webapps so I've never bothered to play around with it.

The two scenarios that led me to all of this in the first place were dead
applications (crashed web apps) and situations where a WAR is experiencing
problems and must be stopped for some period of time (possibly because
back-end resources are unavailable or whatever).  The former is, sadly,
more frequent than the latter.


> No, this is definitely the place to have this discussion.

Here's a little back-story to help understand my approach:
I have a few web servers (apache httpd) sitting in front of a handful of
application servers.  The web servers are currently configured to use a
single Proxy Balancer with a few Balancer Members for each "cluster" of
tomcat servers.  Tomcat has, of course, been configured to replicate all
sessions in each cluster.  I can drop app nodes left and right, and as long
as one is still up, requests still get serviced.  The problem here is that
if a single server has an application crash, Apache will continue to send
requests through that Balancer Member, resulting in intermittent 404s for
people.
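For context, the current single shared balancer looks roughly like this
(hostnames, ports, and balancer names are made up for illustration, not my
actual configuration):

```apache
# Sketch of the current setup: one balancer shared by every webapp.
<Proxy balancer://tc-cluster>
    BalancerMember ajp://tomcat1.example.com:8009 route=tomcat1
    BalancerMember ajp://tomcat2.example.com:8009 route=tomcat2
</Proxy>

ProxyPass        / balancer://tc-cluster/ stickysession=JSESSIONID
ProxyPassReverse / balancer://tc-cluster/
```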

Here's where the "magic" I am attempting to set up comes in...  Assuming
Tomcat will return HTTP 503 for a crashed/stopped application, I can tell
Apache to "failonstatus" 503, which will put the worker for that Balancer
Member into the error state for a while, thereby preventing that server
from being used.  The problem this causes is that if even one application
crashes or needs to be stopped on all servers, then all servers in the
cluster will be marked unavailable by their BalancerMember workers and no
other apps in the cluster will serve requests.  To fix this, I take it one
step further by creating an entire Proxy Balancer for *each* web
application, so the Balancer Members are now on a per-context basis, so to
speak.  When /foo crashes on "tomcat1", the BalancerMember entering the
"error" state only affects requests to the foo context on the "tomcat1"
server.
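A rough sketch of what I mean, again with made-up hostnames and balancer
names; failonstatus is the mod_proxy balancer parameter that forces a
member into the error state when the backend returns one of the listed
status codes:

```apache
# One Proxy Balancer per web application, so an error state on
# /foo's member on tomcat1 doesn't affect /bar (or /foo on tomcat2).
<Proxy balancer://foo-cluster>
    BalancerMember ajp://tomcat1.example.com:8009 route=tomcat1
    BalancerMember ajp://tomcat2.example.com:8009 route=tomcat2
</Proxy>
<Proxy balancer://bar-cluster>
    BalancerMember ajp://tomcat1.example.com:8009 route=tomcat1
    BalancerMember ajp://tomcat2.example.com:8009 route=tomcat2
</Proxy>

# failonstatus=503 puts the responding member into the error state,
# keeping it out of rotation until its retry interval (default 60s) expires.
ProxyPass /foo balancer://foo-cluster/foo failonstatus=503 stickysession=JSESSIONID
ProxyPass /bar balancer://bar-cluster/bar failonstatus=503 stickysession=JSESSIONID
```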

I haven't seen anyone set it up exactly like this, but it sure seems like a
really, really good way to achieve high availability at a granular level.
And as I read through the Apache httpd docs and learned about Tomcat's
clustering, I figured this was all an intentional design philosophy for
others to follow.  So I was certainly confused when I saw Tomcat returning
404 for things it could "find" but that were "unavailable".  I had one
person report that WebSphere returns 503 for applications turned "off", and
I read an old article saying JBoss does the same; I have no direct proof of
either.  So I'm wondering whether they've tried to do with their packages
what I'm trying to accomplish with Apache httpd and Tomcat.

Thoughts?

Kyle Harper

This communication and any attachments are confidential, protected by 
Communications Privacy Act 18 USCS § 2510, solely for the use of the intended 
recipient, and may contain legally privileged material. If you are not the 
intended recipient, please return or destroy it immediately. Thank you.
