Thanks,
"stopped" is a very good name!
Peter
Georg v. Zezschwitz wrote:
On Tue, Apr 26, 2005 at 01:10:13PM +0200, Peter Rossbach wrote:
> I name the flag deactived.
Sorry for a lurker's comment from the background (and I am neither
a native speaker).
But I guess it should be named:
"deactivated", not "deactived"
Hey Mladen,
Mladen Turk wrote:
Peter Rossbach wrote:
I name the flag deactived.
Look, can we postpone that to the next release?
I would really appreciate that, because the 1.2.11
should be a bug-fix release.
Changing that would break the current configurations,
and IMO we could think of something smarter in the future.
Adding an…
On Tue, Apr 26, 2005 at 01:10:13PM +0200, Peter Rossbach wrote:
> I name the flag deactived.
Sorry for a lurker's comment from the background (and I am neither
a native speaker).
But I guess it should be named:
"deactivated", not "deactived"
Also, based on Mladen's points, what about a more stri…
Mladen Turk wrote:
Peter Rossbach wrote:
Are you sure this is an absolute necessity?
But in a cluster the sessions are replicated, and we must not wait.
Please, can we add a flag active=false/true to test my idea? What do
you think? I can start a quick experiment and send you the diffs for
review.
Well, I'm -0 on the su…
Hey Mladen,
I have successfully made a test implementation to deactivate a worker.
It works very well for me on Windows.
Now I have made a second test under Apache 2.0.54 and Linux 9.3 with the
worker and prefork MPMs.
I think in two hours I will be ready to check in the change. :-)
Peter
Peter Rossbach wrote:
Hey…
Hey Mladen,
Mladen Turk wrote:
Peter Rossbach wrote:
Hey Mladen,
I use Tomcat in cluster mode, but your answer is a little bit too easy.
Sticky sessions are the only way for most applications to stay consistent.
Session replication is only a secondary feature for when a failure occurs.
Why can't we add a flag that deactivates a worker or changes the sem…
Mladen Turk wrote:
Peter Rossbach wrote:
Hmm, that disabling feature does not work in my configuration.
worker.node2.domain=A
You don't need a domain unless you have session replication.
Or you can just set jvmRoute="A" on each Tomcat instance,
enable session replication, and then your config will work i…
Peter Rossbach wrote:
Hmm, that disabling feature does not work in my configuration.
worker.node2.domain=A
You don't need a domain unless you have session replication.
What is wrong in my mod_jk configuration?
worker.xxx.sticky_session=false
All sticky sessions will need to time out, and
then you can re…
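The distinction the thread keeps circling around can be made concrete. Below is a minimal sketch in plain Python (NOT mod_jk's actual C code, and the state names are this sketch's own labels): a "disabled" worker gets no new sessions but still serves its existing sticky sessions until they time out, while a "stopped" (deactivated) worker gets no traffic at all, so even sticky requests fail over to another node.

```python
# Hypothetical model of lb worker selection with three states.
# "disabled": no new sessions, sticky sessions still routed here.
# "stopped":  no traffic at all; sticky requests fail over.
ACTIVE, DISABLED, STOPPED = "active", "disabled", "stopped"

def pick_worker(workers, session_route=None):
    """Pick a worker name from `workers` (dict: name -> state).

    `session_route` is the request's sticky-session route,
    or None for a request that has no session yet.
    """
    # Sticky request: honour the route unless that worker is stopped.
    if session_route is not None and workers.get(session_route) in (ACTIVE, DISABLED):
        return session_route
    # New session, or failover: only fully active workers qualify.
    for name, state in workers.items():
        if state == ACTIVE:
            return name
    return None  # no worker available

workers = {"node1": DISABLED, "node2": ACTIVE}
assert pick_worker(workers, session_route="node1") == "node1"  # sticky still served
assert pick_worker(workers) == "node2"                         # new sessions avoid node1
workers["node1"] = STOPPED
assert pick_worker(workers, session_route="node1") == "node2"  # sticky fails over
```

This is why "all sticky sessions will need to time out" before a merely disabled worker goes quiet, whereas the proposed stopped/deactivated flag would silence it immediately.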
Hmm, that disabling feature does not work in my configuration.
I have made a test with current Tomcat head and mod_jk 1.2.10.
===
worker.list=lb,status
worker.node1.port=9012
worker.node1.host=127.0.0.1
worker.node1.type=ajp13
worker.node1.cachesize=200
worker.node1.cache_timeout=60
worker.node1.disable…
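The snippet above is cut off in the archive. For orientation, a complete minimal workers.properties along these lines could look as follows. The node1 values are the ones from the mail; node2 (and its port), the lb worker, and the status worker are assumptions added to make the example self-contained, and `disabled` / `balance_workers` are the mod_jk 1.2.x property names as I understand them, so double-check them against the mod_jk documentation for your version:

```
worker.list=lb,status

worker.node1.port=9012
worker.node1.host=127.0.0.1
worker.node1.type=ajp13
worker.node1.cachesize=200
worker.node1.cache_timeout=60
# Take node1 out of rotation for new sessions:
worker.node1.disabled=true

# Second node (hypothetical port):
worker.node2.port=9022
worker.node2.host=127.0.0.1
worker.node2.type=ajp13

worker.lb.type=lb
worker.lb.balance_workers=node1,node2

worker.status.type=status
```

With the status worker mapped (e.g. via JkMount to a URL of your choosing), the worker states can then be inspected from a browser.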
Peter Rossbach wrote:
Hello Mladen,
What I want is that we can stop mod_jk from sending requests
to a single node. Why can't we stop all requests for a worker when we
set a flag worker.node1.active=false? OK, then no application
on this node gets requests, but we can control the Tomcat restar…
Hello Mladen,
yes Remy, we currently have no chance inside Tomcat. :-(
My scenario is really important when automatic liveness monitoring
detects that something is wrong inside an application or node (an Out Of
Memory exception, all threads hanging, a detected deadlock, or another
nice application-relevant bug…
Mladen Turk wrote:
I think that this is Tomcat's responsibility.
If it is inside a redeployment, it should hold the request until finished.
For mod_jk you can set socket_timeout, which will cause
failover to another worker (if the redeployment takes more than 60 seconds).
There is no way to hold a part…
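In workers.properties, Mladen's socket_timeout suggestion is a single per-worker line. The 60-second value mirrors the redeployment window he mentions; treat this as a sketch and verify the property against the mod_jk documentation for your version:

```
# If node1 stops answering on its socket for 60 seconds
# (e.g. during a redeployment), fail over to another worker.
worker.node1.socket_timeout=60
```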
Peter Rossbach wrote:
Goal
- restart an application at a single node
Wow.
Problem
- How can we configure the lb to stop traffic for a specific worker/node
  and for a single application?