Hi Roman,
my scenario is similar to what you describe, but with a small difference:
I have one or more contexts served by all of the nodes in the cluster,
and other contexts served only by the Master node.
The Master node manages the contexts deployed in HA Singleton.
Only one Master node is active in the cluster, and the cluster manages
the Master election.
When the Master node fails, the cluster elects a new Master node, which
then takes over serving those contexts.
Using mod_cluster, each back-end node can advertise to Apache the
contexts it is serving, so in this case a Master node switch is
automatically propagated to the front-end.
I would like to implement a similar solution using ProxyPass. Why?
Because I think the "503" problem may be caused by JBoss GC
(stop-the-world) activity.
When it occurs on the HA Singleton Master node, it makes the
web applications unavailable, and Apache considers the node "off-line"
for 1 minute.
No problem arises for standard cluster applications running on all
nodes, because another node can respond.
I've seen that the "retry=0" parameter on ProxyPass should avoid the
1-minute "blackout" that generates the 503 errors.
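For reference, a minimal sketch of what I have in mind (the hostname
and context path are placeholders, not my real configuration):

```apache
# retry=0 tells mod_proxy to retry a worker that is in error state
# immediately on the next request, instead of waiting the default
# 60 seconds before considering it again.
ProxyPass        /myapp http://master-node:8080/myapp retry=0
ProxyPassReverse /myapp http://master-node:8080/myapp
```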
I'm not able to find the same behavior using mod_cluster.
Federico
On 04/07/2014 05:03, Roman Jurkov wrote:
Federico,
A 503 would be due to a node not responding; with mod_cluster you can
configure your cluster to disable a node after one or more failures.
Now, back to mod_proxy_balancer and mod_proxy: if I understand
correctly, you have a scenario where one or more contexts are served
by all of the nodes in the cluster, and another context is served only
by some nodes, i.e.:
Node1 => /foo,/bar
Node2 => /foo
Node3 => /foo,/bar
You want /foo to be served by all of the nodes and /bar just by Node1,
with Node3 as a hot standby (failover). If that is the case, you would
probably need multiple load balancers defined with two different
Location blocks, something like this:
<Proxy balancer://foocluster>
    BalancerMember http://node1:8080
    BalancerMember http://node2:8080
    BalancerMember http://node3:8080
</Proxy>

<Proxy balancer://barcluster>
    BalancerMember http://node1:8080
    # the hot standby on node3
    BalancerMember http://node3:8080 status=+H
</Proxy>

<Location /foo>
    ProxyPass balancer://foocluster
    ProxyPassReverse http://node1:8080
    ProxyPassReverse http://node2:8080
    ProxyPassReverse http://node3:8080
</Location>

<Location /bar>
    ProxyPass balancer://barcluster
    ProxyPassReverse http://node1:8080
    ProxyPassReverse http://node3:8080
</Location>
I haven't tested the above configuration.
-Roman