Justin, are you saying that there is a command (or set of commands) that would
make the client automatically fail over to a different broker in the situation
Anton described, provided it is executed before the scale-in occurs?
AWS has a feature called ASG lifecycle hooks that allow a host to be
notified (and
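For context, a lifecycle hook of that kind is registered against the Auto
Scaling group and then completed from the instance once the broker has been
drained. A rough sketch with the AWS SDK for Java v2 follows; the group name,
hook name, instance id and timeout are placeholders, not values from this
thread.

import software.amazon.awssdk.services.autoscaling.AutoScalingClient;
import software.amazon.awssdk.services.autoscaling.model.CompleteLifecycleActionRequest;
import software.amazon.awssdk.services.autoscaling.model.PutLifecycleHookRequest;

public class BrokerScaleInHook {
    public static void main(String[] args) {
        try (AutoScalingClient asg = AutoScalingClient.create()) {
            // Pause instance termination until the broker has been drained
            // (all names and timeouts here are placeholders).
            asg.putLifecycleHook(PutLifecycleHookRequest.builder()
                    .autoScalingGroupName("artemis-brokers")
                    .lifecycleHookName("drain-broker-before-terminate")
                    .lifecycleTransition("autoscaling:EC2_INSTANCE_TERMINATING")
                    .heartbeatTimeout(600)
                    .defaultResult("CONTINUE")
                    .build());

            // Later, on the instance being terminated: stop the broker
            // gracefully, then tell the ASG it may proceed.
            asg.completeLifecycleAction(CompleteLifecycleActionRequest.builder()
                    .autoScalingGroupName("artemis-brokers")
                    .lifecycleHookName("drain-broker-before-terminate")
                    .instanceId("i-0123456789abcdef0")
                    .lifecycleActionResult("CONTINUE")
                    .build());
        }
    }
}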
Scaling down is different. It is an intentional, graceful procedure
oftentimes initiated administratively. The broker scaling down will
explicitly create the queues on the target broker if they don't exist. Then
it will transfer all message and transaction data to the target broker.
This is like dy
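On the configuration side, the scale-down behaviour described above is driven
by the broker's ha-policy. A minimal broker.xml fragment would look roughly
like the following; the connector name is a placeholder, and a
discovery-group-ref can be used instead of an explicit connector list:

<ha-policy>
   <live-only>
      <scale-down>
         <!-- on shutdown, move this broker's messages to the broker
              reachable through this connector -->
         <connectors>
            <connector-ref>target-broker-connector</connector-ref>
         </connectors>
      </scale-down>
   </live-only>
</ha-policy>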
Hmm, I might have the semantics mixed up here but... How do clients behave
in a dynamically scaling performance cluster then? Clients that join the
cluster during high load can't possibly stop working after the broker they
originally connected to scales down and stops, right?
Doesn't a "failove
It was a design choice for at least the following reasons (there may be
more reasons I'm not recalling at the moment). Every node in a cluster has
its own set of messages. If a client is working with a particular message
or group of messages then when the master fails and the client fails over
to t
Why is that? Is there some design choice behind it, or could this be
implemented as a feature?
If not, is there some solution for failing over to other active nodes with
core clients, or should this just be done with, for instance, an OpenWire
client using the failover protocol?
Br,
Anton
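For comparison, the OpenWire route mentioned above would use the classic
ActiveMQ JMS client with its failover transport. A rough sketch, with
placeholder host names and the usual acceptor port:

import javax.jms.Connection;

import org.apache.activemq.ActiveMQConnectionFactory;

public class OpenWireFailoverClient {
    public static void main(String[] args) throws Exception {
        // The failover: transport reconnects to any broker in the list,
        // not only to a dedicated backup of the one it started on.
        ActiveMQConnectionFactory cf = new ActiveMQConnectionFactory(
                "failover:(tcp://broker-01:61616,tcp://broker-02:61616)?randomize=false");

        Connection connection = cf.createConnection();
        try {
            connection.start();
            // ... create sessions, producers and consumers as usual
        } finally {
            connection.close();
        }
    }
}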
You're seeing the expected behavior. Core clients only fail over to a
backup server.
Justin
On Tue, Apr 7, 2020 at 7:52 AM Jarek Przygódzki wrote:

> I tested this on a cluster created by the
> examples\features\clustered\symmetric-cluster example.
>
> It seems that a client connected to a symmetric cluster never
> automatically fails over to a different node.
I tested this on a cluster created by the
examples\features\clustered\symmetric-cluster example.
It seems that a client connected to a symmetric cluster never automatically
fails over to a different node.
--
Sent from: http://activemq.2283324.n4.nabble.com/ActiveMQ-User-f2341805.html
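For anyone trying to reproduce this, the clustered examples discover the
brokers over a UDP broadcast group, so a JMS client against that cluster
would look roughly like the sketch below. The 231.7.7.7:9876 group address is
the default used in the examples; adjust it if your configuration differs.

import javax.jms.Connection;
import javax.jms.ConnectionFactory;

import org.apache.activemq.artemis.api.core.DiscoveryGroupConfiguration;
import org.apache.activemq.artemis.api.core.UDPBroadcastEndpointFactory;
import org.apache.activemq.artemis.api.jms.ActiveMQJMSClient;
import org.apache.activemq.artemis.api.jms.JMSFactoryType;

public class SymmetricClusterClient {
    public static void main(String[] args) throws Exception {
        // Locate cluster members via the example's UDP broadcast group.
        DiscoveryGroupConfiguration discovery = new DiscoveryGroupConfiguration()
                .setBroadcastEndpointFactory(new UDPBroadcastEndpointFactory()
                        .setGroupAddress("231.7.7.7")
                        .setGroupPort(9876));

        // "WithHA" makes the client follow a live/backup pair; it does not
        // make it jump to another live node of the symmetric cluster.
        ConnectionFactory cf = ActiveMQJMSClient
                .createConnectionFactoryWithHA(discovery, JMSFactoryType.CF);

        Connection connection = cf.createConnection();
        try {
            connection.start();
            // ... send/receive, then stop the node this client is attached to
        } finally {
            connection.close();
        }
    }
}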
I was just suggesting that you check your cluster configuration against the
reference examples, to make sure the nodes are actually working as a group.
Could it be something as simple as the "ha" URI parameter being
case-sensitive? In all the examples it's in lower case, not uppercase as
you have it.
I don't think initial server discovery is the problem. Even if more servers
are explicitly listed, the client never switches away from the server it
first successfully connected to.
var cf = new ActiveMQConnectionFactory(
"(tcp://wildfly-01.domain:8080,tcp://wildfly-02.domain:8
Hi
Have you checked out the various examples for discovery here -
https://github.com/apache/activemq-artemis/tree/master/examples/features/clustered
?
Dave
On Thu, Feb 27, 2020 at 8:18 AM Jarek Przygódzki wrote:
> Hi,
> I've been experimenting with Artemis cluster. Apparently remote clien