I've been experimenting with this setup for work.
I have a master/slave HA setup. Let's call them artemis001 and artemis002.
We have a VIP, called artemisvip, that points to artemis001.
I have a client app that connects with the URL
tcp://artemisvip:61616?ha=true
If I take down artemis001, I noticed the client doesn't always reconnect.
It seems that using "reconnectAttempts=-1" or "reconnectAttempts=500"
both work. In other words, it looks like reconnectAttempts=6
(along with the other connection parameters I have) didn't allow sufficient
time for failover and failback.
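For concreteness, this is roughly how the client side looks (a minimal
sketch; the class name, the queue-less connect, and the retry values are
placeholders rather than my exact settings):

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;

import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class VipConnectExample {
    public static void main(String[] args) throws JMSException {
        // All reconnection behaviour is driven by URL parameters:
        // reconnectAttempts=-1 retries forever, while retryInterval,
        // retryIntervalMultiplier and maxRetryInterval (milliseconds)
        // control the backoff between attempts.
        String url = "tcp://artemisvip:61616?ha=true"
                + "&reconnectAttempts=-1"
                + "&retryInterval=1000"
                + "&retryIntervalMultiplier=1.5"
                + "&maxRetryInterval=5000";
        ConnectionFactory factory = new ActiveMQConnectionFactory(url);
        try (Connection connection = factory.createConnection()) {
            connection.start();
            System.out.println("Connected via " + url);
        }
    }
}

With a finite reconnectAttempts the client simply gives up once the attempts
are exhausted, which is consistent with 6 attempts not covering the
failover-plus-failback window.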
Oh, that's unfortunate. I used the "< raw >" tag on Nabble.
Here is the artemis (2.9) client code snippet (hopefully this works):
public class SampleProducer {
    public static void main(String[] args) throws JMSException,
            InterruptedException {
        String brokerUrl = "tcp://artemis:61616";
        // ... (the rest of the snippet was cut off)
    }
}
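Since that keeps getting mangled, here is a fuller sketch of what the app
does (the queue name "exampleQueue", the one-second send loop, and the
reconnect parameters are stand-ins, not necessarily the original code):

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.Session;

import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class SampleProducer {
    public static void main(String[] args) throws JMSException,
            InterruptedException {
        String brokerUrl = "tcp://artemis:61616?ha=true&reconnectAttempts=-1";
        ConnectionFactory factory = new ActiveMQConnectionFactory(brokerUrl);
        try (Connection connection = factory.createConnection()) {
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer =
                    session.createProducer(session.createQueue("exampleQueue"));
            // Send one message per second so failover/failback is easy to observe.
            while (true) {
                producer.send(session.createTextMessage("ping " + System.currentTimeMillis()));
                Thread.sleep(1000);
            }
        }
    }
}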
In my setup, I have one master and one slave, plus the sample app above.
Steps:
1. Start up the above app
2. Kill -9 artemis master
3. Bring artemis master back up
Once in a while, my app gets stuck looping on the following exception (after
step 3).
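One way to log exactly what the client hits while it loops is an
ExceptionListener on the connection from the snippet above (the handler body
here is just a placeholder):

// Register before connection.start(); fires whenever the client
// detects a failure on the underlying connection.
connection.setExceptionListener(e ->
        System.err.println("Client saw: " + e.getMessage()));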
Any ideas?
Thanks for the insight, Justin.
Albert
Thanks for the quick response, Justin.
I've configured Artemis to use replication because our infrastructure for
shared storage isn't... great.
So for my situation at work, the hypervisors tend to randomly die on us
(taking the VMs down with them). We have 3 zones/hypervisors. I wanted a
single master with multiple slaves spread across the zones.
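For reference, the ha-policy side of broker.xml looks roughly like this (a
sketch of the stock replication settings; connectors and the
cluster-connection are omitted):

<!-- master broker.xml -->
<ha-policy>
   <replication>
      <master>
         <!-- let a restarted master take its role back from the slave -->
         <check-for-live-server>true</check-for-live-server>
      </master>
   </replication>
</ha-policy>

<!-- slave broker.xml -->
<ha-policy>
   <replication>
      <slave>
         <!-- fail back to the master once it returns -->
         <allow-failback>true</allow-failback>
      </slave>
   </replication>
</ha-policy>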
I just want to confirm that this is the expected behaviour. I have 1 master
with 3 slaves (the brokers are hosted on VMs that tend to randomly die). I'm
currently testing this on the latest source code from GitHub.
Here's the scenario:
1) Start master
2) Start slave1
3) Start slave2
4) Kill master