We were actually discussing this possibility yesterday, but wondered what the 
behavior of the broker would be in case the shared FS fails. We see two options 
there:
 - let the VM exposing the NFS share fail and be restarted on another ESX host 
by the cluster,
 - or make the VM fault tolerant on ESX so that a replica is readily available 
should the primary host fail.

But I'm especially wondering what the best practice is in case of failure of 
the shared storage backing KahaDB. Do messages just pile up in memory until 
KahaDB comes back, or does the broker plainly fail? This seems to be 
configurable via LeaseLockerIOExceptionHandler, if I understood correctly.
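For reference, here is a minimal sketch of wiring an IO exception handler into 
activemq.xml. The element and attribute names should be checked against the 
schema of your ActiveMQ version; the broker name and the handler settings shown 
here are illustrative assumptions, not a recommended configuration:

```xml
<!-- activemq.xml (sketch): attach an IOExceptionHandler to the broker.
     LeaseLockerIOExceptionHandler is intended for lease-locker setups;
     it is wired here as a plain Spring bean inside <ioExceptionHandler>. -->
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="broker1">
  <ioExceptionHandler>
    <bean xmlns="http://www.springframework.org/schema/beans"
          class="org.apache.activemq.util.LeaseLockerIOExceptionHandler"/>
  </ioExceptionHandler>
</broker>
```

Whether the broker blocks, restarts its transports, or shuts down on a KahaDB 
IO error depends on the handler's properties, so that is the place to encode 
the "pile up vs. fail" policy being discussed here.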

Yannick


-----Original Message-----
From: Matt Pavlovich <mattr...@gmail.com> 
Sent: Thursday, 9 June 2022 22:18
To: users@activemq.apache.org
Subject: Re: ActiveMQ Classic HA without SAN

An NFSv4 server running on a dedicated NAS (or another Linux server running on 
ESX) can provide the shared-storage option for HA data storage using KahaDB.

Two (or more) ActiveMQ servers can mount the same NFSv4 file share and the 
first one to obtain the lock “wins” and becomes the primary (aka master) broker.
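As a sketch of that setup (the export path, hostname, and mount point below 
are placeholders, not values from this thread), each broker host mounts the 
same NFSv4 export and points its KahaDB persistence adapter at a directory on 
it; the first broker to grab the KahaDB lock file becomes the primary:

```xml
<!-- Shell, run on every broker host (placeholder names):
       sudo mount -t nfs4 -o hard,sync nas01:/export/activemq /var/lib/activemq/shared

     activemq.xml (sketch), identical on all brokers so they contend
     for the same lock: -->
<persistenceAdapter>
  <kahaDB directory="/var/lib/activemq/shared/kahadb"/>
</persistenceAdapter>
```

The other brokers block waiting on the lock and take over automatically if the 
primary releases it or dies, which is the shared-file-system master/slave 
behavior described above.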

-Matt

> On Wed, Jun 8, 2022 at 9:23 AM Carbonneaux Yannick < 
> yannick.carbonne...@skyguide.ch> wrote:
> 
>> Hi,
>> 
>> We are planning the deployment of an ActiveMQ Classic cluster and I 
>> am looking for the best way to implement HA in a VMWare environment 
>> on RHEL OS without a SAN.
>> It seems I'm down to Master/Slave with either something like GFS2 (so 
>> a RHEL Cluster) or JDBC with MySQL (or any other db) to ensure that 
>> no messages are lost if we lose an ESX in our VMWare cluster.
>> 
>> Are there any recommendations on how to implement HA in this context?
>> Performance is not really problematic today... but might be in the near future.
>> 
>> Thanks for your help,
>> 
>> ___
>> Yannick Carbonneaux
>> 
>> 
