The broker reads its sequence id from the store when it has the lock,
so duplicate detection will ensure a write failure on the next send by
the old master. The worst that can happen is duplicate dispatch:
messages that were in flight on the old master will be resent by the
slave.
However, to mitigate this, a small lockKeepAlivePeriod is required.
This ensures that the old master quickly detects that it has lost its
lock.
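
For illustration, a minimal sketch of how that could be wired up
programmatically (normally you would do this in activemq.xml; the data
source and the interval values below are placeholders, not
recommendations):

  import javax.sql.DataSource;
  import org.apache.activemq.broker.BrokerService;
  import org.apache.activemq.store.jdbc.JDBCPersistenceAdapter;
  import org.apache.activemq.store.jdbc.LeaseDatabaseLocker;

  public class LeaseLockerConfig {
      // assumes a DataSource has been configured elsewhere
      public static BrokerService configure(DataSource dataSource) throws Exception {
          JDBCPersistenceAdapter jdbc = new JDBCPersistenceAdapter();
          jdbc.setDataSource(dataSource);

          // the lease locker used for master election
          LeaseDatabaseLocker locker = new LeaseDatabaseLocker();
          locker.setLockAcquireSleepInterval(10000); // how often a slave retries the lease; placeholder, ms
          jdbc.setLocker(locker);

          // renew (keep alive) the lease well within the retry interval so a
          // paused master notices quickly that it has lost the lock
          jdbc.setLockKeepAlivePeriod(2000); // placeholder, ms

          BrokerService broker = new BrokerService();
          broker.setPersistenceAdapter(jdbc);
          return broker;
      }
  }

The point is simply that lockKeepAlivePeriod should be small relative
to the lease interval, so losing the lease is noticed on the next
keep-alive rather than much later.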

Do you know why there are long GC pauses? Maybe you need to cache
fewer messages to reduce the GC overhead.
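
If it helps, one possible way to cache fewer messages is a default
destination policy along these lines (just a sketch; whether you
disable the cursor cache or only lower the memory limit, and the
actual limit value, depend on your setup):

  import org.apache.activemq.broker.BrokerService;
  import org.apache.activemq.broker.region.policy.PolicyEntry;
  import org.apache.activemq.broker.region.policy.PolicyMap;

  public class ReduceMessageCaching {
      // installs a default policy that keeps fewer messages in broker memory
      public static void apply(BrokerService broker) {
          PolicyEntry policy = new PolicyEntry();
          policy.setUseCache(false);               // do not cache persistent messages for fast dispatch
          policy.setMemoryLimit(16 * 1024 * 1024); // per-destination limit in bytes; placeholder value

          PolicyMap policyMap = new PolicyMap();
          policyMap.setDefaultEntry(policy);
          broker.setDestinationPolicy(policyMap);
      }
  }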

On 16 April 2014 19:18, oliverd <oliver.deck...@hotmail.com> wrote:
> when using JDBCPersistenceAdapter with LeaseDatabaseLocker, the master node
> needs to extend the lease for the next interval
>
> in case there is a long-running GC (longer than the lease extend time), then
> the slave will take over and the old master will detect, after the GC
> completes, that it has to step back (and stop)
>
> during testing it takes some time until the old master really stops (can be
> up to 20 secs during tests when there are many client connections that
> stress the node); during that time clients connect to both masters until the
> old master has stopped its transports
>
> I have seen clients getting SQL exceptions due to duplicate keys on insert
> into the MSGS table during that time, so I was wondering what the risk is
> of potential inconsistencies in client state (messages appear to have
> failed) or even in the message store
>
> is there any chance that the message store can get inconsistent in such a
> situation?
> as longer GCs cannot be prevented under all circumstances, a message store
> inconsistency as a follow-up issue would add a certain risk to the
> LeaseDatabaseLocker option
>
>
>
>
> --
> View this message in context: 
> http://activemq.2283324.n4.nabble.com/LeaseDatabaseLocker-and-parallel-masters-tp4680368.html
> Sent from the ActiveMQ - User mailing list archive at Nabble.com.



-- 
http://redhat.com
http://blog.garytully.com
