This is the first I'd heard of a replicated KahaDB implementation. I'd
heard of multi-KahaDB (mKahaDB), but that's multiple KahaDB databases on a
single broker (split by destination) rather than replicated. Maybe someone
else can provide more info...
On Oct 5, 2016 1:51 AM, "mlange" wrote:
Thanks for providing a good idea of what to expect; it's good to be aware of
this. I did notice, upon searching, that some work has been done to get KahaDB
replicating; is that completely abandoned, or is it still something that
might get implemented in the somewhat near future?
Thank you. Unfortunately there's currently no LevelDB expert active on
this mailing list, so most LevelDB questions with any degree of complexity
go unanswered.
On Tue, Oct 4, 2016 at 10:42 PM, mlange wrote:
> Created a JIRA https://issues.apache.org/jira/browse/AMQ-6453 about this
> issue; please let me know if you need more information.
Created a JIRA https://issues.apache.org/jira/browse/AMQ-6453 about this
issue; please let me know if you need more information.
--
View this message in context:
http://activemq.2283324.n4.nabble.com/ActiveMQ-ReplicatedLevelDB-corruption-tp4716831p4717515.html
Sent from the ActiveMQ - User mailing list archive at Nabble.com.
Gave the system the weekend to see if it could recover automatically...
alas, that's not the case.
However, I then stopped all brokers, changed one of them to 'replicas="1"'
(rather than 3), started it (in a kind of single-node mode), and then started
the other brokers one by one so they could catch up.
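For reference, the change was in the broker's persistence adapter. A sketch of
that config (the zkAddress, zkPath, and directory values here are placeholders,
not my actual setup):

```xml
<!-- Temporary recovery config: replicas dropped from 3 to 1, so this one node
     satisfies the quorum ((replicas/2)+1 = 1) and can become master alone.
     zkAddress/zkPath/directory are placeholder values. -->
<persistenceAdapter>
  <replicatedLevelDB
      directory="${activemq.data}/leveldb"
      replicas="1"
      bind="tcp://0.0.0.0:61619"
      zkAddress="zk1:2181,zk2:2181,zk3:2181"
      zkPath="/activemq/leveldb-stores"/>
</persistenceAdapter>
```

Once the other brokers had re-synced from this node, the setting would be
changed back to replicas="3".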
Concluded the test now; no good news though.
After sending the 100,000 messages (which got consumed by services that in
turn also produced new messages, so there was a flow from one queue to another
and another, etc., resulting in about 1,500,000 messages in the few hours
that this test ran) I restarted the brokers.
Possibly of note: I had the sync option at its default (quorum_mem).
I have moved each broker's data directory aside as ".org" (for
safekeeping) and will try to see if quorum_disk is the way to prevent this
situation from happening. It will take a little while, though, to get definite
results.
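The change being tested is the sync attribute on the same adapter. A sketch,
again with placeholder zkAddress/zkPath/directory values:

```xml
<!-- sync changed from the default quorum_mem (acknowledge once a quorum of
     nodes has the write in memory) to quorum_disk (acknowledge only once a
     quorum has flushed it to disk). Placeholder values throughout. -->
<persistenceAdapter>
  <replicatedLevelDB
      directory="${activemq.data}/leveldb"
      replicas="3"
      bind="tcp://0.0.0.0:61619"
      zkAddress="zk1:2181,zk2:2181,zk3:2181"
      zkPath="/activemq/leveldb-stores"
      sync="quorum_disk"/>
</persistenceAdapter>
```

quorum_disk trades write latency for durability: a power loss or crash on a
quorum of nodes should no longer be able to lose acknowledged writes.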
Yes, the three brokers are on separate (virtual) machines that write to their
own disk.
It's hard to determine (at this point) whether the original master was
already corrupted to begin with; at the least, it got corrupted after
starting said broker again.
Just to confirm: your three brokers are writing their LevelDB files to
independent, separate disk locations, giving you three separate sets of
LevelDB files. Right?
Is the original master's set of data files corrupted? Or is it just the
two slaves for which this happened?
On Sep 23, 2016 5:44 AM