Thanks. I'm pretty sure AMQ-5082 is what I'm seeing on 5.11.1. I'll see if I can get the cycles to set up a unit test to replicate the issue.
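
Something along these lines is what I have in mind for the test. It's only a rough, untested sketch: it assumes a Curator TestingServer standing in for the ZooKeeper ensemble, and broker1.xml/broker2.xml/broker3.xml holding the same replicatedLevelDB configuration I'm running in VirtualBox (zkAddress pointed at localhost:2181, transports on 61616-61618).

import java.net.URI;
import java.util.ArrayList;
import java.util.List;

import javax.jms.Connection;

import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.broker.BrokerFactory;
import org.apache.activemq.broker.BrokerService;
import org.apache.curator.test.TestingServer;

public class ReplicatedLevelDBElectionTest {

    public static void main(String[] args) throws Exception {
        // Stand-in for the ZooKeeper ensemble; a single node is enough
        // to reproduce the "ZK went away and came back" scenario.
        TestingServer zk = new TestingServer(2181);

        // Start the three brokers on background threads: with replicated
        // LevelDB the slaves block inside start() until they win the
        // master election, so start() can't be called inline.
        List<BrokerService> brokers = new ArrayList<BrokerService>();
        for (String config : new String[] {"broker1.xml", "broker2.xml", "broker3.xml"}) {
            final BrokerService broker = BrokerFactory.createBroker(new URI("xbean:" + config));
            brokers.add(broker);
            new Thread(new Runnable() {
                public void run() {
                    try {
                        broker.start();
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }
            }).start();
        }

        // Give the cluster time to elect a master, then take ZooKeeper
        // away for longer than the ZK session timeout and bring it back.
        // This is the point where my VirtualBox clusters seem to wedge.
        Thread.sleep(30000);
        zk.stop();
        Thread.sleep(30000);
        zk.restart();

        // If re-election works, exactly one broker should be accepting
        // connections again and the failover transport will find it.
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(
                "failover:(tcp://localhost:61616,tcp://localhost:61617,tcp://localhost:61618)");
        Connection connection = factory.createConnection();
        connection.start();   // hangs if no master was re-elected
        connection.close();

        for (BrokerService broker : brokers) {
            broker.stop();
        }
        zk.close();
    }
}

The sleeps are obviously placeholders; the interesting part is bouncing ZooKeeper for longer than the ZK session timeout and then checking whether a client can still reach a master afterwards.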
On Wed, Mar 4, 2015 at 5:52 AM, Tim Bain <tb...@alumni.duke.edu> wrote:
> People reported similar high-level symptoms against 5.10.0 several months
> back (you can search the archives on Nabble), and I don't recall any
> discussion of anyone finding a solution.  But JIRA is the authoritative
> place to find out whether anyone has reported and/or fixed this issue (or
> any other).
>
> On Mar 3, 2015 8:23 AM, "James A. Robinson" <jim.robin...@gmail.com> wrote:
>
>> Hi folks,
>>
>> While testing out ActiveMQ I've been building clusters in VirtualBox.
>> I've been spinning up two 3-node Replicated LevelDB stores on my laptop.
>>
>> I've noticed that the clusters can sometimes get into a state where
>> none of the nodes is the master.  It appears to me as though it's an
>> issue with talking to ZooKeeper.
>>
>> I'm assuming the issue is related to how few CPU cycles the clusters
>> are getting in this environment, but the fact that the clusters don't
>> ever recover makes me wonder if Replicated LevelDB is still a work in
>> progress.
>>
>> I was testing it because I saw that Master/Slave pairs were deprecated
>> in favor of one of the share-everything solutions or Replicated LevelDB.
>>
>> Typically I'll see a message from this code:
>>
>> ./activemq-leveldb-store/src/main/scala/org/apache/activemq/leveldb/replicated/groups/ChangeListener.scala:102:
>>     ChangeListenerSupport.LOG.warn("listeners are taking too long to process the events")
>>
>> and then nothing.  No more attempts to talk to the ZooKeeper cluster,
>> no attempts to elect a new master.
>>
>> I haven't dug deeply into the issue yet; I wanted to ask you folks
>> about the status of the code first.
>>
>> Jim
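
P.S. The next time one of the clusters wedges I'm also going to dump the election state straight out of ZooKeeper to see whether the brokers' sessions are still registered.  If I'm reading the groups code correctly, each broker registers a child znode under the configured zkPath, so something like the following (connect string and zkPath are the ones from my test config) should show who ZooKeeper still thinks is alive:

import java.util.List;

import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class DumpElectionState {
    public static void main(String[] args) throws Exception {
        // Connect string and zkPath below match my test configuration.
        ZooKeeper zk = new ZooKeeper("localhost:2181", 10000, new Watcher() {
            public void process(WatchedEvent event) {
                // no-op; we only do synchronous reads here
            }
        });

        String zkPath = "/activemq/leveldb-stores";
        List<String> members = zk.getChildren(zkPath, false);
        System.out.println("registered members: " + members);
        for (String member : members) {
            byte[] data = zk.getData(zkPath + "/" + member, false, null);
            System.out.println(member + " -> " + new String(data, "UTF-8"));
        }
        zk.close();
    }
}

If the members never reappear after ZooKeeper recovers, that would fit the "listeners are taking too long to process the events" warning and the cluster going silent afterwards.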