Re: ActiveMQ deployment

2015-12-01 Thread Rallavagu
Thanks Raffi. This looks great. I will read up on this further. On 12/1/15 6:26 PM, Basmajian, Raffi wrote: Hi Rallavagu, When using "failover:" from the client, if the transport connector has updateClusterClients="true", the client monitors changes in the broker cluster, allowing the client to maintain...

RE: ActiveMQ deployment

2015-12-01 Thread Basmajian, Raffi
Hi Rallavagu, When using "failover:" from the client, if the transport connector has updateClusterClients="true", the client monitors changes in the broker cluster, allowing the client to maintain a list of active brokers to use for connection failover. We've tested this feature and were very impressed...
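
For reference, a minimal sketch of the broker-side setting being described; the connector name and port are illustrative, and rebalanceClusterClients is a related optional attribute, not something from the thread:

    <!-- activemq.xml: push cluster membership changes to connected clients -->
    <transportConnectors>
      <transportConnector name="openwire"
          uri="tcp://0.0.0.0:61616"
          updateClusterClients="true"
          rebalanceClusterClients="true"/>
    </transportConnectors>

With this in place, clients need only a single failover URI and learn the rest of the cluster at runtime.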

RE: ActiveMQ deployment

2015-12-01 Thread Basmajian, Raffi
Jim, "Pure" master slave was deprecated, not "Shared storage" master slave. I can't comment on LevelDB. ...deprecated: http://activemq.apache.org/pure-master-slave.html ..what we use with ActiveMQ 5.11 and NFSv4 storage; it's solid and works well: http://activemq.apache.org/shared-file-system-m

Re: ActiveMQ deployment

2015-12-01 Thread Rallavagu
Raffi, Thanks. This is interesting. What do you mean by "If connection fails, assuming transport connector is configured to update client with cluster changes", given that the client is configured with only "failover:(tcp://eventbus:61616)"? On 12/1/15 4:23 PM, Basmajian, Raffi wrote: That's exactl...

Re: ActiveMQ deployment

2015-12-01 Thread James A. Robinson
So when I was building my system I had wanted to use M/S, but the documentation indicated the old M/S was deprecated in favor of the newer replicated LevelDB store. There are some stability issues with replicated LevelDB (in the code handling the ZooKeeper connection). Do you use an older conf...
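
For reference, a replicated LevelDB store is configured roughly as below; the ZooKeeper addresses and replica count are placeholders, and given the stability concerns above this is a sketch, not a recommendation:

    <persistenceAdapter>
      <replicatedLevelDB
          directory="activemq-data"
          replicas="3"
          bind="tcp://0.0.0.0:0"
          zkAddress="zk1:2181,zk2:2181,zk3:2181"
          zkPath="/activemq/leveldb-stores"/>
    </persistenceAdapter>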

RE: ActiveMQ deployment

2015-12-01 Thread Basmajian, Raffi
That's exactly the configuration we're building: M/S pairs with NoB, connected via a complete graph. All clients connect using the wide-IP "failover:(tcp://eventbus:61616)", that's it. We did this for two reasons: 1) to avoid messy failover configuration on the client, 2) to avoid client reconfig when...
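
On the client side this amounts to a stock JMS connection factory pointed at the wide-IP; a minimal Java sketch (nothing here beyond the URI comes from the thread):

    import javax.jms.Connection;
    import javax.jms.Session;
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class EventBusClient {
        public static void main(String[] args) throws Exception {
            // One URI for every client; the broker pushes cluster updates
            // when updateClusterClients="true" on the transport connector.
            ActiveMQConnectionFactory factory =
                    new ActiveMQConnectionFactory("failover:(tcp://eventbus:61616)");
            Connection connection = factory.createConnection();
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            // ... create producers/consumers as usual ...
            connection.close();
        }
    }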

Re: JDBC 300K messages stuck in queue, consumers receives messages when manually browsed the queue

2015-12-01 Thread Tim Bain
If you control the producers and are willing to lose the unconsumed messages, you can have the producers set the JMSExpiration header so that messages expire if they're not consumed after a certain amount of time. But that won't help you get out of the current problem. For right now, if you can f...
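
In practice JMSExpiration is set indirectly via the producer's time-to-live rather than written by hand; a minimal sketch, with the queue name and the 60-second TTL as arbitrary examples:

    import javax.jms.MessageProducer;
    import javax.jms.Session;

    public final class ExpiringSend {
        // session is assumed to come from an already-started connection
        static void sendWithTtl(Session session) throws javax.jms.JMSException {
            MessageProducer producer =
                    session.createProducer(session.createQueue("ORDERS"));
            // Broker stamps JMSExpiration = send time + TTL; messages still
            // unconsumed when that passes are discarded, not delivered.
            producer.setTimeToLive(60_000L); // 60 seconds, illustrative
            producer.send(session.createTextMessage("payload"));
        }
    }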

Re: ActiveMQ deployment

2015-12-01 Thread Rallavagu
Now I am getting a clearer picture of the options. Essentially, NOB provides load balancing while Master/Slave offers pure failover. If I go with a combination where one Master/Slave cluster is configured in a NOB with another Master/Slave cluster, how would the client failover configuration...

Re: JDBC 300K messages stuck in queue, consumers receives messages when manually browsed the queue

2015-12-01 Thread Takawale, Pankaj
Yes, there are messages in the queue that do not match any of the consumer selectors. I wonder why they ended up in the queue. I've also noticed the high CPU usage that you mentioned. I have one VirtualTopic, and around 50 selector-aware queues on it. The system processes around 40K different JMSXGrou...
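
For context, a selector-aware consumer on a virtual-topic queue looks roughly like this (the destination and selector values are hypothetical, not from the thread); messages that match no consumer's selector simply remain in the queue, consistent with the behavior described above:

    import javax.jms.MessageConsumer;
    import javax.jms.Queue;
    import javax.jms.Session;

    public final class SelectorConsumer {
        static MessageConsumer create(Session session) throws javax.jms.JMSException {
            // Per-subscriber queue of a virtual topic; names are illustrative.
            Queue queue = session.createQueue("Consumer.A.VirtualTopic.Orders");
            // Only messages whose JMSXGroupID matches are dispatched here.
            return session.createConsumer(queue, "JMSXGroupID = 'group-42'");
        }
    }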

RE: ActiveMQ deployment

2015-12-01 Thread Basmajian, Raffi
NoB forwards messages based on consumer demand; it is not for achieving failover. You can get failover on the client with standalone brokers; just use the failover:() protocol from the client. Master/Slave is true failover. -Original Message- From: Rallavagu [mailto:rallav...@gmail.com] Sent: Tuesday...
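
The demand-forwarding side of this lives in the broker configuration as a network connector, not in the client; a minimal sketch with placeholder hostnames:

    <!-- activemq.xml on broker A; brokerB is a placeholder -->
    <networkConnectors>
      <networkConnector name="toBrokerB"
          uri="static:(tcp://brokerB:61616)"
          duplex="true"/>
    </networkConnectors>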

Re: ActiveMQ deployment

2015-12-01 Thread Rallavagu
Thanks again Johan. As failover is configured at the client end, how would the configuration for a combined deployment look? I was thinking along the lines of NOB because messages are forwarded to other broker(s), thus achieving failover capability in case the original broker fails...

Re: ActiveMQ deployment

2015-12-01 Thread Johan Edstrom
You want to combine them. The NOB is for communication, but JMS is still store and forward, i.e., if a machine dies you can have multiple paths, yet whatever was in that machine's persistence store stays “dead” until the machine is revived; that's where the Master/Slave(s) come in. They'll jump...
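
Putting the two together, one plausible shape (all hostnames invented for illustration): each client lists its pair's master and slave in the failover URI, and the pairs are bridged broker-side, typically with the masterslave: network connector variant so that only the active master is bridged:

    Client URI for pair 1:
        failover:(tcp://broker1a:61616,tcp://broker1b:61616)?randomize=false

    <!-- activemq.xml on pair 1, bridging to pair 2 -->
    <networkConnectors>
      <networkConnector name="toPair2"
          uri="masterslave:(tcp://broker2a:61616,tcp://broker2b:61616)"/>
    </networkConnectors>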

Re: AWS disk hosting ActiveMQ is full, UI purge/delete doesn't work

2015-12-01 Thread artnaseef
Only with the broker shut down, of course.

Re: AWS disk hosting ActiveMQ is full, UI purge/delete doesn't work

2015-12-01 Thread artnaseef
If you don't need the persisted messages, then go ahead and delete the entire contents of the kahadb folder, not just the *.log files.
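
As a sketch of the sequence being described, assuming a default install layout and that losing all persisted messages is acceptable:

    # stop the broker first; deleting kahadb under a running broker risks corruption
    ./bin/activemq stop
    rm -rf data/kahadb/*     # removes db-*.log, db.data, db.redo, lock
    ./bin/activemq start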

Re: AWS disk hosting ActiveMQ is full, UI purge/delete doesn't work

2015-12-01 Thread YaoPau
Is it okay if I just delete the kahadb log files (db-1719.log, etc.) from the AWS terminal by running "rm *.log" in the kahadb directory? I don't need any of the past data.