Re: ActiveMQ duplex network connector dead lock [5.13.1, 5.11.1]

2016-03-14 Thread yang.yang.zz
I'm still investigating the cause of the deadlock. Here's a piece of the thread dump I found on the producer broker. Note: the producer broker is the one that configures the NetworkConnector. "ActiveMQ Transport: tcp://lod-uimlongda/10.130.156.161:61616@45650" #139 prio=5 os_prio=0 tid=0x7fddc0007800 nid=0x1a1a

Re: ActiveMQ duplex network connector dead lock [5.13.1, 5.11.1]

2016-03-14 Thread yang.yang.zz
Hi Art and Tim, Thanks very much for your replies. You make a good point that this design could be an anti-pattern. Just to give more background on our use case: the main purpose of our product is to collect data from LAN or WAN network devices, and the collectors (producers) will push colle

Re: ActiveMQ duplex network connector dead lock [5.13.1, 5.11.1]

2016-03-07 Thread yang.yang.zz
Hi Tim, The main reason we put a broker on the producer side is that we want to leverage the broker's TempStore feature: if the Consumer is offline, the producer's broker can temporarily hold the produced data, and once the consumer is back online, it will catch up. This is the design since we

Re: ActiveMQ duplex network connector dead lock [5.13.1, 5.11.1]

2016-03-06 Thread yang.yang.zz
Thanks for the response, Tim! I wouldn't mind going with a more complicated config as long as it works; my experience with AMQ config is just very basic. I've seen some examples of using two non-duplex network connectors between 1-to-1 brokers, but I didn't find similar examples for 1-to-many brokers.
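For reference, a pair of independent one-way network connectors between two brokers is usually written along these lines (a sketch only; the hostnames, ports, and connector names here are assumptions, not the poster's actual config):

```xml
<!-- On the producer broker: a one-way bridge that forwards toward the consumer broker. -->
<networkConnectors>
  <networkConnector name="to-consumer"
                    uri="static:(tcp://consumer-host:61616)"
                    duplex="false"/>
</networkConnectors>

<!-- On the consumer broker: a second, independent one-way bridge back. -->
<networkConnectors>
  <networkConnector name="to-producer"
                    uri="static:(tcp://producer-host:61616)"
                    duplex="false"/>
</networkConnectors>
```

For the 1-to-many case discussed in the thread, the same pattern repeats: each of the many producer brokers carries its own one-way connector toward the single consumer broker, and the consumer broker lists each producer in its own connectors.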

ActiveMQ duplex network connector dead lock [5.13.1, 5.11.1]

2016-03-06 Thread yang.yang.zz
Hi: We ran into this deadlock after upgrading to 5.11.1. In short, the deadlock happens on the network connector when "duplex" is set to "true". For details of replicating this defect, see my earlier post here I also fo

Re: 5.13.1 message blocked

2016-03-02 Thread yang.yang.zz
I downloaded the source code and traced it a little. I found the issue could be here: DemandForwardingBridgeSupport.java (line 774): // in a cyclic network there can be multiple bridges per broker that can propagate // a network subscription so there is a need to s

Re: 5.13.1 message blocked

2016-03-02 Thread yang.yang.zz
*## Update * Unfortunately, even with NIO, I found the same blocking issue after adding more data traffic... "ActiveMQ NIO Worker 64" #842 prio=9 os_prio=0 tid=0x7f1680010800 nid=0x3381 waiting for monitor entry [0x7f16d06d1000] java.lang.Thread.State: BLOCKED (on object monitor)

Re: 5.13.1 message blocked

2016-03-02 Thread yang.yang.zz
*## Update* Changing the transport protocol on the consumer side seems to solve the issue: *broker0* now uses *nio*://0.0.0.0:61616"/> It looks like a bug with the *tcp* protocol. -- View this message in context: http://activemq.2283324.n4.nabble.com/5-13-1-message-blocked-tp470
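For context, switching a broker's listener from the default TCP transport to NIO is a one-line change in the `transportConnectors` section (a sketch of what broker0's config likely looks like; the connector name and port are assumptions):

```xml
<transportConnectors>
  <!-- nio:// serves connections from a shared NIO thread pool instead of
       dedicating one blocking thread per connection, as tcp:// does. -->
  <transportConnector name="nio" uri="nio://0.0.0.0:61616"/>
</transportConnectors>
```

Remote brokers and clients can keep connecting with plain `tcp://` URIs; the `nio` scheme only changes how the listening side handles sockets.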

Re: 5.13.1 message blocked

2016-03-01 Thread yang.yang.zz
The question is: what is "138.42.247.41" waiting for? Even if the response is never coming back, why is it waiting forever and blocking other transport threads? Is there a way to time out this forever wait?
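One way to bound such a wait (a sketch, not a confirmed fix for the blocking described in this thread) is to set socket-level timeout options on the bridge's transport URI; `soTimeout` and `connectionTimeout` are standard options of the ActiveMQ TCP transport, but the hostnames and values below are examples only:

```xml
<networkConnectors>
  <!-- soTimeout: read timeout on the socket, in milliseconds.
       connectionTimeout: limit on how long the initial connect may take. -->
  <networkConnector name="bridge"
      uri="static:(tcp://consumer-host:61616?soTimeout=30000&amp;connectionTimeout=30000)"
      duplex="true"/>
</networkConnectors>
```

With a read timeout in place, a peer that never answers produces a socket exception and a bridge restart rather than a transport thread parked forever.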

Re: 5.13.1 message blocked

2016-03-01 Thread yang.yang.zz
More findings in the consumer broker0 on why some connections are blocked. Here is an example of a good connection for comparison

Re: 5.13.1 message blocked

2016-03-01 Thread yang.yang.zz
Thanks, Tim, for responding. I just ran the same test again and happened to end up with some connected brokers and some disconnected brokers after the test. Comparing the threads, the difference between a connected broker and a disconnected (broken) broker is

Re: 5.13.1 message blocked

2016-03-01 Thread yang.yang.zz
Hi Tim, I collected 3 broker logs during the network blocking events. They are, respectively: the consumer broker log (broker0), a good producer broker (still connected after the network resumes), and a bad producer broker (disconnected after the network resumes). consumer_broker.txt

Configuration Questions

2016-02-29 Thread yang.yang.zz
Hi: I have several configuration questions based on the following topology. Here's our *topology*:

    Consumer0
        |
     broker0
      / | \
     / (network) \
    /   |

5.13.1 message blocked

2016-02-29 Thread yang.yang.zz
Hi: I'm running into massive message blocking with ActiveMQ 5.13.1. This issue was found during our resilience testing, which we set up because of a similar issue found in the field: our customer, using 5.11.1, has encountered similar disconnection issues. Here's our topology:

Re: Message stuck after failover [5.11.1, 5.13.0]

2016-01-11 Thread yang.yang.zz
Here I found an open defect for this: https://issues.apache.org/jira/browse/AMQ-5531

Re: Message stuck after failover [5.11.1, 5.13.0]

2016-01-11 Thread yang.yang.zz
I really hope this is a misconfiguration. I have started working on a mechanism to restart broker1 after a failover. However, since the failover happens on broker1, it seems I can't catch the failover event on node A, which makes detecting when broker1 needs a restart ve

Re: Message stuck after failover [5.11.1, 5.13.0]

2016-01-11 Thread yang.yang.zz
Thanks for the reply, Tim! I've posted the two config files above. For your questions: /Are broker2a and broker2b configured as a master/slave pair with a shared persistence store, or simply as two standalone brokers? If there is a shared persistence store, which type is it?/ There's no special

Re: Message stuck after failover [5.11.1, 5.13.0]

2016-01-11 Thread yang.yang.zz
*Configuration for node B* http://www.springframework.org/schema/beans"; xmlns:amq="http://activemq.apache.org/schema/core"; xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"; xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spr

Re: Message stuck after failover [5.11.1, 5.13.0]

2016-01-11 Thread yang.yang.zz
Configuration for node *A* http://www.springframework.org/schema/beans"; xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"; xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://activemq.apache.org/schema/cor

Message stuck after failover [5.11.1, 5.13.0]

2016-01-10 Thread yang.yang.zz
Hi: I've run into an issue where messages get stuck after failover. The workaround seems to be restarting the consumer broker. Here's my network: A <--- broker1 --- duplex network --- broker*2a* ---> B1. Then failover happens: broker2a and B1 are killed (process killed*) and the backup nod
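For reference, a network connector that should follow a master/slave pair across failover is commonly written with the `masterslave:` discovery URI (a sketch; the hostnames and the choice of `masterslave:` here are assumptions, since the thread's actual configs are truncated):

```xml
<networkConnectors>
  <!-- The bridge connects to whichever of broker2a/broker2b is currently
       the master and reconnects to the other one after a failover. -->
  <networkConnector name="to-broker2"
      uri="masterslave:(tcp://broker2a:61616,tcp://broker2b:61616)"
      duplex="true"/>
</networkConnectors>
```

If broker1's connector instead points statically at broker2a only, the bridge has no way to find broker2b after the failover, which would match the stuck-message symptom described here.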

Re: Message stuck after failover [5.11.1, 5.13.0]

2016-01-10 Thread yang.yang.zz
I was trying to imagine what's happening behind the scenes.

*before failover*

        B1 (up)
       /
    A --- failover ?
       \
        B2 (down)

*after failover*

        B1 (down)
       /
    A --- fail

Re: What is duplicate delivery poison ack?

2015-12-06 Thread yang.yang.zz
I have the same problem and my version is 5.11.1. Does AMQ-5795 solve this problem?

Re: [ActiveMQ 5.11.1] tons of "suppressing duplicate delivery" WARN messages

2015-08-26 Thread yang.yang.zz
Has anyone seen this issue before? What is the "poison ack" for? Specifically, if the producer keeps sending duplicates, what difference does it make whether the consumer sends this "poison ack" or not? Is there a way to disable this warning log? It's flooded everywhere in our log, which made it very difficu

[ActiveMQ 5.11.1] tons of "suppressing duplicate delivery" WARN messages

2015-08-22 Thread yang.yang.zz
Hi, We recently got tons of WARN messages from a consumer broker. Once it occurs, it seems to last forever. It came with a total throughput drop, and we observed messages piling up on the producer broker side. We're using ActiveMQ 5.11.1. What's the cause of this issue? Is there anything to sol

Re: org.apache.activemq.usage.MemoryUsage consumes 95% of the memory?

2015-08-21 Thread yang.yang.zz
Just copied the broker configuration file. It's just a classic configuration, and the only changes we made are:
1. The broker has persistence enabled because we have a few durable messages.
2. But for most of the messages, we don't want them to go to the Temp Store, so we used the memoryCursor.
3. To pr
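The per-queue cursor and memory limit described in this thread are normally expressed via a destination policy. A sketch, under the assumption that "memoryCursor" refers to `vmQueueCursor` (the in-memory pending-message cursor); the `200mb` limit matches the figure mentioned later in the thread:

```xml
<destinationPolicy>
  <policyMap>
    <policyEntries>
      <!-- Apply to all queues (">"): keep pending messages in memory rather
           than spooling them to the store/temp store, capped per queue. -->
      <policyEntry queue=">" memoryLimit="200mb">
        <pendingQueuePolicy>
          <vmQueueCursor/>
        </pendingQueuePolicy>
      </policyEntry>
    </policyEntries>
  </policyMap>
</destinationPolicy>
```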

Re: org.apache.activemq.usage.MemoryUsage consumes 95% of the memory?

2015-08-21 Thread yang.yang.zz
We haven't got a YourKit setup yet; we only used JMC and MAT to find this. We'll try it if possible. Just from the 5.11.1 source code, I see the callbacks list being added to from this method: Usage.notifyCallbackWhenNotFull() Is there a possible guess as to how it can be called 200 million times with

Re: org.apache.activemq.usage.MemoryUsage consumes 95% of the memory?

2015-08-21 Thread yang.yang.zz
Hi Tim, We are using non-durable messaging. Do you think the fix can cover this case?

org.apache.activemq.usage.MemoryUsage consumes 95% of the memory?

2015-08-20 Thread yang.yang.zz
Hi: When I tested our product with ActiveMQ 5.11.1, we observed a high memory usage spike in an ActiveMQ broker process. This broker has 9G of memory configured. It uses the memoryCursor to cache messages, but each Queue has a memory limit of 200M. Then we observed a high memory usage spike

broker network disconnected 30mins after producer's tempUsage full

2015-07-27 Thread yang.yang.zz
Hi: I have a very simple broker network: Broker-1 and Broker-2, with a Producer application and a Consumer application. The network looks like: Producer <--> Broker-1 <- duplex network connector -> Broker-2 <--> Consumer. The problem is, if the Consumer is slow or down, the produced d
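The temp store limit that fills up in this scenario is set in the broker's `systemUsage` section. A sketch with example limits only (the actual values used in this setup are not shown in the post):

```xml
<systemUsage>
  <systemUsage>
    <memoryUsage>
      <memoryUsage limit="1 gb"/>
    </memoryUsage>
    <storeUsage>
      <storeUsage limit="10 gb"/>
    </storeUsage>
    <tempUsage>
      <!-- Non-persistent messages spill to the temp store once memoryUsage is
           exceeded; when tempUsage is full, producer flow control blocks sends. -->
      <tempUsage limit="5 gb"/>
    </tempUsage>
  </systemUsage>
</systemUsage>
```

When the Consumer is down long enough to fill `tempUsage` on Broker-1, producer flow control stalls the sending side, which is consistent with the network disconnect this thread reports roughly 30 minutes later.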