Setting the broker to fail the producer when it is out of resources, instead of
making the producer wait forever.
Your views on this?
The broker machine is a very high-spec VM (56 GB RAM, of which 70% is for the
broker, and it has 8 cores).
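For reference, ActiveMQ's systemUsage has a sendFailIfNoSpace flag for exactly
this: when it is enabled, a producer gets an exception from send() instead of
being blocked by producer flow control. Below is a minimal sketch, assuming an
embedded broker configured in Java; the connector URI and the memory limit are
made up for illustration, and the same flags can equally be set on the
systemUsage element in activemq.xml.

    import org.apache.activemq.broker.BrokerService;
    import org.apache.activemq.usage.SystemUsage;

    public class FailFastBroker {
        public static void main(String[] args) throws Exception {
            BrokerService broker = new BrokerService();
            broker.addConnector("tcp://0.0.0.0:61616"); // illustrative connector

            SystemUsage usage = broker.getSystemUsage();
            // Fail the producer with an exception instead of blocking it
            // indefinitely once the broker hits its memory/store limits.
            usage.setSendFailIfNoSpace(true);
            // Middle ground: block for a bounded time, then fail.
            // usage.setSendFailIfNoSpaceAfterTimeout(30000);

            // Illustrative limit only (the thread mentions ~70% of a 56 GB VM).
            usage.getMemoryUsage().setLimit(32L * 1024 * 1024 * 1024);

            broker.start();
            broker.waitUntilStopped();
        }
    }

The trade-off is that the producer now has to catch the exception and decide
whether to retry, back off, or drop the message.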
--
Yes, all the messages are getting published to the same broker.
The producer window size was introduced to avoid message loss; previously we
used to lose ~10-20% of messages because we use async send. We usually get a
huge number of messages in a short time (message bursts).
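For anyone following along, both of those knobs live on the connection factory.
A minimal sketch, assuming a plain ActiveMQConnectionFactory; the broker URL and
the window size are illustrative only.

    import org.apache.activemq.ActiveMQConnectionFactory;

    public class ProducerWindowSetup {
        public static void main(String[] args) {
            ActiveMQConnectionFactory cf =
                    new ActiveMQConnectionFactory("tcp://broker-host:61616"); // illustrative URL
            // Async send: messages are handed to the broker without waiting for an
            // acknowledgement, which is fast but can lose messages during a burst
            // if the broker goes down.
            cf.setUseAsyncSend(true);
            // Producer window: caps the bytes of unacknowledged messages in flight.
            // When the window fills up, send() waits in waitForSpace() until the
            // broker acknowledges earlier messages.
            cf.setProducerWindowSize(1024 * 1024); // 1 MB, illustrative
        }
    }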
--
No, the broker logs are clear and I don't see any warnings in them.
*I am worried because, even though the broker resources are within their
limits, this waiting thread never returns, causing the consumer to die.*
My workflow is:
Queue -> Camel route JMS consumer (uses a connection factory without a
producer window size)
I was able to capture a thread dump on the JVM on which the consumers are
vanishing.
The threads are blocked on the producer window, in waitForSpace().
"Camel (CamelContext) thread #573 - JmsConsumer[RequestQueue]" daemon prio=6
tid=0x11ded000 nid=0x156c in Object.wait() [0x5b69d000]
java
I am able to reproduce this issue by restarting the broker while the Camel
consumers are sending those millions of messages to the queue.
I am planning to set recoveryInterval on the Camel JmsConfiguration; please
post your views on this.
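In case it helps, this is roughly where recoveryInterval sits when the JMS
component is wired up in Java. A minimal sketch; the broker URL, the interval,
and the component name are assumptions rather than your actual configuration.

    import org.apache.activemq.ActiveMQConnectionFactory;
    import org.apache.camel.CamelContext;
    import org.apache.camel.component.jms.JmsComponent;
    import org.apache.camel.component.jms.JmsConfiguration;
    import org.apache.camel.impl.DefaultCamelContext;

    public class RecoveryIntervalSetup {
        public static void main(String[] args) throws Exception {
            ActiveMQConnectionFactory cf =
                    new ActiveMQConnectionFactory("tcp://broker-host:61616"); // illustrative URL

            JmsConfiguration config = new JmsConfiguration(cf);
            // Interval between attempts to refresh the connection after the broker
            // goes away (the underlying Spring listener container defaults to 5000 ms).
            config.setRecoveryInterval(10000);

            JmsComponent jms = new JmsComponent();
            jms.setConfiguration(config);

            CamelContext context = new DefaultCamelContext();
            context.addComponent("activemq", jms);
            context.start();
        }
    }

One caveat, as far as I can tell: recoveryInterval only controls how often the
listener container retries its connection after a failure; it will not unblock
a producer thread that is already parked in waitForSpace().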
--
We are using the ActiveMQ 5.10 broker with the following configuration:
JVM heap of 48 GB, of which 70% is allocated for memoryUsage
70 GB storage and 1 GB for temp
We are using an Apache Camel 2.10.2 route to consume messages from the queue.
After receiving a message, we create ~40k smaller messages per received
message and
We are experiencing the same issue, with the ActiveMQ 5.10 broker and
consumers created through Camel routes.
Were you able to reach a conclusion on your problem? Please share if you have
fixed it.
Thanks,
Dhananjay
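Not the actual route, but a minimal sketch of the shape described above
(consume from a queue and fan each incoming message out into many smaller
ones); the queue names and the splitting rule are assumptions.

    import org.apache.camel.builder.RouteBuilder;

    public class FanOutRoute extends RouteBuilder {
        @Override
        public void configure() {
            // Consume from the request queue, split each message body into many
            // smaller messages, and publish each piece back to the same broker.
            from("activemq:queue:RequestQueue")          // queue name assumed
                .split(body().tokenize("\n"))            // splitting rule assumed
                .to("activemq:queue:SmallMessages");     // target queue assumed
        }
    }

In a route like this the same thread that consumes from the request queue also
performs the outgoing sends, so if those sends block in waitForSpace() the
consumer thread appears to vanish, which would match the blocked JmsConsumer
threads in the dump earlier in the thread.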
--