Sorry for the delay, I finally found some time to try it out. I ran a test with
the ActiveMQ 5.15.3 broker, and it seems to resolve the OutOfMemoryError (OOME)
issue that occurs when a large number of messages are put on the non-persistent
queue; the issue exists in ActiveMQ 5.15.2 but not under ActiveMQ 5.14.5.
Regards,
I checked with the IT team and disk I/O is not the issue; it can handle the
data rate just fine.
I did not have time to find a way to slow down the producers.
Building the 5.15.x/3 branch from source is not an option at this time. I
also believe [AMQ-6815 KahaDB checkpoint needs to fail fast in th
We run the broker with a max heap of 4G and an initial heap of 1G (-Xmx4G -Xms1G).
We use non-persistent messages on these particular queues (3 of them in this
test).
The number of messages sent to the broker in my last "flood gate" test was
around 40,000 (40k) in 5 minutes or about 8K msgs/min. After this f
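For illustration only, here is a minimal sketch of what a flood-test producer
along these lines could look like. It is not our actual test code; the broker
URL, queue name, payload, and the optional pacing are assumptions. It sends
non-persistent ObjectMessages in a loop, roughly matching the
40k-messages-in-5-minutes pattern described above.

    import javax.jms.Connection;
    import javax.jms.DeliveryMode;
    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class FloodTestProducer {
        public static void main(String[] args) throws Exception {
            // Broker URL and queue name are placeholders, not the real environment.
            ActiveMQConnectionFactory factory =
                    new ActiveMQConnectionFactory("tcp://localhost:61616");
            Connection connection = factory.createConnection();
            connection.start();
            try {
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                Queue queue = session.createQueue("TEST.FLOOD.QUEUE");
                MessageProducer producer = session.createProducer(queue);
                // Non-persistent delivery, matching the queues described in this thread.
                producer.setDeliveryMode(DeliveryMode.NON_PERSISTENT);

                // ~40,000 messages; with the sleep below this is roughly 130-140 msgs/sec,
                // close to the 8K msgs/min rate observed in the "flood gate" test.
                for (int i = 0; i < 40_000; i++) {
                    producer.send(session.createObjectMessage("payload-" + i));
                    Thread.sleep(7); // remove to send at full speed (unthrottled flood)
                }
            } finally {
                connection.close();
            }
        }
    }

Removing the sleep turns this into an unthrottled flood; keeping it is one
simple way of pacing (slowing down) the producers.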
> Do you get the "thread shut down" message on the first message enqueued, or
> only after a certain number of messages are sent?
I believe it enters the OutOfMemoryError state after a large number of
messages have been enqueued. In another test run, I did not see the
"Async Writer T
I have tried out a few things to reduce the variables, one step at a time.
1. I upgraded the ActiveMQ client libraries to 5.15.2 to match the server.
This resulted in the same issue (OutOfMemoryError on the ActiveMQ 5.15.2 server).
2. I changed the memoryUsage percentOfJvmHeap from 70% to 50%, restarted the
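As a rough illustration of what step 2 means with the heap settings mentioned
earlier (-Xmx4G): percentOfJvmHeap=70 gives the broker roughly a 2.8G memory
limit for messages, while 50 gives roughly 2G. The snippet below is only
illustrative arithmetic against Runtime.maxMemory(); it is not the broker's
own calculation, and the exact limit ActiveMQ derives may differ slightly.

    public class MemoryLimitEstimate {
        public static void main(String[] args) {
            // On a JVM started with -Xmx4G, maxMemory() reports roughly 4 GiB.
            long maxHeap = Runtime.getRuntime().maxMemory();

            // Approximate memoryUsage limits for the two percentOfJvmHeap settings tried above.
            long limitAt70 = (long) (maxHeap * 0.70); // ~2.8G with a 4G heap
            long limitAt50 = (long) (maxHeap * 0.50); // ~2.0G with a 4G heap

            System.out.printf("max heap:    %,d bytes%n", maxHeap);
            System.out.printf("70%% of heap: %,d bytes%n", limitAt70);
            System.out.printf("50%% of heap: %,d bytes%n", limitAt50);
        }
    }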
We recently upgraded our ActiveMQ server from 5.12.2 to 5.15.2, and when
putting some messages onto a Queue we get the following error:
ERROR | Caught an Exception adding a message: ActiveMQObjectMessage
{commandId = 2705, ...} first to FilePendingMessageCursor |
org.apache.activemq.broker.region.cursors