On 04/14/2016 05:33 PM, aarontc wrote:
Hi Tim,
Yes - the consumers on Queue A will eventually start receiving messages once
Queue B has had a bunch of messages removed.
FWIW, I just verified this behavior is still present in 5.13.2.
Thanks,
-Aaron
--
View this message in context:
http://activemq.2283324.n4.nabble.com/Consumers-in
A few thousand 1MB to 4MB messages will run you into the heap limit in a
hurry, even with your 8GB of heap. (I assume that's an 8GB JVM on a host
with more than 8GB RAM, not a smaller JVM on an 8GB host.)
Do you really need to dump gigs of messages on the broker when this
happens? (A few thousan
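The heap arithmetic above can be checked with a quick back-of-envelope calculation (the 8 GiB heap and 1-4 MiB message sizes are the figures from the thread; the calculation ignores broker overhead, so the real ceiling is lower):

```python
# Rough bounds on how many in-flight messages fit in the heap,
# using the figures from the thread: 8 GiB heap, 1-4 MiB messages.
HEAP_BYTES = 8 * 1024**3
MSG_MIN_BYTES = 1 * 1024**2   # 1 MiB messages
MSG_MAX_BYTES = 4 * 1024**2   # 4 MiB messages

# Best case (all messages small) and worst case (all messages large).
max_msgs_at_1mib = HEAP_BYTES // MSG_MIN_BYTES
max_msgs_at_4mib = HEAP_BYTES // MSG_MAX_BYTES

print(max_msgs_at_1mib)  # 8192
print(max_msgs_at_4mib)  # 2048
```

So even in the best case, "a few thousand" messages of this size is already the same order of magnitude as the heap itself.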
On 04/14/2016 04:39 PM, aarontc wrote:
I'm looking for some pointers in diagnosing an issue we're seeing. I'll try
to describe:
ActiveMQ version: 5.12.1
OS: Ubuntu Linux 12.04
STOMP clients using Ruby 2.0.0 stomp gem v 1.1.10
We have two queues on a host with 8GiB of RAM and 20GiB of disk space. Under
steady-state conditions...
Queue
Thanks again for the quick replies gentlemen.
I actually got everything up and running again, this time by restarting
both the ActiveMQ service and the Apache/Tomcat service. Yesterday I did
the same but it did not work; yesterday I actually had to delete the DB
and log files from the DB
From the sound of it, no one queue is overloading the broker; rather, you
have lots of queues that each have some messages, and the aggregate load is
the problem. So don't look for the problem child, and instead focus on
protecting the entire broker.
First, you should enable Producer Flow Control
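The flow-control suggestion above can be sketched as an activemq.xml fragment. This is a hedged example, not the poster's actual configuration; the limit values are illustrative only:

```xml
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="localhost">

  <!-- Per-destination policy: enable PFC and cap memory per queue -->
  <destinationPolicy>
    <policyMap>
      <policyEntries>
        <policyEntry queue=">" producerFlowControl="true" memoryLimit="64mb"/>
      </policyEntries>
    </policyMap>
  </destinationPolicy>

  <!-- Broker-wide limits; illustrative values, tune for your host -->
  <systemUsage>
    <systemUsage>
      <memoryUsage><memoryUsage limit="1 gb"/></memoryUsage>
      <storeUsage><storeUsage limit="20 gb"/></storeUsage>
      <tempUsage><tempUsage limit="5 gb"/></tempUsage>
    </systemUsage>
  </systemUsage>
</broker>
```

With producerFlowControl enabled, producers to a queue that hits its memory limit are throttled rather than being allowed to push the whole broker into its limits.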
Hi,
Are there any queues where there is no movement vs. ones where there is a
steady shift in enqueued/dequeued counts? Or any topic where there are one
or more consumers and any pending messages to be dequeued?
I wouldn't typically expect a memory limit on a queue to cause a consumer
to not be
Do you have any (and I mean ANY) messages that aren't consumed for long
periods of time, including any messages sitting in the DLQ? As Jim
referenced, even a single old message can keep all data files alive,
running you up against the storeUsage limit.
Tim
On Apr 14, 2016 7:16 AM, "James A. Robin
Thanks for the reply Jim.
We do have the webconsole enabled and I am looking at the queues page.
These are all fairly active queues, i.e. there are currently about 20 queues
with between 20 and 80 messages in them.
The problem, from what I've read, is that the messages are no longer
dequeuing. My l
Hi,
Looking for a queue or topic that had a large number of unconsumed messages
would probably be a good start.
If your server has the webconsole activated you can look at the queues and
topics; for example, if your server had a webconsole on port 8161:
http://<server>:8161/admin/queu
I've recently been left (with no working knowledge) an ActiveMQ Windows
server installation.
The problem I'm having (which I've read as much as I can about) is that the
memory percent used keeps climbing, and eventually I have to restart the
ActiveMQ service and delete the db.data files and a
You might see if you can get debugging output similar to what KahaDB offers:
http://activemq.apache.org/why-do-kahadb-log-files-remain-after-cleanup.html
I had a similar situation and the KahaDB debugging showed that the problem
was
due to un-acked persistent messages sprinkled throughout the old
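The KahaDB debugging output mentioned above is enabled through the broker's conf/log4j.properties. A sketch following the linked FAQ page (the appender name, file path, and sizes are illustrative):

```properties
# Log KahaDB cleanup decisions to a dedicated file
log4j.appender.kahadb=org.apache.log4j.RollingFileAppender
log4j.appender.kahadb.file=${activemq.base}/data/kahadb.log
log4j.appender.kahadb.maxFileSize=1024KB
log4j.appender.kahadb.maxBackupIndex=5
log4j.appender.kahadb.append=true
log4j.appender.kahadb.layout=org.apache.log4j.PatternLayout
log4j.appender.kahadb.layout.ConversionPattern=%d [%-15.15t] %-5p %-30.30c{1} - %m%n

# TRACE logging on MessageDatabase reports why each data file is retained
log4j.logger.org.apache.activemq.store.kahadb.MessageDatabase=TRACE, kahadb
```

At TRACE level the cleanup task logs which destinations still reference each journal file, which is how un-acked messages pinning old files show up.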
Hi,
I have a fairly simple production installation consisting of ActiveMQ
5.11.1 using LevelDB. Everything in the configuration is kept as it comes
out of the box; only the storeUsage limit is set to 20GB.
For a while now I have seen a storage leak: every day the reported store
percentage used rises by 4%, whil
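A quick projection of that leak rate shows how little headroom it leaves. The 20 GB limit and 4 percentage points/day are the figures from the report; assuming the growth stays linear:

```python
# Projection of the reported LevelDB storage leak:
# 20 GB storeUsage limit, usage growing ~4 percentage points per day.
STORE_LIMIT_GB = 20
LEAK_PCT_PER_DAY = 4

# Disk consumed by the leak each day.
leak_gb_per_day = STORE_LIMIT_GB * LEAK_PCT_PER_DAY / 100

def days_until_full(current_pct):
    """Days until store usage hits 100%, assuming linear growth."""
    return (100 - current_pct) / LEAK_PCT_PER_DAY

print(leak_gb_per_day)      # 0.8 GB/day
print(days_until_full(0))   # 25.0 days from empty
print(days_until_full(60))  # 10.0 days from 60% used
```

In other words, a freshly started broker leaking at this rate hits its storeUsage limit in under a month.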
I have a setup for two brokers. The configuration is as follows:
The other broker is the same configuration, but the transport connector
listens on port 61616. I successfully see Text
Hi,
It's been quite some time now and I just wanted to follow up for
archiving purposes. Since we updated everything to 5.13, we don't see
the issue any more. I am convinced it had been caused by the 5.7
client library.
Best regards,
Martin
On Sat, Jan 9, 2016 at 6:24 AM, Tim Bain wrote:
> Grea