A few thousand messages at 1MB to 4MB apiece will run you into the heap
limit in a hurry, even with your 8GB of heap; 2,000 messages averaging 2MB
is already around 4GB.  (I assume that's an 8GB JVM on a host with more
than 8GB of RAM, not a smaller JVM on an 8GB host.)

Do you really need to dump gigabytes of messages on the broker when this
happens?  (A few thousand messages is hours and hours of content; is that
really the best way to handle your issue?)  If you do, Tim's suggestion of
slowing down your producer could help, or you can enable Producer Flow
Control so that producers block instead of filling the broker's memory
(you should probably do that anyway, given this scenario).
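
A minimal sketch of what that looks like in the broker's activemq.xml (the
">" wildcard and the 64mb limit are placeholders; tune them for your
queues):

    <destinationPolicy>
      <policyMap>
        <policyEntries>
          <!-- Block producers once this queue's memory limit is reached,
               instead of letting them fill the broker's memory -->
          <policyEntry queue=">" producerFlowControl="true"
                       memoryLimit="64mb"/>
        </policyEntries>
      </policyMap>
    </destinationPolicy>

With that in place, your backfill producer blocks (or gets throttled) when
Queue B hits its limit, rather than starving Queue A.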

Alternatively, make these persistent messages and configure a KahaDB
instance to hold them; KahaDB won't mind when you throw lots of data at it
all at once.  Or stay non-persistent and configure the broker to overflow
non-persistent messages to the temp store when memory fills.
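
A rough sketch of the relevant activemq.xml pieces, with made-up limits
you'd size against your 20GiB of disk:

    <persistenceAdapter>
      <!-- KahaDB journals persistent messages to disk -->
      <kahaDB directory="${activemq.data}/kahadb"/>
    </persistenceAdapter>

    <systemUsage>
      <systemUsage>
        <memoryUsage>
          <!-- Memory available for in-flight messages -->
          <memoryUsage limit="2 gb"/>
        </memoryUsage>
        <storeUsage>
          <!-- Disk for persistent messages (KahaDB) -->
          <storeUsage limit="10 gb"/>
        </storeUsage>
        <tempUsage>
          <!-- Disk non-persistent messages spill to when memory fills -->
          <tempUsage limit="5 gb"/>
        </tempUsage>
      </systemUsage>
    </systemUsage>

Either way, the few thousand backfill messages land on disk instead of
pinning the broker's memory.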

Tim
On Apr 14, 2016 3:33 PM, "Timothy Bish" <tabish...@gmail.com> wrote:

On 04/14/2016 04:39 PM, aarontc wrote:

> I'm looking for some pointers in diagnosing an issue we're seeing. I'll
> try to describe it:
>
> ActiveMQ version: 5.12.1
> OS: Ubuntu Linux 12.04
> STOMP clients using Ruby 2.0.0 stomp gem v 1.1.10
>
> We have two queues on a host with 8GiB of RAM and 20GiB of disk space.
> Under
> steady-state conditions...
>
> Queue A receives messages of 10 to 500,000 bytes at about 10 per second.
> There are 10 consumers on this queue, and messages are essentially
> consumed as fast as they can be produced.
>
> Queue B receives messages of 850,000 to 4,000,000 bytes at about 5 per
> minute. There is 1 consumer on this queue, and messages are essentially
> consumed within a few seconds of production.
>
> Now, when a particular issue arises elsewhere in our system, we need to
> backfill Queue B with a few thousand messages. We load those in much, much
> faster than the Queue B consumer can process them. Everything is fine until
> 1500 or so messages are enqueued. At this point, the consumers on Queue A
> STOP receiving messages. Queue A producers are still delivering 10 per
> second, and the Queue B consumer is consuming a message every few seconds.
>
> How do we troubleshoot this? I've tried loading Queue B more slowly - the
> problem still occurs when ActiveMQ hits some memory threshold. The only way
> to resolve the problem, apparently, is to delete Queue B. (An attempt to
> "Purge" Queue B from the web interface will result in an OutOfMemory error
> being returned.)
>
> Thanks!
> -Aaron

If you stop producing to Queue B and let the consumer drain it completely,
do the consumers on Queue A start to get messages again?

-- 
Tim Bish
twitter: @tabish121
blog: http://timbish.blogspot.com/
