Petter,

I'm not aware of a way to limit queue depth by number of messages in a way
that will invoke producer flow control, which I assume is the behavior you
want when you hit the limit.  We actually just disabled per-destination
memory limits in our broker because of the difficulty of guaranteeing that
we'd always be able to fit a full prefetch buffer's worth of messages into N
MB of space; without that, we risked flow controlling producers when
consumers were slow (or entirely unresponsive) before the broker built up
enough messages to consider the consumer slow and abort it via the
AbortSlowConsumerStrategy.  So there's probably an enhancement request that
should get submitted to allow per-destination limits to be set in terms of
number of messages.  (I just searched for an existing enhancement request
to cover this and didn't find one.)
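
For context, here's roughly what the per-destination setup I'm describing
looks like when built through the broker's Java configuration API (the same
thing is usually done in activemq.xml); the queue pattern, memory limit, and
slow-consumer duration below are illustrative values, not recommendations:

import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.broker.region.policy.AbortSlowConsumerStrategy;
import org.apache.activemq.broker.region.policy.PolicyEntry;
import org.apache.activemq.broker.region.policy.PolicyMap;

public class MemoryLimitedBroker {
    public static void main(String[] args) throws Exception {
        BrokerService broker = new BrokerService();

        // Per-destination policy: cap matching queues at a fixed amount of
        // broker memory and block (flow control) producers at the limit.
        PolicyEntry entry = new PolicyEntry();
        entry.setQueue(">");                      // matches all queues
        entry.setMemoryLimit(10 * 1024 * 1024);   // 10 MB, in bytes
        entry.setProducerFlowControl(true);

        // Abort consumers the broker has judged to be slow for too long.
        AbortSlowConsumerStrategy abort = new AbortSlowConsumerStrategy();
        abort.setMaxSlowDuration(30 * 1000);      // 30 seconds
        entry.setSlowConsumerStrategy(abort);

        PolicyMap policyMap = new PolicyMap();
        policyMap.setDefaultEntry(entry);
        broker.setDestinationPolicy(policyMap);

        broker.start();
        broker.waitUntilStopped();
    }
}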

If you'd rather discard messages when you hit the limit than flow control
producers, you can use one of the *PendingMessageLimitStrategy
implementations.  But I'd guess this probably isn't what you're looking for.
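
In case it's useful anyway, the wiring looks roughly like this (again via the
Java configuration API, with an illustrative limit; as far as I know the
broker only evaluates pending message limits for topic subscriptions, which
is another reason it may not fit a queue-based use case):

import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.broker.region.policy.ConstantPendingMessageLimitStrategy;
import org.apache.activemq.broker.region.policy.PolicyEntry;
import org.apache.activemq.broker.region.policy.PolicyMap;

public class PendingLimitBroker {
    public static void main(String[] args) throws Exception {
        BrokerService broker = new BrokerService();

        // Keep at most 1000 pending messages per subscription; anything
        // beyond that is discarded instead of flow controlling producers.
        ConstantPendingMessageLimitStrategy limit =
                new ConstantPendingMessageLimitStrategy();
        limit.setLimit(1000);

        PolicyEntry entry = new PolicyEntry();
        entry.setPendingMessageLimitStrategy(limit);

        // Used as the default policy for all destinations; the pending
        // message limit itself is applied to topic subscriptions.
        PolicyMap policyMap = new PolicyMap();
        policyMap.setDefaultEntry(entry);
        broker.setDestinationPolicy(policyMap);

        broker.start();
        broker.waitUntilStopped();
    }
}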

Tim

On Thu, Jan 8, 2015 at 12:35 AM, Petter Nordlander <
petter.nordlan...@enfo.se> wrote:

> Hi,
>
> Is there a way to limit the queue depth of an ActiveMQ queue in number of
> messages?
>
> I know there are "per destination policies" that can detect queue usage in
> terms of memory used. However, the number of messages may indicate other
> things, like how many .log files (kahadb) can be tied up by a certain
> queue where the consumer is infrequent (or just unstable).
>
> BR Petter
>
