Hi,

we have the following scenario:

- ActiveMQ 5.2.0 is run in embedded fashion inside JBoss using the ActiveMQ
RA
- Messaging is configured to be persistent using Kaha persistence
- A few SessionBeans send messages asynchronously to a queue using
Connections obtained from the RA
- Every once in a while a TimerBean (well, actually it is a MessageBean
driven by JBoss's Quartz integration, but that should not matter - in any
case the consumption happens inside a JTA transaction) comes alive, fetches
as many messages from the queue as it can get (using synchronous
receive calls), and processes them, roughly as in the sketch after this
list. The rationale behind this approach is that the messages contain
statistical data which can be aggregated before being sent off to a
database, and the aggregation dramatically reduces the load on the
database.
- The messages are not consumed in any other way, in particular they are not
delivered to an MDB
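
For illustration, the consumption cycle looks roughly like the sketch below
(heavily simplified; the queue name, the message fields and the
writeAggregateToDatabase() method are made up for the example, and the
RA/XA enlistment details are left out):

  import java.util.HashMap;
  import java.util.Map;
  import javax.jms.Connection;
  import javax.jms.ConnectionFactory;
  import javax.jms.MapMessage;
  import javax.jms.Message;
  import javax.jms.MessageConsumer;
  import javax.jms.Session;

  public class StatsDrainer {

      // Injected connection factory, backed by the ActiveMQ RA.
      private final ConnectionFactory connectionFactory;

      public StatsDrainer(ConnectionFactory connectionFactory) {
          this.connectionFactory = connectionFactory;
      }

      // Called by the timer; the surrounding JTA transaction is managed by
      // the container.
      public void drainAndAggregate() throws Exception {
          Connection con = connectionFactory.createConnection();
          try {
              con.start();
              // The session flags are effectively overridden by the RA when
              // running inside a JTA transaction.
              Session session = con.createSession(false, Session.AUTO_ACKNOWLEDGE);
              MessageConsumer consumer =
                  session.createConsumer(session.createQueue("STATS.QUEUE"));

              Map<String, Long> totals = new HashMap<String, Long>();
              Message msg;
              // Drain with synchronous receives until nothing arrives within
              // the timeout.
              while ((msg = consumer.receive(1000)) != null) {
                  MapMessage stat = (MapMessage) msg;
                  Long old = totals.get(stat.getString("key"));
                  totals.put(stat.getString("key"),
                             (old == null ? 0L : old) + stat.getLong("count"));
              }
              // One database write instead of one per message.
              writeAggregateToDatabase(totals);
          } finally {
              con.close(); // hands the connection back to the RA pool
          }
      }

      private void writeAggregateToDatabase(Map<String, Long> totals) {
          // placeholder for the actual database code
      }
  }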

With this scenario we see two strange behaviours which may or may not be
related:

1. If there are many messages in the queue, one would expect any given
invocation of the consumer bean to receive all the messages there are.
However, this is not the case - it only gets a limited number of messages,
usually between 800 and 1200 at a time. The actual number varies but is
almost always a multiple of 100. Increasing the timeout on the receive
operation doesn't change anything. (The prefetch tuning mentioned further
down is sketched after point 2.)

2. Since the switch to 5.2.0 we have frequently run into the broker's
system memory limits, seemingly because cursors are kept open even though
no consumer stays registered for long. We turned off producer flow control
and configured the queue to use a FileQueueCursor (roughly as in the second
sketch below), but we still see the MemoryPercentUsage go through the roof
(>>100%!) with large numbers of messages in the queue.
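
For what it's worth, two sketches. First, how the prefetch can be limited
(we eventually went down to 10, as mentioned further below) - variant 1
uses a destination option, variant 2 a plain, non-RA connection factory;
with the RA the same setting presumably has to go into the RA's connection
URL. Queue and class names are just placeholders:

  import javax.jms.Queue;
  import javax.jms.Session;
  import org.apache.activemq.ActiveMQConnectionFactory;

  public class PrefetchConfig {

      // Variant 1: per-destination option appended to the queue name.
      public static Queue lowPrefetchQueue(Session session) throws Exception {
          return session.createQueue("STATS.QUEUE?consumer.prefetchSize=10");
      }

      // Variant 2: prefetch policy on a plain connection factory.
      public static ActiveMQConnectionFactory lowPrefetchFactory(String brokerUrl) {
          ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(brokerUrl);
          factory.getPrefetchPolicy().setQueuePrefetch(10);
          return factory;
      }
  }

Second, what we configured against issue 2. We actually do it in the broker
XML; this is my understanding of the programmatic equivalent, with a
catch-all queue policy:

  import java.util.Arrays;
  import org.apache.activemq.broker.BrokerService;
  import org.apache.activemq.broker.region.policy.FilePendingQueueMessageStoragePolicy;
  import org.apache.activemq.broker.region.policy.PolicyEntry;
  import org.apache.activemq.broker.region.policy.PolicyMap;

  public class BrokerPolicyConfig {

      public static void applyQueuePolicy(BrokerService broker) {
          PolicyEntry entry = new PolicyEntry();
          entry.setQueue(">");                 // apply to all queues
          entry.setProducerFlowControl(false); // don't block producers on memory limits
          // Equivalent of <fileQueueCursor/>: spool pending messages to disk.
          entry.setPendingQueuePolicy(new FilePendingQueueMessageStoragePolicy());

          PolicyMap map = new PolicyMap();
          map.setPolicyEntries(Arrays.asList(entry));
          broker.setDestinationPolicy(map);
      }
  }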

We were able to address issue 1 by reducing the prefetch count to 10, but I
still think this may point to a deeper issue. From a superficial look at the
code, my guess at what happens is something along these lines:
- Connections are pooled and re-used. Which connection is used to consume
messages is essentially random or at least up to the JCA container's pool
implementation.
- Once a consumption cycle hits the broker, a cursor is opened. This cursor
is not closed upon returning the connection to the RA pool. I can see that
the ActiveMQMessageConsumer sends a "remove" command upon close(), which we
do call (our close path is sketched after this list), but maybe that's not
enough. Furthermore, I can see that ActiveMQConnection.cleanup() does some
work in that direction, but strangely it only causes dispose() to be called
on the ActiveMQMessageConsumer (via ActiveMQSession.dispose()), and that
method doesn't send anything to the broker.
- New messages get prefetch-delivered to the various connections/consumers
in the pool, where they pile up until a consumption run "accidentally"
receives them.
- Consequently, the open cursors consume memory on the broker, which spells
doom for message producers unless producerFlowControl is turned off (we
learned the hard way...).
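
Just to be explicit about the close() mentioned above: the consumption code
closes consumer, session and connection in a finally block, roughly via a
helper like this (the helper name is made up), so the "remove" command
should be going out before the connection handle goes back to the pool:

  import javax.jms.Connection;
  import javax.jms.MessageConsumer;
  import javax.jms.Session;

  public class CloseHelper {

      // Close in reverse order of creation; consumer.close() is what should
      // trigger the "remove" command for the consumer on the broker. With the
      // RA, con.close() only returns the handle to the pool - the physical
      // connection stays open.
      public static void closeQuietly(MessageConsumer consumer,
                                      Session session, Connection con) {
          try { if (consumer != null) consumer.close(); } catch (Exception ignored) { }
          try { if (session != null) session.close(); } catch (Exception ignored) { }
          try { if (con != null) con.close(); } catch (Exception ignored) { }
      }
  }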

On the other hand, there are a few observations which don't quite fit this
explanation, so it may be way off the mark. In particular:
- It didn't help to use a separate connection pool just for message
consumption
- The ConsumerCount info in JMX for the queue drops to zero when no
consumption is active (checked roughly as in the sketch after this list)
- Every once in a while (but rarely) the consumer actually receives all
messages there are
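
The ConsumerCount observation comes from something like the following
(JMX URL, broker name and queue name are placeholders for our actual
setup):

  import javax.management.MBeanServerConnection;
  import javax.management.ObjectName;
  import javax.management.remote.JMXConnector;
  import javax.management.remote.JMXConnectorFactory;
  import javax.management.remote.JMXServiceURL;

  public class ConsumerCountCheck {

      public static void main(String[] args) throws Exception {
          JMXServiceURL url = new JMXServiceURL(
              "service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
          JMXConnector connector = JMXConnectorFactory.connect(url);
          try {
              MBeanServerConnection mbs = connector.getMBeanServerConnection();
              ObjectName queue = new ObjectName(
                  "org.apache.activemq:BrokerName=localhost,Type=Queue,Destination=STATS.QUEUE");
              Long consumerCount = (Long) mbs.getAttribute(queue, "ConsumerCount");
              System.out.println("ConsumerCount = " + consumerCount);
          } finally {
              connector.close();
          }
      }
  }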

The described behaviour is rather annoying, albeit no longer fatal to the
application since we turned off producer flow control and use the file queue
cursor. If you need any other information to look into the issue, please let
me know.

Thanks
Joerg