Hello,

when the consumer of a queue is unavailable for a while, the number of
messages in the queue grows continuously. Once the queue reaches some
critical (unknown) limit, the broker is so busy recovering the messages from
the database that it blocks the producer and can no longer work properly. In
our concrete scenario the queue holds about 16000 messages and 5 GB of data.
The broker needs about 6 minutes to start because of a cleanup
(org.apache.activemq.store.jdbc.adapter.DefaultJDBCAdapter.doDeleteOldMessages()),
but this is not critical.

After starting, the broker runs normally until a consumer connects.
The broker then prepares the messages for the consumers. (We use the
default store-based cursor.)
From our time measurements we know that the execution of the SQL statement

in the method  takes between 170 and 200 seconds!
Stacktrace:


During the execution the producer is blocked. See the following stacktrace:


When the embedded broker runs for a while (30 - 60 minutes), the whole
application runs into an OutOfMemoryError.

The above SQL statement is executed several times in succession. The id
increases in steps of 20; only in the first execution is it set to -1, in
order to find the right offset. The maximum number of rows is set to 10000,
but the variable /maxReturned/ is set to 20, so actually only 20 rows are
needed per execution. The fetch size is set to 1, which seems too low, even
with an embedded JDBC driver.
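To illustrate the paging pattern described above, here is a minimal, self-contained sketch (plain Java, no JDBC; the class, method, and constant names are our own, not ActiveMQ's) of a cursor that may fetch up to 10000 candidate rows per pass but only consumes 20 of them, starting from id -1:

```java
import java.util.ArrayList;
import java.util.List;

public class CursorPagingSketch {

    // Hypothetical constants mirroring the values observed in the broker.
    static final int MAX_ROWS = 10000;   // row limit of the SQL statement
    static final int MAX_RETURNED = 20;  // rows the cursor actually consumes

    // Returns the next batch of up to MAX_RETURNED ids greater than lastId.
    // The first call uses lastId = -1, matching the behaviour described above.
    static List<Long> nextBatch(List<Long> allIds, long lastId) {
        List<Long> candidates = new ArrayList<>();
        for (Long id : allIds) {                  // stands in for the SQL scan
            if (id > lastId) {
                candidates.add(id);
                if (candidates.size() == MAX_ROWS) {
                    break;                        // statement-level row limit
                }
            }
        }
        // Only the first MAX_RETURNED candidates are used; the rest are wasted.
        return candidates.subList(0, Math.min(MAX_RETURNED, candidates.size()));
    }

    public static void main(String[] args) {
        List<Long> ids = new ArrayList<>();
        for (long i = 0; i < 16000; i++) {
            ids.add(i);
        }
        List<Long> batch = nextBatch(ids, -1);
        System.out.println(batch.get(0) + ".." + batch.get(batch.size() - 1));
        // prints 0..19
        batch = nextBatch(ids, batch.get(batch.size() - 1));
        System.out.println(batch.get(0) + ".." + batch.get(batch.size() - 1));
        // prints 20..39
    }
}
```

With 16000 messages this walk needs about 800 such passes, each one potentially scanning far more rows than it returns, which is consistent with the repeated slow executions we observe.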

Changing these values to a maximum of 20 rows and a fetch size of 20 does
not alter the execution time.

When the SQL query is executed with SquirrelSQL, it takes between 2 and 30
seconds, depending on the maximum number of rows. So, unlike in ActiveMQ,
the execution time does depend on the maximum number of rows there.

Versions

ActiveMQ: 5.4.2
Derby: 10.8.1.2
Spring: 3.0.6
Java: 1.6.0.26

Also tested with ActiveMQ 5.5.1 and Derby 10.8.2.2 (same behaviour).

Configuration

We use an embedded broker configured through Spring:

<amq:broker>
....
                <amq:persistenceAdapter>
                         <amq:jdbcPersistenceAdapter 
                                useDatabaseLock="false" cleanupPeriod="0"
                                dataSource="#jms-derby-ds"/>
                </amq:persistenceAdapter>
....
</amq:broker>


Does anybody have an idea why the execution is actually so slow?
Or does anybody have experience with Derby as the persistence adapter and
large queues?





--
View this message in context: 
http://activemq.2283324.n4.nabble.com/Derby-Persistence-Adapter-is-unusable-with-a-large-Queue-tp4186452p4186452.html
Sent from the ActiveMQ - User mailing list archive at Nabble.com.
