I have been able to successfully have a large number of persistent messages spool over to the filesystem exactly as intended by using cursorMemoryHighWaterMark for messages that have been enqueued to the queue.
The issue I am running into (it happens only a few times per day in our system) is that a large transaction is created that tries to enqueue a couple hundred thousand messages, and during the transaction none of the rules in place seem to cause the messages to be spooled to disk. If the number of messages is large enough, the broker can still run out of memory. For this scenario I am not as worried about speed as I am about keeping the broker alive to receive and store the messages.

I am using the file-based cursor because I noticed that when a larger transaction caused the queue to hit the cursorMemoryHighWaterMark, messages that had already been committed to the queue were getting blocked behind the larger transaction; this does not seem to occur with the file-based cursor.

Any thoughts on how to handle this scenario would be greatly appreciated.

The broker is started with a gig of heap space: -Xmx1G

<destinationPolicy>
  <policyMap>
    <policyEntries>
      <policyEntry topic=">" producerFlowControl="true">
        <pendingMessageLimitStrategy>
          <constantPendingMessageLimitStrategy limit="1000"/>
        </pendingMessageLimitStrategy>
      </policyEntry>
      <policyEntry queue=">" producerFlowControl="false" cursorMemoryHighWaterMark="10">
        <pendingQueuePolicy>
          <fileQueueCursor/>
        </pendingQueuePolicy>
      </policyEntry>
    </policyEntries>
  </policyMap>
</destinationPolicy>

<systemUsage>
  <systemUsage>
    <memoryUsage>
      <memoryUsage limit="64 mb"/>
    </memoryUsage>
    <storeUsage>
      <storeUsage limit="30 gb"/>
    </storeUsage>
    <tempUsage>
      <tempUsage limit="2 gb"/>
    </tempUsage>
  </systemUsage>
</systemUsage>
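For reference, the kind of producer behavior that triggers this looks roughly like the sketch below. It is only illustrative: the broker URL, queue name, message count, and payloads are placeholders, not our actual producer code.

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.DeliveryMode;
import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class LargeTransactedSend {
    public static void main(String[] args) throws JMSException {
        // Placeholder broker URL and queue name for illustration.
        ConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();

        // Transacted session: nothing becomes visible on the queue until
        // commit(), and during the open transaction the pending messages
        // do not appear to be spooled to disk by the cursor policies.
        Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
        MessageProducer producer = session.createProducer(
                session.createQueue("example.queue"));
        producer.setDeliveryMode(DeliveryMode.PERSISTENT);

        // A couple hundred thousand persistent messages in one transaction;
        // large enough transactions can exhaust the broker's 1G heap.
        for (int i = 0; i < 200_000; i++) {
            producer.send(session.createTextMessage("payload " + i));
        }
        session.commit();

        session.close();
        connection.close();
    }
}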