If I increase the JVM max heap size (4 GB), the behavior does not change. From my point of view, the configured memoryLimit (500 MB) works as expected: a heap dump shows the same maximum size for the TextMessage content, i.e. 55002 byte[] instances totaling 539 MB.
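For what it's worth, this is roughly how I cross-check the broker's own view of its memory usage against the heap dump via JMX (only a sketch: the JMX URL, brokerName=localhost and the queue name aaa114 are placeholders from my local setup, and the object/attribute names below follow the 5.8+ naming, so they may need adjusting for 5.5.1):

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class MemoryUsageCheck {
    public static void main(String[] args) throws Exception {
        // Default JMX connector URL of a local broker (assumption; adjust host/port).
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
        try (JMXConnector jmxc = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbs = jmxc.getMBeanServerConnection();

            // Broker-wide usage against the systemUsage <memoryUsage> limit.
            ObjectName broker = new ObjectName(
                    "org.apache.activemq:type=Broker,brokerName=localhost");
            System.out.println("Broker MemoryPercentUsage = "
                    + mbs.getAttribute(broker, "MemoryPercentUsage"));

            // Per-destination usage against the policyEntry memoryLimit.
            ObjectName queue = new ObjectName(
                    "org.apache.activemq:type=Broker,brokerName=localhost,"
                    + "destinationType=Queue,destinationName=aaa114");
            System.out.println("Queue  MemoryPercentUsage = "
                    + mbs.getAttribute(queue, "MemoryPercentUsage"));
            // CursorPercentUsage may not be exposed by every version; drop it if missing.
            System.out.println("Queue  CursorPercentUsage = "
                    + mbs.getAttribute(queue, "CursorPercentUsage"));
        }
    }
}

The broker-level value tracks the 50 mb systemUsage limit, the queue-level value the 128mb/500 MB policy limit, which is why the heap dump can show far more retained TextMessage content than the browse path is willing to page in.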
However, trying to browse a queue shows no content, even if there is enough heap memory available. As far as I understand the source code, this is also due to the configured memoryLimit, because - I hope this is the answer you expect - the calculation of the available space causes hasSpace = false. I found this here:

// org.apache.activemq.broker.region.cursors.AbstractPendingMessageCursor (excerpt)
public boolean hasSpace() {
    return systemUsage != null
            ? (!systemUsage.getMemoryUsage().isFull(memoryUsageHighWaterMark))
            : true;
}

public boolean isFull() {
    return systemUsage != null ? systemUsage.getMemoryUsage().isFull() : false;
}

#hasSpace is in this case called during a click on a queue in the web console; see the two stacks captured during this workflow:

Daemon Thread [Queue:aaa114] (Suspended (breakpoint at line 107 in QueueStorePrefetch))
    owns: QueueStorePrefetch (id=6036)
    owns: StoreQueueCursor (id=6037)
    owns: Object (id=6038)
    QueueStorePrefetch.doFillBatch() line: 107
    QueueStorePrefetch(AbstractStoreCursor).fillBatch() line: 381
    QueueStorePrefetch(AbstractStoreCursor).reset() line: 142
    StoreQueueCursor.reset() line: 159
    Queue.doPageInForDispatch(boolean, boolean) line: 1897
    Queue.pageInMessages(boolean) line: 2119
    Queue.iterate() line: 1596
    DedicatedTaskRunner.runTask() line: 112
    DedicatedTaskRunner$1.run() line: 42

Daemon Thread [ActiveMQ VMTransport: vm://localhost#1] (Suspended (breakpoint at line 107 in QueueStorePrefetch))
    owns: QueueStorePrefetch (id=5974)
    owns: StoreQueueCursor (id=5975)
    owns: Object (id=5976)
    owns: Object (id=5977)
    QueueStorePrefetch.doFillBatch() line: 107
    QueueStorePrefetch(AbstractStoreCursor).fillBatch() line: 381
    QueueStorePrefetch(AbstractStoreCursor).reset() line: 142
    StoreQueueCursor.reset() line: 159
    Queue.doPageInForDispatch(boolean, boolean) line: 1897
    Queue.pageInMessages(boolean) line: 2119
    Queue.iterate() line: 1596
    Queue.wakeup() line: 1822
    Queue.addSubscription(ConnectionContext, Subscription) line: 491
    ManagedQueueRegion(AbstractRegion).addConsumer(ConnectionContext, ConsumerInfo) line: 399
    ManagedRegionBroker(RegionBroker).addConsumer(ConnectionContext, ConsumerInfo) line: 427
    ManagedRegionBroker.addConsumer(ConnectionContext, ConsumerInfo) line: 244
    AdvisoryBroker(BrokerFilter).addConsumer(ConnectionContext, ConsumerInfo) line: 102
    AdvisoryBroker.addConsumer(ConnectionContext, ConsumerInfo) line: 104
    CompositeDestinationBroker(BrokerFilter).addConsumer(ConnectionContext, ConsumerInfo) line: 102
    TransactionBroker(BrokerFilter).addConsumer(ConnectionContext, ConsumerInfo) line: 102
    StatisticsBroker(BrokerFilter).addConsumer(ConnectionContext, ConsumerInfo) line: 102
    BrokerService$5(MutableBrokerFilter).addConsumer(ConnectionContext, ConsumerInfo) line: 107
    TransportConnection.processAddConsumer(ConsumerInfo) line: 663
    ConsumerInfo.visit(CommandVisitor) line: 348
    TransportConnection.service(Command) line: 334
    TransportConnection$1.onCommand(Object) line: 188
    ResponseCorrelator.onCommand(Object) line: 116
    MutexTransport.onCommand(Object) line: 50
    VMTransport.iterate() line: 248
    DedicatedTaskRunner.runTask() line: 112
    DedicatedTaskRunner$1.run() line: 42

Setting queueBrowsePrefetch="1" and queuePrefetch="1" in the PolicyEntry for queue=">" also has no effect.
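If it helps, this is how I read the arithmetic behind that check (a simplified model in plain Java, not the actual ActiveMQ classes; the 70% default for the high water mark is what I take from the documentation, and the 45 MB figure is only illustrative):

// Simplified model of the cursor space check (not the real ActiveMQ classes).
// Values: <memoryUsage limit="50 mb"/> from our systemUsage config and the
// (assumed) default cursorMemoryHighWaterMark of 70 percent.
public class CursorSpaceModel {
    public static void main(String[] args) {
        long memoryUsageLimit = 50L * 1024 * 1024;      // broker memoryUsage limit
        int highWaterMark = 70;                         // percent

        // Memory retained by messages the checkpoint has already paged in
        // (illustrative figure, roughly what the heap dump suggests).
        long usedByPagedInMessages = 45L * 1024 * 1024;

        int percentUsage = (int) ((usedByPagedInMessages * 100) / memoryUsageLimit);

        // Mirrors !systemUsage.getMemoryUsage().isFull(memoryUsageHighWaterMark)
        boolean hasSpace = percentUsage < highWaterMark;

        System.out.println("percentUsage=" + percentUsage + "% hasSpace=" + hasSpace);
        // 45 MB of 50 MB = 90% >= 70%, so hasSpace() stays false, doFillBatch()
        // never loads a batch from the store, and the browser sees an empty queue
        // although plenty of JVM heap is still free.
    }
}

As far as I can tell, this also explains why raising cursorMemoryHighWaterMark above 100 relieves the situation: the threshold then lies above the memoryUsage limit itself, so the check practically never trips.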
On 08.01.16 at 16:32, Tim Bain wrote:
> If you increase your JVM size (4GB, 8GB, etc., the biggest your OS and
> hardware will support), does the behavior change? Does it truly take all
> available memory, or just all the memory that you've made available to it
> (which isn't tiny but really isn't all that big)?
>
> Also, how do you know that the MessageCursor seems to decide that there is
> not enough memory and stops delivery of queue content to
> browsers/consumers? What symptom tells you that?
>
> On Jan 8, 2016 8:25 AM, "Klaus Pittig" <klaus.pit...@futura4retail.com>
> wrote:
>
>> (related issue: https://issues.apache.org/jira/browse/AMQ-6115)
>>
>> There's a problem when using ActiveMQ with a large number of persistent
>> queues (250), each holding 1000 persistent TextMessages of 10 KB.
>>
>> Our scenario requires these messages to remain in the storage for a long
>> time (days), until they are consumed (large amounts of data are staged
>> for distribution to many consumers, which may be offline for some days).
>>
>> After the persistence store is filled with these messages and the broker
>> is restarted, we can browse/consume some queues _until_ the #checkpoint
>> call after 30 seconds.
>>
>> This call causes the broker to use all available memory and never release
>> it for other tasks such as queue browsing/consuming. Internally the
>> MessageCursor seems to decide that there is not enough memory and stops
>> delivery of queue content to browsers/consumers.
>>
>> => Is there a way to avoid this behaviour by configuration, or is this a
>> bug?
>>
>> The expectation is that we can consume/browse any queue under all
>> circumstances.
>>
>> The settings below have been in production for some time now, and several
>> recommendations from the ActiveMQ documentation are applied (destination
>> policies, systemUsage, persistence store options etc.):
>>
>> - Behaviour is tested with ActiveMQ 5.11.2, 5.13.0 and 5.5.1.
>> - Memory settings: Xmx=1024m
>> - Java: 1.8 or 1.7
>> - OS: Windows, MacOS, Linux
>> - PersistenceAdapter: KahaDB or LevelDB
>> - Disk: enough free space (200 GB) and physical memory (16 GB max).
>>
>> Besides the above mentioned settings we use the following settings for
>> the broker (btw: changing the memoryLimit to a lower value like 1mb does
>> not change the situation):
>>
>> <destinationPolicy>
>>   <policyMap>
>>     <policyEntries>
>>       <policyEntry queue=">" producerFlowControl="false"
>>           optimizedDispatch="true" memoryLimit="128mb"
>>           timeBeforeDispatchStarts="1000">
>>         <dispatchPolicy>
>>           <strictOrderDispatchPolicy />
>>         </dispatchPolicy>
>>         <pendingQueuePolicy>
>>           <storeCursor />
>>         </pendingQueuePolicy>
>>       </policyEntry>
>>     </policyEntries>
>>   </policyMap>
>> </destinationPolicy>
>>
>> <systemUsage>
>>   <systemUsage sendFailIfNoSpace="true">
>>     <memoryUsage>
>>       <memoryUsage limit="50 mb" />
>>     </memoryUsage>
>>     <storeUsage>
>>       <storeUsage limit="80000 mb" />
>>     </storeUsage>
>>     <tempUsage>
>>       <tempUsage limit="1000 mb" />
>>     </tempUsage>
>>   </systemUsage>
>> </systemUsage>
>>
>> Setting the **cursorMemoryHighWaterMark** in the destinationPolicy to a
>> higher value like **150** or **600** (depending on the difference between
>> memoryUsage and the available heap space) relieves the situation a bit as
>> a workaround, but from my point of view this is not really an option for
>> production systems.
>>
>> Screenie with information from Oracle Mission Control showing those
>> ActiveMQTextMessage instances that are never released from memory:
>>
>> http://goo.gl/EjEixV
>>
>> Cheers
>> Klaus
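PS: For completeness, the cursorMemoryHighWaterMark workaround and the prefetch settings mentioned above were combined in a policy entry like this (only a sketch of my test configuration; the concrete values are the ones from the mails above and would need tuning against the memoryUsage limit and the heap size):

<destinationPolicy>
  <policyMap>
    <policyEntries>
      <!-- cursorMemoryHighWaterMark > 100 only relieves the symptom; -->
      <!-- queueBrowsePrefetch/queuePrefetch had no visible effect.   -->
      <policyEntry queue=">" producerFlowControl="false"
          optimizedDispatch="true" memoryLimit="128mb"
          timeBeforeDispatchStarts="1000"
          cursorMemoryHighWaterMark="600"
          queueBrowsePrefetch="1" queuePrefetch="1">
        <dispatchPolicy>
          <strictOrderDispatchPolicy />
        </dispatchPolicy>
        <pendingQueuePolicy>
          <storeCursor />
        </pendingQueuePolicy>
      </policyEntry>
    </policyEntries>
  </policyMap>
</destinationPolicy>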