Nope. After adding a 3rd queue, the 3rd one also gets this same value, so even
if it's the per-queue memory usage, it's still going beyond the limit..


On Fri, Nov 16, 2012 at 4:32 PM, Juan Nin <jua...@gmail.com> wrote:

> Might it be just a bug in how the MemoryPercentUsage is calculated?
>
> If I connect via JMX using jconsole, I can see the MemoryPercentUsage as
> 112 right now.
> If I look at each of the 2 queues, I see CursorMemoryUsage with a value of
> 29360604, which would be 28mb each, for a total of 56mb (just a bit
> more than the specified memoryUsage of 50mb).
>
> Not sure I'm interpreting these values correctly though, first time I
> access it via jconsole...
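A quick sanity check on that arithmetic (a minimal sketch; 29360604 is the per-queue CursorMemoryUsage value quoted above, 50mb the configured memoryUsage limit):

```python
# Per-queue CursorMemoryUsage as reported via jconsole, in bytes.
cursor_memory_bytes = 29360604

# Convert to MB and sum over the two queues.
per_queue_mb = cursor_memory_bytes / (1024 * 1024)
total_mb = 2 * per_queue_mb
limit_mb = 50  # configured memoryUsage limit

print(f"per queue: {per_queue_mb:.1f} MB, total: {total_mb:.1f} MB")
print(f"percent of limit: {100 * total_mb / limit_mb:.0f}%")
```

Two cursors at ~28 MB each against a 50 MB limit works out to 112% of the limit, which lines up with the MemoryPercentUsage of 112 seen in jconsole.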
>
>
> On Fri, Nov 16, 2012 at 4:07 PM, Juan Nin <jua...@gmail.com> wrote:
>
>> That config has a 40mb memoryLimit per queue, but I also tested it
>> without, with the same results.
>>
>>
>> On Fri, Nov 16, 2012 at 4:05 PM, Juan Nin <jua...@gmail.com> wrote:
>>
>>> Hi Torsten!
>>>
>>> I'm using ActiveMQ 5.3.2, but also tested it on 5.7.0 with the same
>>> results...
>>> This is my 5.3.2 config:
>>>
>>> <beans
>>>   xmlns="http://www.springframework.org/schema/beans"
>>>   xmlns:amq="http://activemq.apache.org/schema/core"
>>>   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
>>>   xsi:schemaLocation="http://www.springframework.org/schema/beans
>>> http://www.springframework.org/schema/beans/spring-beans-2.0.xsd
>>>   http://activemq.apache.org/schema/core
>>> http://activemq.apache.org/schema/core/activemq-core.xsd">
>>>
>>>     <bean
>>> class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
>>>         <property name="locations">
>>>
>>> <value>file:${activemq.base}/conf/credentials.properties</value>
>>>         </property>
>>>     </bean>
>>>
>>>     <broker xmlns="http://activemq.apache.org/schema/core"
>>> brokerName="localhost" dataDirectory="${activemq.base}/data"
>>> destroyApplicationContextOnStop="true" advisorySupport="false">
>>>
>>>         <destinationPolicy>
>>>             <policyMap>
>>>                 <policyEntries>
>>>                     <policyEntry topic=">" producerFlowControl="true"
>>> memoryLimit="5mb">
>>>                         <pendingSubscriberPolicy>
>>>                             <vmCursor />
>>>                         </pendingSubscriberPolicy>
>>>                     </policyEntry>
>>>                     <policyEntry queue=">" producerFlowControl="false"
>>> optimizedDispatch="true" memoryLimit="40mb">
>>>                         <deadLetterStrategy>
>>>                             <individualDeadLetterStrategy
>>> queuePrefix="DLQ." useQueueForQueueMessages="true" />
>>>                         </deadLetterStrategy>
>>>                     </policyEntry>
>>>                 </policyEntries>
>>>             </policyMap>
>>>         </destinationPolicy>
>>>
>>>         <managementContext>
>>>             <managementContext connectorPort="2011"/>
>>>         </managementContext>
>>>
>>>         <persistenceAdapter>
>>>             <kahaDB directory="${activemq.base}/data/kahadb"
>>> enableJournalDiskSyncs="false" indexWriteBatchSize="10000"
>>> indexCacheSize="1000"/>
>>>         </persistenceAdapter>
>>>
>>>         <systemUsage>
>>>             <systemUsage>
>>>               <memoryUsage>
>>>                     <memoryUsage limit="50 mb"/>
>>>                  </memoryUsage>
>>>                 <storeUsage>
>>>                     <storeUsage limit="1 gb" name="foo"/>
>>>                 </storeUsage>
>>>                 <tempUsage>
>>>                     <tempUsage limit="3 gb"/>
>>>                 </tempUsage>
>>>             </systemUsage>
>>>         </systemUsage>
>>>
>>>         <transportConnectors>
>>>             <transportConnector name="openwire" uri="tcp://0.0.0.0:61616"/>
>>>             <transportConnector name="stomp" uri="stomp://0.0.0.0:61613"/>
>>>         </transportConnectors>
>>>
>>>     </broker>
>>>
>>>     <import resource="jetty.xml"/>
>>>
>>> </beans>
>>>
>>>
>>> Using just a simple PHP script with Stomp for feeding the queues
>>> (running it twice with different queue names):
>>>
>>> <?php
>>>
>>> require_once("Stomp.php");
>>>
>>> $amq = new Stomp("tcp://localhost:61613");
>>> $amq->connect();
>>>
>>> for ($i = 1; $i <= 100000; $i++)
>>> {
>>>     if ($i % 1000 == 0)
>>>     {
>>>         echo "\nmsg #: $i";
>>>     }
>>>     $amq->send("/queue/test", "this is test message # $i",
>>>         array('persistent' => 'true'));
>>> }
>>>
>>> $amq->disconnect();
>>>
>>> ?>
>>>
>>>
>>>
>>> On Fri, Nov 16, 2012 at 3:47 PM, Torsten Mielke
>>> <tors...@fusesource.com> wrote:
>>>
>>>> Hello,
>>>>
>>>> See in-line response.
>>>>
>>>> On Nov 16, 2012, at 6:29 PM, Juan Nin wrote:
>>>>
>>>> > Hi!
>>>> >
>>>> > After some heavy digging into Producer Flow Control and the
>>>> > systemUsage properties a couple of years ago, I thought I understood
>>>> > it quite well.
>>>> > But yesterday I found that one of my configs was not behaving exactly
>>>> > as I expected, so I started doing some tests, and I see certain
>>>> > behaviours which don't seem to match what the docs and posts on the
>>>> > list or other forums say.
>>>> >
>>>> > "storeUsage" is perfectly clear: it's the max space that persistent
>>>> > messages can use to be stored on disk.
>>>> > "tempUsage" applies to file cursors on non-persistent messages, so as
>>>> > to flush to disk if memory limits are reached (I don't care much about
>>>> > this one anyway, I always use persistent messages).
>>>>
>>>> Correct.
>>>>
>>>> >
>>>> > Now, according to most posts, memoryUsage would be the maximum memory
>>>> > that the broker would be able to use.
>>>> > On this post:
>>>> >
>>>> > http://stackoverflow.com/questions/7646057/activemq-destinationpolicy-and-systemusage-configuration
>>>> > it
>>>> > says that "memoryUsage corresponds to the amount of memory that's
>>>> > assigned to the in-memory store".
>>>>
>>>> Correct.
>>>>
>>>> >
>>>> > For example, in my tests using the following config (only showing
>>>> > relevant parts):
>>>> >
>>>> > <policyEntry queue=">" producerFlowControl="false"
>>>> optimizedDispatch="true">
>>>> >    <deadLetterStrategy>
>>>> >        <individualDeadLetterStrategy queuePrefix="DLQ."
>>>> > useQueueForQueueMessages="true" />
>>>> >    </deadLetterStrategy>
>>>> > </policyEntry>
>>>> >
>>>> > <systemUsage>
>>>> >    <systemUsage>
>>>> >        <memoryUsage>
>>>> >            <memoryUsage limit="100 mb"/>
>>>> >        </memoryUsage>
>>>> >        <storeUsage>
>>>> >            <storeUsage limit="1 gb" name="foo"/>
>>>> >        </storeUsage>
>>>> >        <tempUsage>
>>>> >            <tempUsage limit="3 gb"/>
>>>> >        </tempUsage>
>>>> >    </systemUsage>
>>>> > </systemUsage>
>>>> >
>>>> > With that config I would expect the broker to use a maximum of 100 mb
>>>> > of memory among all queues. So it could maybe use 30mb in one queue
>>>> > and 70mb in a second queue.
>>>> >
>>>> >
>>>> > 1) What I'm seeing is that if I start feeding a queue without
>>>> > consuming it, the "Memory percent used" grows up to 70%; after that
>>>> > it doesn't grow anymore.
>>>> > What is it doing exactly there? The first 70% is stored in memory
>>>> > (apart from disk, since it's persistent), and all the rest that
>>>> > continues being fed goes just to disk?
>>>>
>>>> This behavior is correct. For queues the default cursor is the store
>>>> cursor. It keeps any newly arrived msgs in memory as long as it does
>>>> not reach the configured memory limit (either configured on the queue
>>>> per destination or globally in the memoryUsage settings).
>>>> Once the cursor reaches 70% of the configured limit (in your case the
>>>> memoryUsage limit, since you don't specify a per-destination limit), it
>>>> will not keep any more messages in memory.
>>>> Instead it will reload these messages from the store when it's time to
>>>> dispatch them. The broker persists any msgs it receives anyway, before
>>>> passing them on to the cursor.
>>>> This limit of 70% can be configured and raised to e.g. 100%.
>>>> This behavior is kind of an optimization: that way you run into
>>>> producer-flow-control less often.
>>>> As long as the persistence store is not running full, there is no need
>>>> to block producers, since the cursor can also load the messages from
>>>> the store and does not necessarily have to keep them all in memory.
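For reference, the 70% threshold mentioned above can be raised per destination via the cursorMemoryHighWaterMark attribute on the policyEntry. A sketch against the config posted earlier in this thread (a value of 100 lets the cursor cache messages up to the full memory limit):

```xml
<policyEntry queue=">" producerFlowControl="false"
             optimizedDispatch="true" memoryLimit="40mb"
             cursorMemoryHighWaterMark="100">
    <!-- deadLetterStrategy etc. as in the original config -->
</policyEntry>
```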
>>>> If you configure the vmQueueCursor, then the behavior is different.
>>>> That cursor cannot offload msgs to the store but needs to keep them
>>>> all in memory. The vmQueueCursor used to be the default cursor in
>>>> older versions of AMQ.
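Explicitly selecting the vmQueueCursor that Torsten mentions would look roughly like this in the destination policy (a sketch only, not a recommendation; as noted, this cursor keeps all pending messages in memory):

```xml
<policyEntry queue=">">
    <pendingQueuePolicy>
        <!-- keeps every pending message in JVM memory;
             cannot offload to the persistence store -->
        <vmQueueCursor/>
    </pendingQueuePolicy>
</policyEntry>
```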
>>>>
>>>> Also note that topic msgs and non-persistent queue messages are not
>>>> handled by the store cursor. These msgs are held in memory and, if
>>>> memory runs low, get swapped out to temp storage.
>>>>
>>>> > 2) If I then start feeding a 2nd queue, "Memory percent used"
>>>> > continues growing until it reaches 140%. So it looks like memoryUsage
>>>> > does not apply globally, but on a per-queue basis?
>>>>
>>>> What version of AMQ do you use? The sum of the memory usage of all
>>>> queues should not go any higher than the configured memoryUsage limit.
>>>> If you're not on 5.5.1 or a higher release, then I suggest upgrading.
>>>>
>>>> > Using memoryLimit on the queue's policyEntry gives more control over
>>>> > this, but it's just a variation: "Memory percent used" can grow past
>>>> > 100% anyway.
>>>>
>>>> With the default store cursor this should not be the case from what I
>>>> know.
>>>>
>>>>
>>>> >
>>>> > 3) If #2 is true, then how would I prevent the broker from running
>>>> > out of memory in case queues continue to be created?
>>>>
>>>> As per my comment above, I would expect that the broker's
>>>> MemoryPercentUsage won't grow over 100% and that the destinations'
>>>> MemoryPercentUsage remains pretty much at 70%.
>>>> Not sure why you would see a different behavior? Using an old version
>>>> of AMQ perhaps? Or explicitly configuring for the vmQueueCursor?
>>>> Could you perhaps also test with
>>>>
>>>> >
>>>> >
>>>> > Maybe I'm misunderstanding, and some of these settings make no sense
>>>> > when producerFlowControl is disabled?
>>>> >
>>>> > Thanks in advance.
>>>> >
>>>> > Juan
>>>>
>>>>
>>>> Regards,
>>>>
>>>> Torsten Mielke
>>>> tors...@fusesource.com
>>>> tmielke.blogspot.com
>>>>
>>>>
>>>>
>>>>
>>>>
>>>
>>
>
