Hi Christian!

Yes, that's actually what I'm doing: just setting per-destination policies,
which work for me.
I needed them anyway because I'm creating queues with lots of messages that
won't be consumed immediately, so having them store a lot in memory ended up
slowing things down.

So I just assigned enough memory to the broker so as not to run into issues.
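
For reference, a per-destination policy of the kind I mean looks roughly
like this (a sketch; the limit is illustrative, not my exact value):

<policyEntry queue=">" producerFlowControl="false"
    optimizedDispatch="true" memoryLimit="40mb"/>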

Thanks again.


On Tue, Nov 27, 2012 at 9:40 PM, Christian Posta
<christian.po...@gmail.com> wrote:

> See inline...
>
>
> On Wed, Nov 21, 2012 at 12:04 PM, Juan Nin <jua...@gmail.com> wrote:
>
> > Hi!
> >
> > Sorry for the delay in replying, buried on a project.
> >
> > As I mentioned before, I had tested this with 5.7.0 with the same
> > behaviour.
> > I just tested it again (both with 5.3.2 and 5.7.0) and saw the same thing;
> > in my case it doesn't matter whether there are consumers or not, it always
> > seems to use the memory.
> >
> > Although I guess in theory it shouldn't matter: did you use Stomp for
> > your testing, or maybe OpenWire? I'm using Stomp for mine.
> >
> > It might be, though, that the broker's memory itself is not going beyond
> > 70% of memoryUsage, and that these are just per-destination counters, as
> > you mentioned. In that case the value shown as "Memory percent used" is a
> > bit confusing... But I haven't had much time to really test the
> > possibility of exhausting the broker's memory.
> >
> No, I believe what you're seeing is correct. The broker's memory usage is
> going beyond the memoryUsage limit (way beyond). When a queue checks whether
> memory is full, it only does something interesting if producer flow control
> is enabled; otherwise it continues on. You are seeing each queue continue to
> add messages until it reaches the 70% mark of its 40MB memory limit. Since
> MemoryUsages are hierarchical, those messages are also counted against the
> overall broker memory. For each queue, you'll see that it continues to hold
> 70% of 40MB of memory. What you want in this case (if there are no
> consumers, or slow consumers) is to raise your system usage memory limit,
> OR lower your per-destination limits, OR lower your cursor high-water mark,
> or a combination of all three.
>
> http://activemq.apache.org/per-destination-policies.html
>
> With PFC turned off, you're essentially telling the broker to take the
> message no matter what. There is a point at which you will run out of
> resources (memory, disk, etc). The trick is to find your use case and tune
> for that.
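>
> For example, a minimal sketch combining those knobs (values are
> illustrative; cursorMemoryHighWaterMark is the per-destination attribute
> documented on the page above):
>
> <policyEntry queue=">" producerFlowControl="false"
>     memoryLimit="10mb" cursorMemoryHighWaterMark="30"/>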
>
>
> >
> > Will try to do some more testing soon...
> >
> > Thanks
> >
> >
> > On Wed, Nov 21, 2012 at 2:28 PM, Christian Posta
> > <christian.po...@gmail.com> wrote:
> >
> > > Can you please try on 5.7?
> > > I just tried a test, and if there are no consumers on the queue then the
> > > memory usage will stay at 0%. The message will not be retained in memory,
> > > i.e., it will be put into the store and kept there. If I add a consumer,
> > > but don't consume, the message will be kept around in memory up to the
> > > cursor high-water mark (70% by default).
> > >
> > > As I add more queues, the same behavior as described above happens. If
> > > I attach consumers to the queues without consuming (so no messages are
> > > consumed), then messages are kept in the cursor up to the high-water
> > > mark... Note: the high-water mark is relative to the Destination/Cursor's
> > > MemoryUsage, not the global memory usage.
> > >
> > > If I continue adding queues, with producer flow control set to false, I
> > > too will see the *Global* memory usage go much higher than 100%. This is
> > > not surprising, though, because as I understand it, these memory usage
> > > objects are really just counters; they don't enforce anything. When
> > > coupled with producer flow control, they can be used to determine when to
> > > engage PFC. If PFC is false, it's up to the cursor to determine when to
> > > flush out to disk. But each destination/cursor has its own system usage
> > > (with the global as the parent).
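> > >
> > > As a sketch, if you wanted these limits to actually block producers, you
> > > would turn PFC back on per queue, roughly like this (values illustrative):
> > >
> > > <policyEntry queue=">" producerFlowControl="true" memoryLimit="40mb"/>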
> > >
> > > Hope this helps. Can you please try with 5.7 and give us a report back?
> > > Thanks,
> > > Christian
> > >
> > >
> > >
> > > On Fri, Nov 16, 2012 at 11:38 AM, Juan Nin <jua...@gmail.com> wrote:
> > >
> > > > Nope: when adding a 3rd queue, the 3rd one also gets this same value,
> > > > so even if it's the per-queue memory usage, it's still going beyond the
> > > > limit anyway...
> > > >
> > > >
> > > > On Fri, Nov 16, 2012 at 4:32 PM, Juan Nin <jua...@gmail.com> wrote:
> > > >
> > > > > Might it just be a bug in how the MemoryPercentUsage is calculated?
> > > > >
> > > > > If I connect via JMX using jconsole, I can see the MemoryPercentUsage
> > > > > as 112 right now.
> > > > > If I look at each of the 2 queues, I see CursorMemoryUsage with value
> > > > > 29360604 on each, which would be 28mb each, summing to a total of 56mb
> > > > > (just a bit more than the specified memoryUsage of 50mb).
> > > > >
> > > > > I'm not sure I'm interpreting these values correctly, though; it's
> > > > > the first time I've accessed it via jconsole...
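> > > > >
> > > > > (In case it helps anyone reproducing this: I'm reading the
> > > > > CursorMemoryUsage attribute on the queue MBeans, which on these
> > > > > versions should appear under an object name roughly like the one
> > > > > below, though the exact pattern may differ by version; "test" is the
> > > > > queue name from my script:
> > > > >
> > > > > org.apache.activemq:BrokerName=localhost,Type=Queue,Destination=test)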
> > > > >
> > > > >
> > > > > On Fri, Nov 16, 2012 at 4:07 PM, Juan Nin <jua...@gmail.com> wrote:
> > > > >
> > > > >> In that config there's a 40mb memoryLimit per queue, but I also
> > > > >> tested without it, with the same results.
> > > > >>
> > > > >>
> > > > >> On Fri, Nov 16, 2012 at 4:05 PM, Juan Nin <jua...@gmail.com> wrote:
> > > > >>
> > > > >>> Hi Torsten!
> > > > >>>
> > > > >>> I'm using ActiveMQ 5.3.2, but I also tested on 5.7.0 with the same
> > > > >>> results...
> > > > >>> This is my 5.3.2 config:
> > > > >>>
> > > > >>> <beans
> > > > >>>   xmlns="http://www.springframework.org/schema/beans"
> > > > >>>   xmlns:amq="http://activemq.apache.org/schema/core"
> > > > >>>   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
> > > > >>>   xsi:schemaLocation="http://www.springframework.org/schema/beans
> > > > >>>     http://www.springframework.org/schema/beans/spring-beans-2.0.xsd
> > > > >>>     http://activemq.apache.org/schema/core
> > > > >>>     http://activemq.apache.org/schema/core/activemq-core.xsd">
> > > > >>>
> > > > >>>     <bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
> > > > >>>         <property name="locations">
> > > > >>>             <value>file:${activemq.base}/conf/credentials.properties</value>
> > > > >>>         </property>
> > > > >>>     </bean>
> > > > >>>
> > > > >>>     <broker xmlns="http://activemq.apache.org/schema/core"
> > > > >>>             brokerName="localhost" dataDirectory="${activemq.base}/data"
> > > > >>>             destroyApplicationContextOnStop="true" advisorySupport="false">
> > > > >>>
> > > > >>>         <destinationPolicy>
> > > > >>>             <policyMap>
> > > > >>>                 <policyEntries>
> > > > >>>                     <policyEntry topic=">" producerFlowControl="true"
> > > > >>>                             memoryLimit="5mb">
> > > > >>>                         <pendingSubscriberPolicy>
> > > > >>>                             <vmCursor />
> > > > >>>                         </pendingSubscriberPolicy>
> > > > >>>                     </policyEntry>
> > > > >>>                     <policyEntry queue=">" producerFlowControl="false"
> > > > >>>                             optimizedDispatch="true" memoryLimit="40mb">
> > > > >>>                         <deadLetterStrategy>
> > > > >>>                             <individualDeadLetterStrategy queuePrefix="DLQ."
> > > > >>>                                 useQueueForQueueMessages="true" />
> > > > >>>                         </deadLetterStrategy>
> > > > >>>                     </policyEntry>
> > > > >>>                 </policyEntries>
> > > > >>>             </policyMap>
> > > > >>>         </destinationPolicy>
> > > > >>>
> > > > >>>         <managementContext>
> > > > >>>             <managementContext connectorPort="2011"/>
> > > > >>>         </managementContext>
> > > > >>>
> > > > >>>         <persistenceAdapter>
> > > > >>>             <kahaDB directory="${activemq.base}/data/kahadb"
> > > > >>>                 enableJournalDiskSyncs="false" indexWriteBatchSize="10000"
> > > > >>>                 indexCacheSize="1000"/>
> > > > >>>         </persistenceAdapter>
> > > > >>>
> > > > >>>         <systemUsage>
> > > > >>>             <systemUsage>
> > > > >>>                 <memoryUsage>
> > > > >>>                     <memoryUsage limit="50 mb"/>
> > > > >>>                 </memoryUsage>
> > > > >>>                 <storeUsage>
> > > > >>>                     <storeUsage limit="1 gb" name="foo"/>
> > > > >>>                 </storeUsage>
> > > > >>>                 <tempUsage>
> > > > >>>                     <tempUsage limit="3 gb"/>
> > > > >>>                 </tempUsage>
> > > > >>>             </systemUsage>
> > > > >>>         </systemUsage>
> > > > >>>
> > > > >>>         <transportConnectors>
> > > > >>>             <transportConnector name="openwire" uri="tcp://0.0.0.0:61616"/>
> > > > >>>             <transportConnector name="stomp" uri="stomp://0.0.0.0:61613"/>
> > > > >>>         </transportConnectors>
> > > > >>>
> > > > >>>     </broker>
> > > > >>>
> > > > >>>     <import resource="jetty.xml"/>
> > > > >>>
> > > > >>> </beans>
> > > > >>>
> > > > >>>
> > > > >>> Using just a simple PHP script with Stomp to feed the queues
> > > > >>> (running it twice with different queue names):
> > > > >>>
> > > > >>> <?php
> > > > >>>
> > > > >>> require_once("Stomp.php");
> > > > >>>
> > > > >>> $amq = new Stomp("tcp://localhost:61613");
> > > > >>> $amq->connect();
> > > > >>>
> > > > >>> for ($i = 1; $i <= 100000; $i++)
> > > > >>> {
> > > > >>>     if ($i % 1000 == 0)
> > > > >>>     {
> > > > >>>         echo "\nmsg #: $i";
> > > > >>>     }
> > > > >>>     $amq->send("/queue/test", "this is test message # $i",
> > > > >>>         array('persistent' => 'true'));
> > > > >>> }
> > > > >>>
> > > > >>> $amq->disconnect();
> > > > >>>
> > > > >>> ?>
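> > > > >>>
> > > > >>> And for draining the queue, a rough consumer counterpart (a sketch,
> > > > >>> assuming the same Stomp.php client and its subscribe/readFrame/ack
> > > > >>> API; untested):
> > > > >>>
> > > > >>> <?php
> > > > >>>
> > > > >>> require_once("Stomp.php");
> > > > >>>
> > > > >>> $amq = new Stomp("tcp://localhost:61613");
> > > > >>> $amq->connect();
> > > > >>>
> > > > >>> // client ack: a message is only removed once we ack it
> > > > >>> $amq->subscribe("/queue/test", array('ack' => 'client'));
> > > > >>>
> > > > >>> while ($frame = $amq->readFrame())
> > > > >>> {
> > > > >>>     echo "\ngot: " . $frame->body;
> > > > >>>     $amq->ack($frame);
> > > > >>> }
> > > > >>>
> > > > >>> $amq->disconnect();
> > > > >>>
> > > > >>> ?>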
> > > > >>>
> > > > >>>
> > > > >>>
> > > > >>> On Fri, Nov 16, 2012 at 3:47 PM, Torsten Mielke
> > > > >>> <tors...@fusesource.com> wrote:
> > > > >>>
> > > > >>>> Hello,
> > > > >>>>
> > > > >>>> See in-line response.
> > > > >>>>
> > > > >>>> On Nov 16, 2012, at 6:29 PM, Juan Nin wrote:
> > > > >>>>
> > > > >>>> > Hi!
> > > > >>>> >
> > > > >>>> > After some heavy digging into Producer Flow Control and the
> > > > >>>> > systemUsage properties a couple of years ago, I thought I
> > > > >>>> > understood them quite well.
> > > > >>>> > But yesterday I found that one of my configs was not behaving
> > > > >>>> > exactly as I expected, so I started doing some tests, and I see
> > > > >>>> > certain behaviours that don't seem to match what the docs, and the
> > > > >>>> > posts I find on the list or other forums, say.
> > > > >>>> >
> > > > >>>> > "storeUsage" is perfectly clear, it's the max space that
> > > persistent
> > > > >>>> > messages can use to be stored in disk.
> > > > >>>> > "tempUsage"" applies to file cursors on non-persistent
> messages,
> > > so
> > > > >>>> as to
> > > > >>>> > flush to disk if memory limits are reached (I don't care much
> > > about
> > > > >>>> this
> > > > >>>> > one anyway, I always use persistent messages).
> > > > >>>>
> > > > >>>> Correct.
> > > > >>>>
> > > > >>>> >
> > > > >>>> > Now, according to most posts, memoryUsage is the maximum memory
> > > > >>>> > that the broker would be able to use.
> > > > >>>> > This post:
> > > > >>>> >
> > > > >>>> > http://stackoverflow.com/questions/7646057/activemq-destinationpolicy-and-systemusage-configuration
> > > > >>>> >
> > > > >>>> > says that "memoryUsage corresponds to the amount of memory that's
> > > > >>>> > assigned to the in-memory store".
> > > > >>>>
> > > > >>>> Correct.
> > > > >>>>
> > > > >>>> >
> > > > >>>> > For example, in my tests I'm using the following config (only
> > > > >>>> > showing the relevant parts):
> > > > >>>> >
> > > > >>>> > <policyEntry queue=">" producerFlowControl="false"
> > > > >>>> >         optimizedDispatch="true">
> > > > >>>> >     <deadLetterStrategy>
> > > > >>>> >         <individualDeadLetterStrategy queuePrefix="DLQ."
> > > > >>>> >             useQueueForQueueMessages="true" />
> > > > >>>> >     </deadLetterStrategy>
> > > > >>>> > </policyEntry>
> > > > >>>> >
> > > > >>>> > <systemUsage>
> > > > >>>> >    <systemUsage>
> > > > >>>> >        <memoryUsage>
> > > > >>>> >            <memoryUsage limit="100 mb"/>
> > > > >>>> >        </memoryUsage>
> > > > >>>> >        <storeUsage>
> > > > >>>> >            <storeUsage limit="1 gb" name="foo"/>
> > > > >>>> >        </storeUsage>
> > > > >>>> >        <tempUsage>
> > > > >>>> >            <tempUsage limit="3 gb"/>
> > > > >>>> >        </tempUsage>
> > > > >>>> >    </systemUsage>
> > > > >>>> > </systemUsage>
> > > > >>>> >
> > > > >>>> > With that config I would expect the broker to use a maximum of
> > > > >>>> > 100mb of memory across all queues. So it could maybe use 30mb in
> > > > >>>> > one queue and 70mb in a second queue.
> > > > >>>> >
> > > > >>>> >
> > > > >>>> > 1) What I'm seeing is that if I start feeding a queue without
> > > > >>>> > consuming it, "Memory percent used" grows up to 70%; after that it
> > > > >>>> > doesn't grow anymore.
> > > > >>>> > What exactly is it doing there? The first 70% is stored in memory
> > > > >>>> > (as well as on disk, since it's persistent), and everything that
> > > > >>>> > continues to be fed after that goes just to disk?
> > > > >>>>
> > > > >>>> This behavior is correct. For queues the default cursor is the
> > > > >>>> store cursor. It keeps any newly arrived msgs in memory as long as
> > > > >>>> it does not reach the configured memory limit (either configured per
> > > > >>>> destination on the queue or globally in the memoryUsage settings).
> > > > >>>> Once the cursor reaches 70% of the configured limit (in your case
> > > > >>>> the memoryUsage limit, since you don't specify a per-destination
> > > > >>>> limit), it will not keep any more messages in memory.
> > > > >>>> Instead it will reload these messages from the store when it's time
> > > > >>>> to dispatch them. The broker persists any msgs it receives before
> > > > >>>> passing them on to the cursor anyway.
> > > > >>>> This 70% limit can be configured and raised, e.g. to 100%.
> > > > >>>> This behavior is kind of an optimization: that way you run into
> > > > >>>> producer flow control less often.
> > > > >>>> As long as the persistence store is not running full, there is no
> > > > >>>> need to block producers, since the cursor can also load the messages
> > > > >>>> from the store and does not necessarily have to keep them in memory.
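> > > > >>>>
> > > > >>>> For instance, raising that threshold per destination would look
> > > > >>>> roughly like this (a sketch; cursorMemoryHighWaterMark is the
> > > > >>>> attribute I mean, and the value is illustrative):
> > > > >>>>
> > > > >>>> <policyEntry queue=">" cursorMemoryHighWaterMark="100"/>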
> > > > >>>> If you configure the vmQueueCursor, the behavior is different.
> > > > >>>> That cursor cannot offload msgs to the store; it needs to keep them
> > > > >>>> all in memory. The vmQueueCursor used to be the default cursor in
> > > > >>>> older versions of AMQ.
> > > > >>>>
> > > > >>>> Also note that topic msgs and non-persistent queue messages are
> > > > >>>> not handled by the store cursor. These msgs are held in memory and,
> > > > >>>> if memory runs low, get swapped out to temp storage.
> > > > >>>>
> > > > >>>> > 2) If I then start feeding a 2nd queue, "Memory percent used"
> > > > >>>> > continues growing until it reaches 140%. So it looks like
> > > > >>>> > memoryUsage does not apply globally, but on a per-queue basis?
> > > > >>>>
> > > > >>>> What version of AMQ are you using? The sum of the memory usage of
> > > > >>>> all queues should not go any higher than the configured memoryUsage
> > > > >>>> limit. If you're not on 5.5.1 or a later release, then I suggest
> > > > >>>> upgrading.
> > > > >>>>
> > > > >>>> > Using memoryLimit on the queue's policyEntry gives more control
> > > > >>>> > over this, but it's just a variation; "Memory percent used" can
> > > > >>>> > still grow beyond 100% anyway.
> > > > >>>>
> > > > >>>> With the default store cursor this should not be the case, from
> > > > >>>> what I know.
> > > > >>>>
> > > > >>>>
> > > > >>>> >
> > > > >>>> > 3) If #2 is true, then how would I prevent the broker from
> > > > >>>> > running out of memory if queues continue to be created?
> > > > >>>>
> > > > >>>> As in my comment above: I would expect the broker's
> > > > >>>> MemoryPercentUsage not to grow over 100%, and the destinations'
> > > > >>>> MemoryPercentUsage to remain pretty much at 70%.
> > > > >>>> Not sure why you would see different behavior. Using an old version
> > > > >>>> of AMQ perhaps? Or explicitly configuring the vmQueueCursor?
> > > > >>>> Could you perhaps also test with
> > > > >>>>
> > > > >>>> >
> > > > >>>> >
> > > > >>>> > Maybe I'm misunderstanding, and some of these settings make no
> > > > >>>> > sense when producerFlowControl is disabled?
> > > > >>>> >
> > > > >>>> > Thanks in advance.
> > > > >>>> >
> > > > >>>> > Juan
> > > > >>>>
> > > > >>>>
> > > > >>>> Regards,
> > > > >>>>
> > > > >>>> Torsten Mielke
> > > > >>>> tors...@fusesource.com
> > > > >>>> tmielke.blogspot.com
> > > > >>>>
> > > > >>>>
> > > > >>>>
> > > > >>>>
> > > > >>>>
> > > > >>>
> > > > >>
> > > > >
> > > >
> > >
> > >
> > >
> > > --
> > > *Christian Posta*
> > > http://www.christianposta.com/blog
> > > twitter: @christianposta
> > >
> >
>
>
>
> --
> *Christian Posta*
> http://www.christianposta.com/blog
> twitter: @christianposta
>
