Hi Peter,

cursors are used by persistent messages too. The error about paged-in messages in the JIRA issue is a bit of a red herring - but I did see behaviour similar to what you describe before I resolved that issue.

cheers,

Rob
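For context, the per-destination message cursor can be chosen explicitly in the broker's destination policy. The fragment below is only a rough, untested sketch - the destination patterns, memory limit and cursor choices are illustrative assumptions, not taken from Peter's activemq.xml:

    <destinationPolicy>
      <policyMap>
        <policyEntries>
          <!-- illustrative: keep pending queue messages on a store-backed cursor -->
          <policyEntry queue=">" memoryLimit="5mb">
            <pendingQueuePolicy>
              <storeCursor/>
            </pendingQueuePolicy>
          </policyEntry>
          <!-- illustrative: give durable topic subscribers their own cursor -->
          <policyEntry topic=">">
            <pendingDurableSubscriberPolicy>
              <storeDurableSubscriberCursor/>
            </pendingDurableSubscriberPolicy>
          </policyEntry>
        </policyEntries>
      </policyMap>
    </destinationPolicy>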
On 25 Mar 2010, at 13:50, Pothier, Peter wrote:

> Hi Rob,
>
> Sure, I'll dig up some time to try 5.3.1.
>
> FYI, I do not see the
>
>     ERROR | Failed to page in more queue messages
>
> type messages. I only see
>
>     INFO | Slow KahaDB access: Journal append took...
>     WARN | KahaDB PageFile flush: XXX queued writes, latch wait took...
>
> type messages in the log. So I'm not sure it's the same issue. Besides,
> I'm using persistent messages. Are the cursor memories related to
> non-persistent messages?
>
> What about the Enqueue/Dequeue counters? Is there a good explanation
> somewhere? I'm still troubled that the Topic has no dequeues. Should I be?
>
> Peter P
>
> -----Original Message-----
> From: Rob Davies [mailto:rajdav...@gmail.com]
> Sent: Thursday, March 25, 2010 9:39 AM
> To: users@activemq.apache.org
> Cc: Pothier, Peter
> Subject: Re: ActiveMQ 5.3.0 Memory Usage - Connections
>
> This could be due to https://issues.apache.org/activemq/browse/AMQ-2512 -
> could you try 5.3.1?
>
> On 25 Mar 2010, at 13:28, Pothier, Peter wrote:
>
>> Hi,
>>
>> I finally figured out how to use jconsole remotely (I had a
>> misunderstanding of what value to use in -Djava.rmi.server.hostname=<host>,
>> using the jconsole machine's IP address instead of the target's).
>>
>> Going back to running unit tests based on both libstomp and activemq-cpp
>> (2.2.1), I can see, using jconsole, the heap usage continuously rise
>> (albeit with a sawtooth) and then reach its limit. At this point the
>> unit tests halt. What seems the most interesting of all the Memory Pools
>> is the "Tenured Gen", which eventually plateaus when Used = Committed = Max.
>> The activemq.log, which periodically has the KahaDB slow messages or
>> PageFile flush messages, suddenly stops. No interesting messages.
>> (By the way, I reduced the heap down to 64M to get it to saturate quicker.)
>>
>> I'm not really sure where to look, so I took a look at the MBeans.
>> The AMQ-BROKER attributes show
>>
>>     StorePercentUsage  = 56
>>     TotalDequeueCount  = 214618
>>     TotalEnqueueCount  = 429046
>>     TotalMessageCount  = 214428
>>     MemoryLimit        = 20971520
>>     StoreLimit         = 104857600
>>     TotalConsumerCount = 2
>>
>> I've read a little about the Total Enqueue/Message/Dequeue counters, but
>> still don't understand how they relate to each other. A picture would be
>> worth a thousand words.
>>
>> The setup is fairly simple right now, sending persistent messages:
>>
>>     Producer --> JMS Queue --> Server --> JMS Durable Topic --> Consumer
>>
>> The Queue shows
>>
>>     DequeueCount       = 214618
>>     DispatchCount      = 214618
>>     EnqueueCount       = 214618
>>     MemoryPercentUsage = 0
>>
>> The Topic seems more interesting:
>>
>>     DequeueCount       = 0
>>     DispatchCount      = 214236
>>     EnqueueCount       = 214428
>>     MemoryPercentUsage = 0
>>
>> Is it strange that the DequeueCount for the Topic is zero? I know the
>> consumer of the Topic is receiving messages. Why would the DequeueCount
>> be zero? Is the consumer supposed to be doing something that it's not?
>>
>> Any other places in jconsole I should be looking to determine where all
>> the heap is going?
>>
>> Thanks!
>>
>> Peter P
>>
>>
>> -----Original Message-----
>> From: Peter P [mailto:ppoth...@crossbeamsys.com]
>> Sent: Wednesday, March 17, 2010 5:56 PM
>> To: users@activemq.apache.org
>> Subject: ActiveMQ 5.3.0 Memory Usage - Connections
>>
>>
>> Hi,
>>
>> We are using ActiveMQ 5.3.0, with both libstomp and ActiveMQ-CPP producer
>> and consumer clients, sending persistent messages to both queues and
>> topics. We have noticed that the amount of memory used by ActiveMQ
>> (reported by linux top) grows over time.
>>
>> Trying to determine whether the memory grew in response to messages or
>> connections, we noticed, using a python script based on stomppy-2.0.4-1cb,
>> that ActiveMQ memory grew rather quickly when we connected and
>> disconnected multiple times.
>>
>> Here's the basis of the script:
>>
>>     for i in range(0, 1000):
>>         conn = stomp.Connection()
>>         conn.set_listener('', MyListener())
>>         conn.start()
>>         conn.connect()
>>         conn.disconnect()
>>
>> Using jmap/jhat, here are the most popular Instance Counts and the
>> Histogram prior to running the script:
>>
>>     PID   USER  PR  NI  VIRT  RES  SHR S %CPU %MEM   TIME+  COMMAND
>>     29892 root  19   0  823m  95m  10m S  0.0  2.4  0:07.86 java
>>
>> All Classes (excluding platform)
>>
>>     Class                                                Instance Count  Total Size
>>     class [B                                                       6288     7214080
>>     class [C                                                      26005     2285484
>>     class [I                                                       4775      916612
>>     class java.lang.reflect.Method                                 6971      899259
>>     class java.lang.Class                                          4802      691488
>>     class java.lang.String                                        25732      514640
>>     class [Ljava.util.HashMap$Entry;                               3240      514504
>>     class [S                                                       6384      396498
>>     class [Ljava.lang.Object;                                      4263      378984
>>     class [Lorg.apache.activemq.command.DataStructure;                2      262160
>>     class java.util.LinkedHashMap$Entry                            4471      196724
>>
>> Instance Counts for All Classes (including platform)
>>
>>     26005 instances of class [C
>>     25732 instances of class java.lang.String
>>      6971 instances of class java.lang.reflect.Method
>>      6741 instances of class [Ljava.lang.Class;
>>      6384 instances of class [S
>>      6288 instances of class [B
>>      5806 instances of class java.util.HashMap$Entry
>>      4802 instances of class java.lang.Class
>>      4775 instances of class [I
>>      4471 instances of class java.util.LinkedHashMap$Entry
>>      4263 instances of class [Ljava.lang.Object;
>>
>> and here they are after running the script a bunch of times:
>>
>>     PID   USER  PR  NI  VIRT  RES  SHR S %CPU %MEM   TIME+  COMMAND
>>     29892 root  18   0  880m 153m  10m S  0.0  3.9  0:35.62 java
>>
>> Heap Histogram
>>
>> All Classes (excluding platform)
>>
>>     Class                                                        Instance Count  Total Size
>>     class [B                                                               9880    16696961
>>     class [C                                                              32597     3238584
>>     class [I                                                               4823     2948344
>>     class java.util.concurrent.ConcurrentHashMap$Segment                  65216     2086912
>>     class java.util.concurrent.locks.ReentrantLock$NonfairSync            66712     1867936
>>     class [Ljava.util.concurrent.ConcurrentHashMap$HashEntry;             65216     1570048
>>     class [Ljava.util.HashMap$Entry;                                       6575     1022200
>>     class java.lang.reflect.Method                                         6895      889455
>>     class java.lang.Class                                                  4832      695808
>>     class java.lang.String                                                31143      622860
>>     class [Ljava.util.concurrent.ConcurrentHashMap$Segment;                4076      586944
>>     class org.apache.activemq.command.ActiveMQMessage                      1812      467496
>>     class [S                                                               6077      370234
>>     class java.util.HashMap$Entry                                         11704      327712
>>     class [Ljava.lang.Object;                                              5130      318360
>>     class java.net.SocksSocketImpl                                         1755      282555
>>     class [Lorg.apache.activemq.command.DataStructure;                        2      262160
>>     class java.util.HashMap                                                5182      248736
>>     class java.util.concurrent.ConcurrentHashMap                           4076      228256
>>     class java.util.LinkedHashMap$Entry                                    4155      182820
>>
>> Instance Counts for All Classes (including platform)
>>
>>     66712 instances of class java.util.concurrent.locks.ReentrantLock$NonfairSync
>>     65216 instances of class java.util.concurrent.ConcurrentHashMap$Segment
>>     65216 instances of class [Ljava.util.concurrent.ConcurrentHashMap$HashEntry;
>>     32597 instances of class [C
>>     31143 instances of class java.lang.String
>>     11704 instances of class java.util.HashMap$Entry
>>      9880 instances of class [B
>>      9371 instances of class java.lang.Object
>>      6895 instances of class java.lang.reflect.Method
>>      6575 instances of class [Ljava.util.HashMap$Entry;
>>      6143 instances of class [Ljava.lang.Class;
>>      6077 instances of class [S
>>      5182 instances of class java.util.HashMap
>>      5130 instances of class [Ljava.lang.Object;
>>      4832 instances of class java.lang.Class
>>      4823 instances of class [I
>>
>> Here are the diffs between our config file and activemq-demo.xml:
>>
>>     51c51
>>     < <broker xmlns="http://activemq.apache.org/schema/core" brokerName="amq-broker" useJmx="true">
>>     ---
>>     > <broker xmlns="http://activemq.apache.org/schema/core" brokerName="amq-broker" persistent="true" useJmx="true">
>>     68c68
>>     < <policyEntry queue=">" producerFlowControl="true" memoryLimit="5mb"/>
>>     ---
>>     > <policyEntry queue=">" producerFlowControl="false" memoryLimit="5mb"/>
>>     76a77,79
>>     > <messageEvictionStrategy>
>>     >   <oldestMessageEvictionStrategy/>
>>     > </messageEvictionStrategy>
>>     81d83
>>     < -->
>>     82a85,87
>>     > <timedSubscriptionRecoveryPolicy recoverDuration="60000" />
>>     > -->
>>     > <fixedCountSubscriptionRecoveryPolicy maximumSize="300" />
>>     83a89
>>     >
>>     88a95,96
>>     >
>>     >
>>     197c205
>>     < <!-- Create a TCP transport that is advertised on via an IP multicast
>>     ---
>>     > <!-- Create a TCP transport that is NOT advertised on via an IP multicast
>>     199c207
>>     < <transportConnector name="openwire" uri="tcp://localhost:61616" discoveryUri="multicast://default"/>
>>     ---
>>     > <transportConnector name="openwire" uri="tcp://localhost:61616?transport.keepAliveResponseRequired=true;wireFormat.tcpNoDelayEnabled=true"/>
>>     204c212
>>     < <transportConnector name="stomp" uri="stomp://localhost:61613"/>
>>     ---
>>     > <transportConnector name="stomp" uri="stomp://localhost:61613?wireFormat.tcpNoDelayEnabled=true"/>
>>     208a217,219
>>     >
>>     >
>>     >
>>     325c336,337
>>     < </beans>
>>     \ No newline at end of file
>>     ---
>>     >
>>     > </beans>
>>
>> Checking the ActiveMQ 5.3.1 Fixed Issues page
>>
>>     http://issues.apache.org/activemq/browse/AMQ/fixforversion/12183
>>
>> this sounds different from any issue listed there.
>>
>> We run with only a single instance of ActiveMQ.
>>
>> Are there any configuration parameters that control this behavior?
>> Does ActiveMQ normally grow this large, cleaning up periodically?
>>
>> Is this normal behavior? Is there something I should be looking for?
>>
>> Thanks,
>>
>> Peter P
>>
>>
>> --
>> View this message in context:
>> http://old.nabble.com/ActiveMQ-5.3.0-Memory-Usage---Connections-tp27937810p27937810.html
>> Sent from the ActiveMQ - User mailing list archive at Nabble.com.
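On the closing question about configuration parameters: the broker-wide limits that show up in JMX as MemoryLimit and StoreLimit generally correspond to the systemUsage section of activemq.xml. A minimal sketch with placeholder values (the limits below are illustrative assumptions, not the values from the configuration above):

    <systemUsage>
      <systemUsage>
        <!-- illustrative limits only; tune to the host's available memory and disk -->
        <memoryUsage>
          <memoryUsage limit="64 mb"/>
        </memoryUsage>
        <storeUsage>
          <storeUsage limit="1 gb"/>
        </storeUsage>
        <tempUsage>
          <tempUsage limit="100 mb"/>
        </tempUsage>
      </systemUsage>
    </systemUsage>

Note these limits bound the broker's message memory and store, not the JVM heap as a whole, so heap growth from repeated connects/disconnects can still occur within them.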