Also, it strikes me that if you're cycling the broker every four hours
because of this behavior, you can probably afford the negligible
performance hit that comes with enabling JMX while you figure out what's
going on.
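
If you decide to re-enable it temporarily, it's just a matter of flipping
useJmx back to true on the broker element and, if you want to attach
remotely, exposing a connector. A minimal sketch (the port is only an
example; pick whatever fits your environment):

    <broker xmlns="http://activemq.apache.org/schema/core" useJmx="true"
        brokerName="PrimaryBroker">
        <managementContext>
            <!-- createConnector exposes a remote JMX connector so jconsole
                 or jvisualvm can attach from another machine -->
            <managementContext createConnector="true" connectorPort="1099"/>
        </managementContext>
    </broker>

That would let you watch destination, memory, and thread stats from
jconsole while you wait for the hang to reproduce.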

And you should probably try to prove that turning JMX off actually yields a
measurable performance improvement, because you're paying a high price
right now for not having it enabled, and I'm very skeptical that you'll be
able to tell the difference performance-wise. So this might be a premature
optimization.

Tim

On Apr 27, 2017 8:53 PM, "Tim Bain" <tb...@alumni.duke.edu> wrote:

> Can you do all those things before the broker becomes unresponsive (e.g.
> after 2 or 3 hours)? If so, that might tell us something useful, and maybe
> you can script it to happen periodically (once a minute, for example) so
> you can see what things look like just before it becomes unresponsive.
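>
> A throwaway script along these lines (an untested sketch; adjust the pgrep
> pattern and the output directory to your setup) would give you snapshots
> to compare once it hangs:
>
>     #!/bin/sh
>     # Capture a thread dump and a process snapshot once a minute.
>     PID=$(pgrep -f activemq)
>     mkdir -p /tmp/amq-diag
>     while true; do
>         TS=$(date +%Y%m%d-%H%M%S)
>         jstack "$PID" > "/tmp/amq-diag/jstack-$TS.txt" 2>&1
>         top -b -n 1 > "/tmp/amq-diag/top-$TS.txt"
>         sleep 60
>     done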
>
> What do system monitoring tools like top tell you about which system
> resources (CPU, memory, disk IO, network IO) you are or are not using
> heavily?
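>
> For example (top and free are standard; iostat and sar come from the
> sysstat package on Ubuntu):
>
>     top -b -n 1 | head -20    # per-process CPU and memory
>     free -m                   # overall memory and swap
>     iostat -x 5 3             # per-device disk utilization
>     sar -n DEV 5 3            # per-interface network throughput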
>
> To me, some of these symptoms sound like you're filling memory to the
> point that you're spending so much time GCing that the JVM doesn't have
> cycles to do much else. There are JVM arguments that will let you log the
> details of your GC activity to try to confirm or refute that theory
> (Google for them, or check the Oracle documentation; a sketch follows
> below). Some of the other symptoms just sound really strange and I don't
> have any guesses about what they mean, but I assume you Googled for each
> one and found no useful results?
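>
> For a Java 8-era JVM (typical for ActiveMQ 5.14), GC logging flags along
> these lines should do it, e.g. appended to ACTIVEMQ_OPTS in bin/env (the
> log path is illustrative):
>
>     -verbose:gc -Xloggc:/var/log/activemq/gc.log \
>     -XX:+PrintGCDetails -XX:+PrintGCDateStamps \
>     -XX:+PrintGCApplicationStoppedTime
>
> If the log shows back-to-back full GCs that reclaim almost nothing right
> before the hang, that would confirm the memory theory.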
>
> Tim
>
> On Apr 27, 2017 8:17 AM, "Shobhana" <shobh...@quickride.in> wrote:
>
>> We use ActiveMQ 5.14.1 with KahaDB for persistence.
>> For the last 2 days, we have observed that the broker becomes
>> unresponsive after running for around 4 hours! No operations work after
>> this, including establishing new connections, publishing messages to
>> topics, subscribing to topics, unsubscribing, etc.
>>
>> We have disabled JMX for performance reasons, so I cannot check the
>> status/health of the broker via JMX. I tried to take a thread dump to see
>> what's happening, but it fails with the message: "Unable to open socket
>> file: target process not responding or HotSpot VM not loaded!"
>>
>> I get a similar error when I try to take a heap dump! But I can see that
>> the broker process is running using ps -ef | grep java.
>>
>> I tried to take the thread dump forcefully using the -F option, but this
>> also fails with "java.lang.RuntimeException: Unable to deduce type of
>> thread from address 0x00007f9288012800 (expected type JavaThread,
>> CompilerThread, ServiceThread, JvmtiAgentThread, or
>> SurrogateLockerThread)"
>>
>> A forced heap dump fails with "Expecting GenCollectedHeap,
>> G1CollectedHeap, or ParallelScavengeHeap, but got
>> sun.jvm.hotspot.gc_interface.CollectedHeap"
>>
>> We have just one broker, running on an AWS EC2 Ubuntu instance. The
>> broker is started with an Xmx of 12 GB. Our server and Android
>> applications together create thousands of topics and exchange MQTT
>> messages (both persistent and non-persistent). Within 4 hours, around
>> 20 GB of journal files were created in the last run before the broker
>> became unresponsive! The only way to overcome this problem is to stop
>> the broker, delete all files in KahaDB, and restart the broker!
>>
>> Any hints as to what could be going wrong are highly appreciated!
>>
>> The broker configuration is given below for reference:
>>
>> <broker xmlns="http://activemq.apache.org/schema/core" useJmx="false"
>> brokerName="PrimaryBroker" deleteAllMessagesOnStartup="false"
>> advisorySupport="false" schedulePeriodForDestinationPurge="600000"
>> offlineDurableSubscriberTimeout="54000000"
>> offlineDurableSubscriberTaskSchedule="3600000"
>> dataDirectory="${activemq.data}">
>>
>>         <destinationPolicy>
>>             <policyMap>
>>               <policyEntries>
>>                 <policyEntry topic=">" gcInactiveDestinations="true"
>> inactiveTimoutBeforeGC="3600000">
>>                   <pendingMessageLimitStrategy>
>>                     <constantPendingMessageLimitStrategy limit="1000"/>
>>                   </pendingMessageLimitStrategy>
>>                   <deadLetterStrategy>
>>                     <sharedDeadLetterStrategy processExpired="false" />
>>                   </deadLetterStrategy>
>>                 </policyEntry>
>>                 <policyEntry queue=">" optimizedDispatch="true"
>> reduceMemoryFootprint="true">
>>                   <deadLetterStrategy>
>>                     <sharedDeadLetterStrategy processExpired="false" />
>>                   </deadLetterStrategy>
>>                 </policyEntry>
>>               </policyEntries>
>>             </policyMap>
>>         </destinationPolicy>
>>
>>         <persistenceAdapter>
>>             <kahaDB directory="${activemq.data}/kahadb"
>> indexCacheSize="20000" enableJournalDiskSyncs="false"
>> ignoreMissingJournalfiles="true"/>
>>         </persistenceAdapter>
>>
>>           <systemUsage>
>>             <systemUsage sendFailIfNoSpaceAfterTimeout="10000">
>>                 <memoryUsage>
>>                     <memoryUsage percentOfJvmHeap="70" />
>>                 </memoryUsage>
>>                 <storeUsage>
>>                     <storeUsage limit="100 gb"/>
>>                 </storeUsage>
>>                 <tempUsage>
>>                     <tempUsage limit="50 gb"/>
>>                 </tempUsage>
>>             </systemUsage>
>>         </systemUsage>
>>
>>         <transportConnectors>
>>             <transportConnector name="openwire"
>> uri="nio://0.0.0.0:61616?maximumConnections=1000&amp;wireFormat.maxInactivityDuration=180000&amp;wireFormat.maxFrameSize=104857600"/>
>>             <transportConnector name="mqtt+nio"
>> uri="mqtt+nio://0.0.0.0:1883?maximumConnections=50000&amp;wireFormat.maxInactivityDuration=180000&amp;wireFormat.maxFrameSize=104857600"/>
>>         </transportConnectors>
>>
>>         <plugins>
>>             <discardingDLQBrokerPlugin dropAll="true"
>> dropTemporaryTopics="true" dropTemporaryQueues="true" />
>>             <timeStampingBrokerPlugin ttlCeiling="43200000"
>> zeroExpirationOverride="43200000"/>
>>         </plugins>
>>
>> </broker>
>>
>> TIA,
>> Shobhana
>>
>>
>>
>>
>
