Yes, as long as that's on the subscription, not the topic itself.  Caveat:
I haven't actually looked at JMX in 5.3.2, so I'm extrapolating from what
I've seen in later versions, but I think that should be right.
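
In case it helps while you're digging, here's a minimal sketch of checking
each subscription's PendingQueueSize over JMX from code rather than
JConsole.  Treat it as a sketch: the connector URL is the stock default and
the "Type=Subscription" MBean pattern is the old 5.x naming, so verify both
against what your broker actually registers.

    import java.util.Set;
    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class PendingSizeCheck {
        public static void main(String[] args) throws Exception {
            // Stock ActiveMQ JMX connector URL; adjust host/port as needed.
            JMXServiceURL url = new JMXServiceURL(
                    "service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
            JMXConnector jmxc = JMXConnectorFactory.connect(url);
            try {
                MBeanServerConnection conn = jmxc.getMBeanServerConnection();
                // One MBean per subscription; key names vary by version.
                Set<ObjectName> subs = conn.queryNames(new ObjectName(
                        "org.apache.activemq:Type=Subscription,*"), null);
                for (ObjectName sub : subs) {
                    System.out.println(sub + " PendingQueueSize="
                            + conn.getAttribute(sub, "PendingQueueSize"));
                }
            } finally {
                jmxc.close();
            }
        }
    }

A subscription whose pending count keeps growing while its peers stay near
zero is your slow consumer.
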
On Aug 24, 2015 1:29 PM, "Daniel Israel" <disr...@liveops.com> wrote:

>
> On 8/21/15, 5:28 AM, "tbai...@gmail.com on behalf of Tim Bain" <
> tbai...@gmail.com on behalf of tb...@alumni.duke.edu> wrote:
>
> >You can tell if a consumer is slower than the rest via JMX.  Look at each
> >subscription for the topic and see if any of them has lots of pending
> >messages while the rest do not.
>
> Is that the PendingQueueSize on the JMX console?
>
> Thanks.
>
>
> >If you discover that to be the case, configure a slow consumer abortion
> >strategy to kick them off and throw away their pending messages.  (Note
> >that this only works for topics, and only for non-durable subscriptions,
> >but I think both of those apply to your scenario.)
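> >
> >Here's a rough sketch of what that could look like, shown programmatically
> >against an embedded broker; the same knobs are XML attributes in
> >activemq.xml, AbortSlowConsumerStrategy only appeared around 5.4 (so check
> >your version), and the numbers are placeholders:
> >
> >    import org.apache.activemq.broker.BrokerService;
> >    import org.apache.activemq.broker.region.policy.AbortSlowConsumerStrategy;
> >    import org.apache.activemq.broker.region.policy.ConstantPendingMessageLimitStrategy;
> >    import org.apache.activemq.broker.region.policy.PolicyEntry;
> >    import org.apache.activemq.broker.region.policy.PolicyMap;
> >
> >    public class SlowConsumerAbortConfig {
> >        public static void main(String[] args) throws Exception {
> >            BrokerService broker = new BrokerService();
> >            AbortSlowConsumerStrategy abort = new AbortSlowConsumerStrategy();
> >            // Abort a subscription once it's been "slow" for 30 seconds.
> >            abort.setMaxSlowDuration(30 * 1000L);
> >            // abort.setAbortConnection(true);  // drop the whole connection
> >            // Cap pending messages per non-durable topic subscription;
> >            // keep this below the PFC per-destination limit.
> >            ConstantPendingMessageLimitStrategy pending =
> >                    new ConstantPendingMessageLimitStrategy();
> >            pending.setLimit(1000);
> >            PolicyEntry topicPolicy = new PolicyEntry();
> >            topicPolicy.setTopic(">");  // all topics
> >            topicPolicy.setSlowConsumerStrategy(abort);
> >            topicPolicy.setPendingMessageLimitStrategy(pending);
> >            PolicyMap policyMap = new PolicyMap();
> >            policyMap.setDefaultEntry(topicPolicy);
> >            broker.setDestinationPolicy(policyMap);
> >            broker.start();
> >        }
> >    }
> >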
> >On Aug 21, 2015 12:30 AM, "Daniel Israel" <disr...@liveops.com> wrote:
> >
> >>
> >> On 8/20/15, 9:56 PM, "tbai...@gmail.com on behalf of Tim Bain" <
> >> tbai...@gmail.com on behalf of tb...@alumni.duke.edu> wrote:
> >>
> >> >The broker can't discard the message till all subscribers have
> >> >consumed it,
> >>
> >> This is good information, thanks!  Is there a way to detect this?
> >>
> >> >so a single slow consumer will result in increased memory/store usage
> >> >and eventually PFC.  You could set up a slow consumer abortion strategy
> >> >if you're worried about that possibility.
> >> >On Aug 20, 2015 10:13 AM, "Daniel Israel" <disr...@liveops.com> wrote:
> >> >
> >> >> Oooh, that's a good idea to add another subber.  I may be able to
> >> >> try that.
> >> >>
> >> >> Also, could something on the client side cause a backup?  Or will the
> >> >> broker just fall through if a client hangs?
> >> >>
> >> >> On 8/20/15, 7:18 AM, "tbai...@gmail.com on behalf of Tim Bain" <
> >> >> tbai...@gmail.com on behalf of tb...@alumni.duke.edu> wrote:
> >> >>
> >> >> >Since this is happening in production, I assume it's not acceptable
> >> >> >to attach a debugger and set a breakpoint on the line where the
> >> >> >exception is thrown.  If you could, that would let you step into the
> >> >> >MemoryUsage.isFull() call and see what's going on, but it'll hang
> >> >> >the broker when it happens, and I assume that won't fly.
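> >> >> >
> >> >> >(If a brief hang ever does become tolerable in a maintenance window,
> >> >> >enabling attach is just the standard JDWP agent flag on the broker's
> >> >> >JVM, e.g. via ACTIVEMQ_OPTS; port 5005 here is only an example:
> >> >> >
> >> >> >    -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005
> >> >> >
> >> >> >then point your IDE's remote debugger at that port.)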
> >> >> >
> >> >> >You could probably figure out the size of the messages just by
> >> >> >creating a test app that logs the size of each message it receives
> >> >> >and having it become another subscriber on the topic.  It won't tell
> >> >> >you the rate at which your consumers consume, but you'll know the
> >> >> >input rates at least.
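> >> >> >
> >> >> >Something like this minimal sketch would do it; the broker URL and
> >> >> >topic name are placeholders, and getSize() is the ActiveMQ-specific
> >> >> >marshalled size, which is roughly what the broker counts against its
> >> >> >memory limits:
> >> >> >
> >> >> >    import javax.jms.Connection;
> >> >> >    import javax.jms.Message;
> >> >> >    import javax.jms.MessageConsumer;
> >> >> >    import javax.jms.Session;
> >> >> >    import org.apache.activemq.ActiveMQConnectionFactory;
> >> >> >    import org.apache.activemq.command.ActiveMQMessage;
> >> >> >
> >> >> >    public class SizeLogger {
> >> >> >        public static void main(String[] args) throws Exception {
> >> >> >            Connection conn = new ActiveMQConnectionFactory(
> >> >> >                    "tcp://localhost:61616").createConnection();
> >> >> >            conn.start();
> >> >> >            Session session =
> >> >> >                    conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
> >> >> >            MessageConsumer consumer = session.createConsumer(
> >> >> >                    session.createTopic("YOUR.TOPIC"));
> >> >> >            while (true) {
> >> >> >                Message msg = consumer.receive();
> >> >> >                // Log arrival time and marshalled size of each message.
> >> >> >                System.out.println(System.currentTimeMillis() + " size="
> >> >> >                        + ((ActiveMQMessage) msg).getSize());
> >> >> >            }
> >> >> >        }
> >> >> >    }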
> >> >> >
> >> >> >If you're worried that your topics are filling up because your
> >> >> >consumers are falling behind, you could force them to disconnect
> >> >> >(and throw away their pending messages) if they get too far behind:
> >> >> >http://activemq.apache.org/slow-consumer-handling.html  Just make
> >> >> >sure the limit you set is smaller than the PFC per-destination
> >> >> >limit, or you'll never hit it.
> >> >> >
> >> >> >Tim
> >> >> >
> >> >> >On Wed, Aug 19, 2015 at 11:37 AM, Daniel Israel <disr...@liveops.com>
> >> >> >wrote:
> >> >> >
> >> >> >>
> >> >> >> Forgot to attach this earlier.  Here is the log message:
> >> >> >>
> >> >> >> 2013-03-30 22:34:42,824 [.96.47.33:34886] WARN  Service
> >> >> >>     - Async error occurred: javax.jms.ResourceAllocationException:
> >> >> >> Usage Manager memory limit reached
> >> >> >> javax.jms.ResourceAllocationException: Usage Manager memory limit
> >> >> >> reached
> >> >> >>     at org.apache.activemq.broker.region.Topic.send(Topic.java:293)
> >> >> >>     at org.apache.activemq.broker.region.AbstractRegion.send(AbstractRegion.java:354)
> >> >> >>     at org.apache.activemq.broker.region.RegionBroker.send(RegionBroker.java:443)
> >> >> >>     at org.apache.activemq.broker.TransactionBroker.send(TransactionBroker.java:224)
> >> >> >>     at org.apache.activemq.broker.BrokerFilter.send(BrokerFilter.java:126)
> >> >> >>     at org.apache.activemq.broker.CompositeDestinationBroker.send(CompositeDestinationBroker.java:95)
> >> >> >>     at org.apache.activemq.broker.MutableBrokerFilter.send(MutableBrokerFilter.java:133)
> >> >> >>     at org.apache.activemq.broker.TransportConnection.processMessage(TransportConnection.java:455)
> >> >> >>     at org.apache.activemq.command.ActiveMQMessage.visit(ActiveMQMessage.java:639)
> >> >> >>     at org.apache.activemq.broker.TransportConnection.service(TransportConnection.java:308)
> >> >> >>     at org.apache.activemq.broker.TransportConnection$1.onCommand(TransportConnection.java:182)
> >> >> >>     at org.apache.activemq.transport.TransportFilter.onCommand(TransportFilter.java:68)
> >> >> >>     at org.apache.activemq.transport.WireFormatNegotiator.onCommand(WireFormatNegotiator.java:113)
> >> >> >>     at org.apache.activemq.transport.InactivityMonitor.onCommand(InactivityMonitor.java:210)
> >> >> >>     at org.apache.activemq.transport.TransportSupport.doConsume(TransportSupport.java:84)
> >> >> >>     at org.apache.activemq.transport.tcp.TcpTransport.doRun(TcpTransport.java:203)
> >> >> >>     at org.apache.activemq.transport.tcp.TcpTransport.run(TcpTransport.java:185)
> >> >> >>     at java.lang.Thread.run(Thread.java:662)
> >> >> >>
> >> >> >> On 8/19/15, 6:53 AM, "tbai...@gmail.com on behalf of Tim Bain" <
> >> >> >> tbai...@gmail.com on behalf of tb...@alumni.duke.edu> wrote:
> >> >> >>
> >> >> >> >Hmm, the error messages I'm used to seeing (from 5.8.0 and 5.10.0)
> >> >> >> >look like the one in this page (
> >> >> >> >http://blogs.sourceallies.com/2014/10/activemq-memory-tuning/),
> >> >> >> >which give lots of information about what limit is being hit.  I
> >> >> >> >guess that detailed info must have been added in a version after
> >> >> >> >the one you're using.  What version is that, anyway?
> >> >> >> >
> >> >> >> >Can you post the full log message?
> >> >> >> >
> >> >> >> >Have you explored the JMX tree to see if there is any info to
> >> >> >> >tell you how full your topics are?  Obviously what information is
> >> >> >> >available will vary based on what version you're running, so that
> >> >> >> >might not help, but you should definitely check.
> >> >> >> >On Aug 18, 2015 10:18 AM, "Daniel Israel" <disr...@liveops.com>
> >> >> >> >wrote:
> >> >> >> >
> >> >> >> >>
> >> >> >> >> Hi Tim, thanks for the response.
> >> >> >> >>
> >> >> >> >> Flow control is enabled, and it's configured to fail if out of
> >> >> >> >> memory.  As noted below, the log lines in this version don't
> >> >> >> >> tell us which limit we're exceeding, so we're running half
> >> >> >> >> blind :(.  Knowing average topic size would be helpful, but
> >> >> >> >> having individual topic sizes would be better.  Right now, I'm
> >> >> >> >> looking at the producer side to see if there is some way to
> >> >> >> >> track it.
> >> >> >> >>
> >> >> >> >> I raised the topic limit to 20 MB and still had an issue.  It's
> >> >> >> >> very hit and miss: I can run for a week without issue, then when
> >> >> >> >> we get hit hard, it falls over.
> >> >> >> >>
> >> >> >> >> On 8/18/15, 5:58 AM, "tbai...@gmail.com on behalf of Tim Bain" <
> >> >> >> >> tbai...@gmail.com on behalf of tb...@alumni.duke.edu> wrote:
> >> >> >> >>
> >> >> >> >> >Later versions give a few additional stats (such as average
> >> >> >> >> >message size) via JMX, but that won't help you till that
> >> >> >> >> >upgrade in production is complete.
> >> >> >> >> >
> >> >> >> >> >Do you have producer flow control enabled?  The error you're
> >> >> >> >> >getting doesn't match what I remember it being the last time I
> >> >> >> >> >hit it, so I'm assuming you don't.  PFC gives log lines that at
> >> >> >> >> >least tell you exactly which limit you ran into, plus it'll
> >> >> >> >> >avoid losing any messages (but it'll "hang" producers till
> >> >> >> >> >messages are consumed), so you could enable it to better
> >> >> >> >> >understand what's going on.
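> >> >> >> >> >
> >> >> >> >> >For reference, a rough sketch of turning PFC on with a
> >> >> >> >> >per-topic limit, shown programmatically against an embedded
> >> >> >> >> >broker (the same settings are the producerFlowControl and
> >> >> >> >> >memoryLimit attributes on <policyEntry> in activemq.xml; the
> >> >> >> >> >numbers are placeholders):
> >> >> >> >> >
> >> >> >> >> >    import org.apache.activemq.broker.BrokerService;
> >> >> >> >> >    import org.apache.activemq.broker.region.policy.PolicyEntry;
> >> >> >> >> >    import org.apache.activemq.broker.region.policy.PolicyMap;
> >> >> >> >> >
> >> >> >> >> >    public class PfcConfig {
> >> >> >> >> >        public static void main(String[] args) throws Exception {
> >> >> >> >> >            BrokerService broker = new BrokerService();
> >> >> >> >> >            PolicyEntry topicPolicy = new PolicyEntry();
> >> >> >> >> >            topicPolicy.setTopic(">");  // all topics
> >> >> >> >> >            topicPolicy.setProducerFlowControl(true);
> >> >> >> >> >            topicPolicy.setMemoryLimit(10 * 1024 * 1024);  // 10 MB
> >> >> >> >> >            PolicyMap policyMap = new PolicyMap();
> >> >> >> >> >            policyMap.setDefaultEntry(topicPolicy);
> >> >> >> >> >            broker.setDestinationPolicy(policyMap);
> >> >> >> >> >            // With PFC on, producers block when a limit is hit;
> >> >> >> >> >            // uncomment to fail the send with an exception instead:
> >> >> >> >> >            // broker.getSystemUsage().setSendFailIfNoSpace(true);
> >> >> >> >> >            broker.start();
> >> >> >> >> >        }
> >> >> >> >> >    }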
> >> >> >> >> >
> >> >> >> >> >Tim
> >> >> >> >> >On Aug 17, 2015 1:45 PM, "Daniel Israel" <disr...@liveops.com>
> >> >> >> >> >wrote:
> >> >> >> >> >
> >> >> >> >> >>
> >> >> >> >> >> Hello All,
> >> >> >> >> >>
> >> >> >> >> >> I am tracking down a memory issue in AMQ.  From time to time
> >> >> >> >> >> I see exceptions like this:
> >> >> >> >> >>
> >> >> >> >> >>
> >> >> >> >> >> Async error occurred: javax.jms.ResourceAllocationException:
> >> >> >> >> >> Usage Manager memory limit reached
> >> >> >> >> >>
> >> >> >> >> >>
> >> >> >> >> >> I can't tell if this is because I am exceeding the configured
> >> >> >> >> >> amount of memory in SystemUsage, or if I am exceeding the
> >> >> >> >> >> amount of memory configured per topic.
> >> >> >> >> >>
> >> >> >> >> >> I am using only Topics right now, and I had the memory limit
> >> >> >> >> >> set to 10 MB.  The error doesn't point me in either
> >> >> >> >> >> direction.  I am using an old version of AMQ (the first step
> >> >> >> >> >> was to request an upgrade to the latest version; it's in the
> >> >> >> >> >> works, but it might be a week or two before it's completed in
> >> >> >> >> >> production), and I see changes in the source that give more
> >> >> >> >> >> details when throwing this exception.  Is there some
> >> >> >> >> >> historical record or log of Topics?  What I'd really like is
> >> >> >> >> >> to be able to see how often each Topic gets and distributes a
> >> >> >> >> >> message and how big that message was.  The dashboard and
> >> >> >> >> >> JConsole give me some information, but because Topic messages
> >> >> >> >> >> are delivered and then released, I don't have any information
> >> >> >> >> >> beyond how many were enqueued and delivered.
> >> >> >> >> >>
> >> >> >> >> >> Is there any such animal available that would help me with
> >> >> >> >> >> this?  Or suggestions on how to approach it?  Any help is
> >> >> >> >> >> appreciated.  Thanks.
