I'll be re-running my stress test on Monday and will report my findings back to
this thread. Thanks! Your questions are thought-provoking.
Thanks all for the comments - I will rerun my stress test on Monday and
report back. I do not think my url-based parameters were being used, because
of pilot error: I was using a jndi.properties file (a practice I have now
abandoned) and the connection url was specified in that file, not in my
exter
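For anyone else who hits this, here is a rough sketch of how the two places can disagree (the broker host and factory name below are placeholders, not the actual config). When a jndi.properties file supplies the provider URL, a connection URL with jms options built elsewhere never takes effect:

    # jndi.properties - the URL here is the one the looked-up ConnectionFactory uses
    java.naming.factory.initial = org.apache.activemq.jndi.ActiveMQInitialContextFactory
    java.naming.provider.url = tcp://testapp01:61616
    connectionFactoryNames = ConnectionFactory

versus a URL with options assembled somewhere else, e.g. tcp://testapp01:61616?jms.prefetchPolicy.all=50 - only one of the two is actually in effect.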
Want to post the code for your consumer?
Is each consumer in its own JVM or are all consumers sharing the same JVM?
Sharing the same Session/Connection?
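In case it helps frame those questions, here is a rough sketch (my assumption, not the poster's actual code) of 100 topic consumers sharing one Connection, each with its own Session, all in a single JVM; the broker URL and topic name are placeholders:

    import javax.jms.*;
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class TopicConsumers {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory =
                    new ActiveMQConnectionFactory("tcp://testapp01:61616");
            Connection connection = factory.createConnection();
            connection.start();

            for (int i = 0; i < 100; i++) {
                // Sessions are single-threaded, so one Session per consumer
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageConsumer consumer =
                        session.createConsumer(session.createTopic("STRESS.TEST"));
                consumer.setMessageListener(new MessageListener() {
                    public void onMessage(Message msg) {
                        // count or discard; nothing retained on purpose
                    }
                });
            }
            Thread.sleep(Long.MAX_VALUE); // keep the JVM alive
        }
    }

Whether the consumers share a Connection or each open their own mostly changes thread and socket counts; either way each consumer keeps its own prefetch buffer, which matters for the memory question.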
On Thu, Feb 13, 2014 at 2:31 PM, jlindwall wrote:
> I am stress testing activemq 5.9.0 by sending a flood of non-persistent
> messages to a topic using jmeter.
Wait a second. Is that 100 consumers all running in the same JVM?
Keep in mind that consumers of a Topic all get a copy of every message -
they are not load-balancing.
So, if the producers are sending 1000 msg/sec and there are 100 consumers,
that's 100,000 msg/sec attempting to be consumed.
Still
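Just to make the fan-out vs load-balancing point concrete (a sketch, not from the original posts; the destination names are placeholders):

    Topic topic = session.createTopic("STRESS.TEST");       // every consumer gets every message
    Queue queue = session.createQueue("STRESS.TEST.QUEUE"); // each message goes to exactly one consumer

With a Queue the aggregate consumption rate stays at the produce rate; with a Topic it is multiplied by the number of consumers.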
The first thing is finding where that memory is going - if visualvm doesn't show
much on the HEAP, how about permgen?
Also, how about the CPU activity graph for GC? Does it spike?
Based on the description so far, there's no valid reason for the consumer
JVM to OOM unless its heap is just extremely small.
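If it helps, a sketch of the flags I'd run the consumer JVM with to pin this down (heap size, dump path, and jar name are placeholders; MaxPermSize only applies on pre-Java-8 JVMs):

    java -Xmx1g -XX:MaxPermSize=256m \
         -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/consumer.hprof \
         -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps \
         -jar consumer.jar

The .hprof written at the OOME can then be opened in visualvm or Eclipse MAT to see what is actually being retained.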
Thanks for the tip. Is this documentation still accurate?
http://activemq.apache.org/what-is-the-prefetch-limit-for.html
I tried setting the limit on the connection url but got the same behavior:
tcp://testapp01:61616?jms.prefetchPolicy.all=50
I also tried setting the limit on my dest
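For completeness, the other two places the limit can be set (a sketch; the broker host and topic name are placeholders, and the URL form only applies if that URL is the one actually used, per the jndi.properties point earlier in the thread):

    ActiveMQConnectionFactory factory =
            new ActiveMQConnectionFactory("tcp://testapp01:61616");
    // programmatic equivalent of ?jms.prefetchPolicy.all=50
    factory.getPrefetchPolicy().setAll(50);

    // or per consumer, as a destination option:
    Topic topic = session.createTopic("STRESS.TEST?consumer.prefetchSize=50");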
On 02/13/2014 04:31 PM, jlindwall wrote:
I am stress testing activemq 5.9.0 by sending a flood of non-persistent
messages to a topic using jmeter.
My producers are 50 threads attempting to deliver 1000 msgs/sec; each of
size 1K. There is merely a 1 millisecond delay in between message sends.
Th
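For context, a minimal sketch of a standalone producer matching that description (my reconstruction, not the actual jmeter test plan; broker URL and topic name are placeholders):

    import javax.jms.*;
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class StressProducer implements Runnable {
        public static void main(String[] args) throws Exception {
            for (int i = 0; i < 50; i++) {
                new Thread(new StressProducer()).start();
            }
        }

        public void run() {
            try {
                ConnectionFactory factory =
                        new ActiveMQConnectionFactory("tcp://testapp01:61616");
                Connection connection = factory.createConnection();
                connection.start();
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageProducer producer =
                        session.createProducer(session.createTopic("STRESS.TEST"));
                producer.setDeliveryMode(DeliveryMode.NON_PERSISTENT);

                byte[] payload = new byte[1024]; // ~1K per message
                while (true) {
                    BytesMessage msg = session.createBytesMessage();
                    msg.writeBytes(payload);
                    producer.send(msg);
                    Thread.sleep(1); // ~1 ms between sends
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }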