Wait a second. Is that 100 consumers all running in the same JVM?
Keep in mind that consumers of a Topic all get a copy of every message -
they are not load-balancing.
So, if the producers are sending 1000 msg/sec and there are 100 consumers,
that's 100,000 msg/sec attempting to be consumed.
Still
First thing is finding that memory - if VisualVM doesn't show much on the
heap, how about PermGen?
Also, how about the CPU activity graph for GC? Does it spike?
Based on the description so far, there's no valid reason for the consumer
JVM to OOM unless its heap is just extremely small.
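For checking beyond the heap, the stock JDK tools are usually enough; a couple of commands worth trying (pid is the consumer JVM's process id; exact columns vary by JDK version):

```
jstat -gcutil <pid> 1000   # rolling GC stats; on Java 7 the P column is permgen utilization
jmap -heap <pid>           # one-shot summary of heap and permgen usage
```

If GC time (GCT) climbs steadily while old gen stays near 100%, that points at the heap after all; if permgen is pegged, it's usually classloading, not messages.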
--
Thanks for the tip. Is this documentation still accurate?
http://activemq.apache.org/what-is-the-prefetch-limit-for.html
I tried setting the limit on the connection url but got the same behavior:
tcp://testapp01:61616?jms.prefetchPolicy.all=50
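For reference, prefetch can also be scoped more narrowly than `all`, either per consumer type on the connection URI or per destination via a destination option (the topic name below is made up for illustration):

```
tcp://testapp01:61616?jms.prefetchPolicy.topicPrefetch=50
topic://TEST.TOPIC?consumer.prefetchSize=50
```

The per-destination option overrides the connection-level policy, which can help rule out whether the setting is being applied at all.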
I also tried setting the limit on my dest
On 02/13/2014 04:31 PM, jlindwall wrote:
I am stress testing activemq 5.9.0 by sending a flood of non-persistent
messages to a topic using jmeter.
My producers are 50 threads attempting to deliver 1000 msgs/sec; each of
size 1K. There is merely a 1 millisecond delay in between message sends.
There are 100 consumers on the topic. It run
On 02/13/2014 02:55 PM, jlindwall wrote:
I am looking to get automatic client connection retry for a single broker.
By that I mean if the client connection to the broker fails, the client will
retry that connection.
It seems like the failover: protocol can give me this right?
failover:(tcp://server:61616)
The "normal" use case for f
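A single-broker failover URI with retry tuning might look like this (the values are illustrative, not recommendations; maxReconnectAttempts=-1 means retry forever in 5.6+ semantics):

```
failover:(tcp://server:61616)?initialReconnectDelay=100&maxReconnectDelay=30000&maxReconnectAttempts=-1
```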
Here's a page that tries to show how to set up startup destinations; it
doesn't render in Firefox or Safari for me, but the page source can be
deciphered:
http://activemq.apache.org/configure-startup-destinations.html
Here's how to configure a startup destination:
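From the page source, the gist is a destinations element inside the broker configuration in activemq.xml; roughly (names are examples):

```xml
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="localhost">
  <destinations>
    <queue physicalName="FOO.BAR"/>
    <topic physicalName="SOME.TOPIC"/>
  </destinations>
</broker>
```

The listed destinations are then created at broker startup instead of on first use.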
Is that the exception from the attempt to access the queue via JMX, or
from the web console?
If it's from the web console, are you hitting the main Queue page, or
trying to hit the browse page for the DLQ? If the latter, then that won't
work as long as the Queue does not exist.
You can have the
that needs an enhancement and may be tricky to implement b/c network
connectors cannot operate properly over failover reconnects.
masterslave does not do any retries - it just picks from the url list,
and should respect randomize=false on the first connect and reconnect
(when initiated by the netwo
Sounds like a bug report I created a while back. Do transacted sends still
happen asynchronously?
Sent from my iPhone
> On Feb 13, 2014, at 4:18 AM, "tabish...@gmail.com [via ActiveMQ]"
> wrote:
>
> On 02/13/2014 01:53 AM, Gangadhar Rao wrote:
>
> > We did a test recently with the latest CMS
Here's that Jira entry: https://issues.apache.org/jira/browse/AMQ-3166
Is there a setting to "always sync send" on the C++ client library? If so,
that may help with this problem; most likely blocking the producer when the
destination is full on the broker.
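If memory serves, ActiveMQ-CPP exposes this as a connection URI option; something like the following (untested sketch, hostname made up):

```
tcp://broker:61616?connection.alwaysSyncSend=true
```

With sync sends every produce waits for the broker's ack, so producer flow control blocks the client instead of silently buffering.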
--
View this message in context:
htt
Is there an error message in the broker log?
Sent from my iPhone
> On Feb 13, 2014, at 4:29 AM, "sharma_arun_se [via ActiveMQ]"
> wrote:
>
> Agree with you. We have no messages in DLQ, when I restarted ActiveMQ
> service.
>
> But ActiveMQ web-console queue tab (http://localhost:8161/admin
Hello kal,
I am also trying to set up ActiveMQ with LevelDB, but I'm having issues:
ActiveMQ stops processing messages on a queue, and I can't figure out the
issue. Can you share your settings/configuration for ZooKeeper and ActiveMQ
so that I can figure out what is going on?
thanks,
chirag
On Tue, Feb 11
are there any XA or distributed in-doubt transactions in the mix? If so,
each would have its own MBean after a restart, hanging off the broker
MBean.
What is odd is that you still see the problem after a restart. Are the
data files something you could share? I think this needs some
debugging.
On 13
I forgot to mention that I did check KahaDBPersistenceAdapter in JMX, but
the Transactions attribute is empty.
default thread pool in TaskRunnerFactory is initialized with a
corePoolSize=0, i.e. no idle thread should be kept after keepAliveTime
(works as coded)
I do not see a reason why you would want idle threads to be kept for a
longer period, especially if there are several TaskRunnerFactory instances
i
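The corePoolSize=0 reclamation is standard java.util.concurrent behavior rather than anything ActiveMQ-specific; a minimal standalone sketch (the pool sizes and timings below are made up for the demo, not ActiveMQ's actual values):

```java
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class IdleThreadDemo {
    public static void main(String[] args) throws Exception {
        // Same shape as the TaskRunnerFactory default described above:
        // corePoolSize = 0, so a worker is reclaimed once it has been idle
        // for keepAliveTime (here 300 ms for the demo).
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                0, 10, 300, TimeUnit.MILLISECONDS, new SynchronousQueue<Runnable>());

        pool.execute(() -> { /* trivial task, finishes immediately */ });

        Thread.sleep(100);   // still inside keepAliveTime
        System.out.println("busy: " + pool.getPoolSize());

        Thread.sleep(1000);  // well past keepAliveTime
        System.out.println("idle: " + pool.getPoolSize());

        pool.shutdown();
    }
}
```

The worker thread exists only while a task runs or keepAliveTime hasn't elapsed; afterwards the pool drops back to zero threads, which is exactly the "no idle thread should be kept" behavior described above.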
Can you please respond to my second question: why is the thread pool not
keeping the threads?
Thanks,
Anuj
Hey,
I want to increase the checkpoint interval of KahaDB to store the cache on disk.
By doing this can I increase my broker performance?
Does it have any side effects on the broker, or any other issues?
Thanks,
Anuj
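For reference, the interval is set on the kahaDB element in activemq.xml; the default is 5000 ms, and the value below is only illustrative:

```xml
<persistenceAdapter>
  <kahaDB directory="${activemq.data}/kahadb" checkpointInterval="30000"/>
</persistenceAdapter>
```

A longer interval lets more index work be batched, but it also means more journal to replay after an unclean shutdown, so there is a recovery-time trade-off.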
for some contrast, maybe peek at http://camel.apache.org/sjms.html -
which is a springless camel jms component
- parallel producers make sense for persistent messages b/c they
allow more writes to be batched broker side.
- receive vs message listener are very similar b/c the broker prefetch
is
I think this was addressed in https://issues.apache.org/jira/browse/AMQ-4205
defaults are unchanged, but timeouts are configurable now
can you peek at the kahadb bean via jconsole and access the
transactions attribute. It should have some detail on the pending tx.
On 13 February 2014 09:11, janhanse wrote:
> I see several similar others have much the same problem, but I have not found
> any way of fixing it so far. We are runnin
On 02/13/2014 01:53 AM, Gangadhar Rao wrote:
We did a test recently with the latest CMS CPP library.
Scenario:
To test the session commit functionality when queue memory is full on the
destination broker.
Setup:
We have written one client which acts like a producer and the activemq
session is o
I explored a little bit and found that this change was made in v5.6:
https://issues.apache.org/jira/browse/AMQ-3667
Again coming back to the issue: I am now using a pooled connection factory for
my broker and seeing that this task thread is continuously getting created
and destroyed. For
Hi everyone!
Is there a simple way to upgrade hawtio to, for example, version 1.2.2?
If I replace the entire webapps/hawtio/ with the new version of hawtio, I
get many InstanceAlreadyExistsException errors.
I am using ActiveMQ 5.9.0 and hawtio within a web container.
Thanks in advance.
Nader
Hi all,
I am currently rewriting the CXF JMS transport to get rid of Spring JMS.
The two parts that I needed to replace are the JmsTemplate and the
DefaultMessageListenerContainer.
So the question is which features of these I should recreate and which
are not necessary. Some more concrete que
Hey, I am using ActiveMQ v5.8, and I was surprised to see that the *default
value of dedicatedTaskRunner is false, which means ActiveMQ uses a thread
pool by default*. This contradicts the statement given by the ActiveMQ vendors
(http://activemq.apache.org/how-do-i-configure-10s-of-1000s-of-queues-in-a-single
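For reference, the behavior is controlled by a system property, so it can be flipped explicitly in the broker's startup environment; e.g.:

```
ACTIVEMQ_OPTS="$ACTIVEMQ_OPTS -Dorg.apache.activemq.UseDedicatedTaskRunner=true"
```

Setting it to true restores the one-thread-per-destination behavior the linked page describes, at the cost of many more threads with large destination counts.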
I see several similar others have much the same problem, but I have not found
any way of fixing it so far. We are running activemq 5.9.0 with medium load.
Our problem is that the number of kahadb log files is growing every day to
more than 700 files over a 3 months period.
From the kahadb debug l
I had an issue after switching to non-blocking I/O (NIO) in combination with
SSL:
the SSL handshake for ~5000 connections easily stalled the broker, taking
100% CPU.
I'm using ActiveMQ 5.8.
Doing some profiling, it showed that the SSL handshake on the broker side
eats up ~90% of overall CPU t
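For context, the connector in question is configured in activemq.xml along these lines (port and option values are illustrative):

```xml
<transportConnectors>
  <transportConnector name="nio+ssl" uri="nio+ssl://0.0.0.0:61617?maximumConnections=5000"/>
</transportConnectors>
```

Capping maximumConnections, or spreading clients' initial connects over time, can keep a handshake storm from monopolizing the broker's CPU.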