thanks for closing the loop on this one :-)
On 3 March 2011 13:48, robert.sl...@misys.com wrote:
> Just to note that I was able to resolve this issue by changing our code to
> maintain a single connection for the multiple concurrent processing threads.
> The slowdown seems to be caused by continuous opening and closing of queue
> connections to ActiveMQ.
Just to note that I was able to resolve this issue by changing our code to
maintain a single connection for the multiple concurrent processing threads.
The slowdown seems to be caused by continuous opening and closing of queue
connections to ActiveMQ.
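In case it helps anyone else hitting this thread, here is a minimal sketch of that "one shared connection, one session per thread" arrangement in plain JMS. The broker URL, queue name and message counts are made up for illustration; it is not the poster's actual code.

    import javax.jms.Connection;
    import javax.jms.MessageProducer;
    import javax.jms.Session;
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class SharedConnectionSketch {
        public static void main(String[] args) throws Exception {
            // One connection shared by all worker threads; JMS connections are
            // thread safe, sessions are not, so each thread creates its own session.
            ActiveMQConnectionFactory factory =
                    new ActiveMQConnectionFactory("tcp://localhost:61616"); // illustrative URL
            Connection connection = factory.createConnection();
            connection.start();

            Runnable worker = () -> {
                Session session = null;
                try {
                    session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                    MessageProducer producer =
                            session.createProducer(session.createQueue("TEST.QUEUE"));
                    for (int i = 0; i < 100; i++) {
                        producer.send(session.createTextMessage("payload " + i));
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                } finally {
                    try { if (session != null) session.close(); } catch (Exception ignore) {}
                }
            };

            Thread t1 = new Thread(worker);
            Thread t2 = new Thread(worker);
            t1.start();
            t2.start();
            t1.join();
            t2.join();
            connection.close();
        }
    }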
I took a look at the XAPooledConnectionFactory, but I am struggling to
understand how I could use this when connecting to the broker from a
Glassfish container via the ActiveMQ JCA resource adapter. Is this
configurable within the resource adapter or JCA connection pool somehow?
Here is the ra.xml I use:
http://activemq.2283324.n4.nabble.com/file/n3090748/ra.xml ra.xml
I am using a prefetch of 0. This is configured in the ra.xml of the ActiveMQ
resource adapter we use to connect to Glassfish. I configured it by adding
"&jms.prefetchPolicy.all=0" to the connection URL. Just for good measure I
also added the equivalent prefetch setting directly into the ra.xml as well.
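Outside the resource adapter, the same option can be set directly on the connection factory; a minimal sketch, assuming a standalone client and a local broker URL (both are assumptions, not part of the original setup):

    import javax.jms.Connection;
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class PrefetchZeroSketch {
        public static void main(String[] args) throws Exception {
            // Prefetch 0 for all consumer types, encoded in the broker URL in the
            // same way as in the connection URL mentioned above.
            ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(
                    "tcp://localhost:61616?jms.prefetchPolicy.all=0");

            // Equivalent programmatic form, for code that builds the factory itself.
            factory.getPrefetchPolicy().setAll(0);

            Connection connection = factory.createConnection();
            connection.start();
            // ... create sessions and consumers as usual ...
            connection.close();
        }
    }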
what prefetch value are you using?
http://activemq.apache.org/what-is-the-prefetch-limit-for.html
On 16 December 2010 11:08, robert.sl...@misys.com
wrote:
>
> Thanks Reynald, that utility could be useful. I think I have already pinned
> down this problem to be opening and closing of connections to the broker.
Thanks Reynald, that utility could be useful. I think I have already pinned
down this problem to be opening and closing of connections to the broker. If
I share a single or a couple of connections for all processing, I do not get
the broker slowdown issue. Unfortunately, our application runs in Glassfish ...
Here is a timeline distribution of the GC activity. The period is quite short
(a little more than 3 minutes), so it is hard to deduce anything, but we can
see that there is a 30-second stretch, between 45 and 75 seconds, where the GC
activity is more intense. All the full GCs take around ...
You can use the ActiveMQ pooled connection factory to make pooling
transparent to your session beans for message production; it should be
a case of swapping ActiveMQConnectionFactory with
PooledConnectionFactory wherever the JMS resource is configured.
Alternatively, for short-lived consumers, ...
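A minimal standalone sketch of that swap; broker URL, queue name and pool size are illustrative assumptions, and inside the container the factory would of course come from JNDI instead of being built by hand:

    import javax.jms.Connection;
    import javax.jms.MessageProducer;
    import javax.jms.Session;
    import org.apache.activemq.ActiveMQConnectionFactory;
    import org.apache.activemq.pool.PooledConnectionFactory;

    public class PooledProducerSketch {
        public static void main(String[] args) throws Exception {
            PooledConnectionFactory pooled = new PooledConnectionFactory(
                    new ActiveMQConnectionFactory("tcp://localhost:61616"));
            pooled.setMaxConnections(8); // upper bound on pooled connections

            // createConnection()/close() now borrow from and return to the pool,
            // so per-request open/close code no longer hammers the broker.
            Connection connection = pooled.createConnection();
            try {
                connection.start();
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageProducer producer =
                        session.createProducer(session.createQueue("TEST.QUEUE"));
                producer.send(session.createTextMessage("hello"));
            } finally {
                connection.close(); // returns the underlying connection to the pool
            }

            pooled.stop(); // shut the pool down when the application exits
        }
    }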
I believe the EOFException, the blocking behaviour and the slowdown are all
caused by creating and closing connections for each read/write.
In the standalone J2SE application I attached in an earlier post, each
thread creates a new connection for each request before closing down the
resources.
What I meant is: I see some messages all right, but durable topics are
no longer durable, and this is a showstopper for me.
Gary Tully
writes:
> hmmm, just had a peek at the memory topic store, and it uses an LRU
> cache for the message map with a limit of 100, so it is not ideal for
> your use case. But you should see some messages with that impl.
> Extending the memory store to change that is one option.
hmmm, just had a peek at the memory topic store, and it uses an LRU
cache for the message map with a limit of 100, so it is not ideal for
your use case. But you should see some messages with that impl.
Extending the memory store to change that is one option.
On 14 December 2010 16:18, Aleksandar Ivanisevic wrote:
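For anyone wondering what a size-limited LRU cache like that behaves like, a generic sketch follows; this is purely an illustration of the eviction behaviour described above, not the actual ActiveMQ class:

    import java.util.LinkedHashMap;
    import java.util.Map;

    // Access-ordered map with a hard cap: once more than `limit` entries are
    // present, the least recently used one is evicted, which is why only the
    // most recent messages survive in a store built on this kind of cache.
    public class LruCache<K, V> extends LinkedHashMap<K, V> {
        private final int limit;

        public LruCache(int limit) {
            super(16, 0.75f, true); // accessOrder = true gives LRU eviction order
            this.limit = limit;
        }

        @Override
        protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
            return size() > limit;
        }
    }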
Not really. Just tested on 5.4.1-fuse and messages are gone with the
wind ;)
Gary Tully
writes:
> with the caveat that there is no durability of the message. But yes,
> when a durable sub is offline, the persistent messages will be held in
> memory.
>
> On 14 December 2010 15:38, Aleksandar Ivanisevic wrote:
with the caveat that there is no durability of the message. But yes,
when a durable sub is offline, the persistent messages will be held in
memory.
On 14 December 2010 15:38, Aleksandar Ivanisevic
wrote:
> Gary Tully
> writes:
>
>> if you set persistent=false on BrokerService, (broker in xml config)
>> it will use an in memory store.
Gary Tully
writes:
> if you set persistent=false on BrokerService, (broker in xml config)
> it will use an in memory store.
And durable topics will keep working as they should?
>
> On 14 December 2010 10:01, Aleksandar Ivanisevic
> wrote:
>> Gary Tully
>> writes:
>>
>>> Non persistent messages can be sent to a durable sub, but the durable
>>> sub will only get the messages if it is connected; a backlog will not be
>>> retained as the messages will not be stored.
Here are the gc.log and activemq logs, once again taken from a run where I
allowed it to process, block and then continue processing:
http://activemq.2283324.n4.nabble.com/file/n3087092/logs_verbose_gc.rar
logs_verbose_gc.rar
if you set persistent=false on BrokerService, (broker in xml config)
it will use an in memory store.
On 14 December 2010 10:01, Aleksandar Ivanisevic
wrote:
> Gary Tully
> writes:
>
>> Non persistent messages can be sent to a durable sub, but the durable
>> sub will only get the messages if it is connected; a backlog will not be
>> retained as the messages will not be stored.
>> It will behave like a regular topic subscription in this regard.
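For reference, a minimal sketch of the same persistent=false setting applied to an embedded broker started from Java rather than the XML config; the connector URL is an assumption:

    import org.apache.activemq.broker.BrokerService;

    public class InMemoryBrokerSketch {
        public static void main(String[] args) throws Exception {
            BrokerService broker = new BrokerService();
            broker.setPersistent(false); // in-memory store only, nothing written to disk
            broker.addConnector("tcp://localhost:61616");
            broker.start();
            broker.waitUntilStopped(); // block until the broker is shut down
        }
    }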
Hi,
There are no GC activity logs in the file you sent. How did you enable
verbose GC logging?
Try to use the following parameters:
-verbose:gc -XX:+PrintGC -XX:+PrintGCTimeStamps -Xloggc:gc.log
It will store all GC related logs in the gc.log file, more convenient
for analysis.
Regards,
Reynald
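As a complementary in-process check (not a replacement for the flags above), the standard GarbageCollectorMXBean API can be polled from a small helper; a rough sketch, with the polling interval chosen arbitrarily:

    import java.lang.management.GarbageCollectorMXBean;
    import java.lang.management.ManagementFactory;

    public class GcProbe {
        public static void main(String[] args) throws InterruptedException {
            // Print cumulative GC counts and times every 10 seconds so sudden
            // bursts of collection activity show up alongside the broker logs.
            while (true) {
                for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                    System.out.printf("%s: count=%d time=%dms%n",
                            gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
                }
                Thread.sleep(10_000);
            }
        }
    }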
In order to try to work out just what is happening during these blocking
periods, I have used VisualVM to perform CPU profiling and to take a heap dump
and a thread dump when the broker becomes unresponsive. Here are the results:
http://activemq.2283324.n4.nabble.com/file/n3086836/visual_vm_analysis_du
Here are the activemq logs from a test run where I allowed it to begin
processing, block once, and then waited for it to resume processing, with
verbose garbage collection output turned on:
http://activemq.2283324.n4.nabble.com/file/n3086805/logs_with_verbose_gc.rar
logs_with_verbose_gc.rar
I have left the test application running overnight; it is still consuming
and producing messages but is blocking very often now (every 15-20 seconds
for around 10 seconds each time).
I used VisualVM to analyse the garbage collection, which seems to be running
around once every two minutes or so.
Gary Tully
writes:
> Non persistent messages can be sent to a durable sub, but the durable
> sub will only get the messages if it is connected; a backlog will not be
> retained as the messages will not be stored.
> It will behave like a regular topic subscription in this regard.
Thanks, that's what ...
Non persistent messages can be sent to a durable sub, but the durable
sub will only get the messages if it is connected; a backlog will not be
retained as the messages will not be stored.
It will behave like a regular topic subscription in this regard.
On 14 December 2010 09:13, Aleksandar Ivanisevic wrote:
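To make that behaviour concrete, a small sketch: with the durable subscriber offline, only the PERSISTENT message would still be there when it reconnects. The broker URL, client id, topic and subscription names are all made up for illustration.

    import javax.jms.Connection;
    import javax.jms.DeliveryMode;
    import javax.jms.MessageProducer;
    import javax.jms.Session;
    import javax.jms.Topic;
    import javax.jms.TopicSubscriber;
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class DurableSubSketch {
        public static void main(String[] args) throws Exception {
            Connection connection = new ActiveMQConnectionFactory("tcp://localhost:61616")
                    .createConnection();
            connection.setClientID("client-1"); // required for durable subscriptions
            connection.start();

            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Topic topic = session.createTopic("TEST.TOPIC");

            // Register the durable subscription; while it is offline, only messages
            // sent with DeliveryMode.PERSISTENT are kept for it.
            TopicSubscriber subscriber = session.createDurableSubscriber(topic, "sub-1");
            subscriber.close(); // subscription stays registered, consumer goes offline

            MessageProducer producer = session.createProducer(topic);
            producer.setDeliveryMode(DeliveryMode.PERSISTENT);     // retained for the offline sub
            producer.send(session.createTextMessage("kept"));
            producer.setDeliveryMode(DeliveryMode.NON_PERSISTENT); // dropped for the offline sub
            producer.send(session.createTextMessage("not kept"));

            connection.close();
        }
    }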
FWIW, I've nailed down my problem: it was a heavy I/O process running
on the host node, sucking out all the I/O from the broker that was
running in a (somewhat misconfigured) VM.
That being said, is it possible to have a "diskless" broker and still
have persistence?
All my messages are persistent ...
Hi guys,
Have you tried to enable GC logs to see if maybe a full GC is happening while
everything is slowed down?
In the past I encountered the same issue, where for an unknown reason the GC
activity was suddenly high, making a full GC of 0.2 seconds every second. I
spotted this issue because ...
I have tried each of the different cursor types with 5.4.1, but since
downloading the 5.4.2 version, I have not changed the cursor used, so this
is the default configuration, where no cursor is defined for queues - this
should result in the default store cursor being used.
I have just tried with ...
"robert.sl...@misys.com"
writes:
> Also, here is the broker configuration I used:
>
> http://activemq.2283324.n4.nabble.com/file/n3085451/activemq.xml
> activemq.xml
I see the cursors are commented out. Why?
Have you tried enableJournalDiskSyncs="false" within <kahaDB/>?
I am having similar problems.
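I have not verified this myself, but I believe the programmatic equivalent of that attribute looks roughly like the sketch below; the KahaDBPersistenceAdapter wiring and the connector URL are assumptions on my part, not something taken from the configs in this thread:

    import org.apache.activemq.broker.BrokerService;
    import org.apache.activemq.store.kahadb.KahaDBPersistenceAdapter;

    public class NoJournalSyncBrokerSketch {
        public static void main(String[] args) throws Exception {
            KahaDBPersistenceAdapter kahaDB = new KahaDBPersistenceAdapter();
            kahaDB.setEnableJournalDiskSyncs(false); // trade durability for write throughput

            BrokerService broker = new BrokerService();
            broker.setPersistenceAdapter(kahaDB);
            broker.addConnector("tcp://localhost:61616");
            broker.start();
            broker.waitUntilStopped();
        }
    }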
Also, here is the broker configuration I used:
http://activemq.2283324.n4.nabble.com/file/n3085451/activemq.xml
activemq.xml
Heap dump taken after killing the test application and forcing a GC:
http://activemq.2283324.n4.nabble.com/file/n3085363/activemq-heapdump-after-test.rar
activemq-heapdump-after-test.rar
The activemq heap dump during execution of the sample test case:
http://activemq.2283324.n4.nabble.com/file/n3085361/activemq-heapdump-during-test-after-5-mins.rar
activemq-heapdump-during-test-after-5-mins.rar
Here is the sample NetBeans project that I have used to highlight the
problem:
http://activemq.2283324.n4.nabble.com/file/n3085358/StandaloneJMSWriteConsume.rar
StandaloneJMSWriteConsume.rar
It is very crude; simply run the ConcurrentJMSWriteConsumeJ2SETest class and
it will continue to write and consume messages ...
I have attached a compressed rar file containing the broker debug trace:
http://activemq.2283324.n4.nabble.com/file/n3085354/logs.rar logs.rar
Due to attachment file size limits, I will attempt in separate posts to add
the broker configuration and the crude NetBeans project I used to show this
behaviour.
I have just downloaded version 5.4.2 and run the same standalone J2SE test
case I developed to highlight this issue against this version, but I am
still experiencing the same behaviour as with 5.4.1.