Hi James,
   We are doing a stress test on ActiveMQ to see whether it can meet our
production requirement of 3K messages per second. During the stress test we
started with a small number of messages; here are the details. After sending
130K messages, the broker started throwing OutOfMemory errors. In the JMX
console we could see the memory usage increasing gradually and never coming
down. Is there a way to force a GC, say at an interval of 30 minutes, in the
memory manager configuration? The GC settings are already in place at the
JVM level as -D options, but the memory still grows up to the limit we have
set in the memory manager element of broker-config.xml, and once it reaches
that limit the broker starts throwing OutOfMemory errors.

Our machines are 8 GB, 2-CPU Intel boxes with a heap size of 1024 MB. We
are not setting any extra parameters through the connection factory except
copyOnMessageSend set to false; the default prefetch sizes are used, and
persistence is on by default. The persistence store is Oracle 10g, without
the journal. Should we use the journal? If we do, can we still persist the
messages in the Oracle database, or will the data always be written to file
storage?
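For reference, here is roughly what we understand the relevant pieces of
broker-config.xml to look like (ActiveMQ 4.x xbean syntax; the memory limit
and the oracle-ds datasource name are placeholders, not our actual values):

```xml
<broker xmlns="http://activemq.org/config/1.0" brokerName="localhost">

  <!-- The broker-wide memory limit discussed above (example value only) -->
  <memoryManager>
    <usageManager id="memory-manager" limit="512 MB"/>
  </memoryManager>

  <!-- Journaled JDBC: the journal spools messages to local data files
       first, but they are still persisted long-term to the referenced
       JDBC datasource (e.g. an Oracle DataSource bean) -->
  <persistenceAdapter>
    <journaledJDBC journalLogFiles="5" dataDirectory="activemq-data"
                   dataSource="#oracle-ds"/>
  </persistenceAdapter>

</broker>
```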

Could you please give us some pointers to tackle this problem? This is quite
important for us in making a few strategic decisions.

Regards
-Sambit    

James.Strachan wrote:
> 
> Try turning off asyncSend. Also this might help when you've done that
> http://activemq.apache.org/my-producer-blocks.html
> 
> 
> 
> On 3/22/07, Po Cheung <[EMAIL PROTECTED]> wrote:
>>
>> Here is some more info if that would help.  We have four persistent
>> queues A,
>> B, C, and D.
>>
>> - Client 1 sends a message each to queue A, B, and C at an average rate
>> of
>> about 2 per second.
>> - Client 2 receives a message from queue A, processes it, and sends a
>> result
>> message to queue D.
>> - Client 3 receives a message from queue B, processes it, and sends  a
>> result message to queue D.
>> - Client 4 receives a message from queue C, processes it, and sends  a
>> result message to queue D.
>> - Client 1 receives and processes result messages from queue D on a
>> separate
>> thread.
>>
>> Client1: useAsyncSend=true
>>
>> Po
>>
>>
>> Po Cheung wrote:
>> >
>> >
>> > Po Cheung wrote:
>> >>
>> >> We got OutOfMemory errors on both the broker and client after
>> >> sending/receiving 600K messages to persistent queues with
>> transactions.
>> >> The memory graph below shows the heap usage of the broker growing
>> >> gradually.  Max heap size is 512MB.  UsageManager is also at 512MB
>> (will
>> >> that cause a problem?  should it be less than the max heap size?). 
>> When
>> >> JMS transaction is turned off, the heap usage never exceeds 10MB and
>> we
>> >> do not run out of memory.  There is no backlog in the queues so it
>> should
>> >> not be a fast producer, slow consumer issue.  Are there any known
>> issues
>> >> of memory leaks in ActiveMQ with transacted messages?
>> >>
>> >> Details:
>> >> - ActiveMQ 4.1.1 SNAPSHOT
>> >> - Kaha
>> >> - Default pre-fetch limit
>> >>
>> >> Po
>> >>
>> >  http://www.nabble.com/file/7328/ActiveMQHeap.gif
>> >
>>
>> --
>> View this message in context:
>> http://www.nabble.com/Out-of-memory-errors-tf3443750s2354.html#a9618502
>> Sent from the ActiveMQ - User mailing list archive at Nabble.com.
>>
>>
> 
> 
> -- 
> 
> James
> -------
> http://radio.weblogs.com/0112098/
> 
> 

-- 
View this message in context: 
http://www.nabble.com/Out-of-memory-errors-tf3443750s2354.html#a11307579
Sent from the ActiveMQ - User mailing list archive at Nabble.com.
