@Bhupesh,
the prefetch may be part of the problem, as by default the broker will
try to dispatch up to 1000 messages to each queue consumer. If the
consumer (stomp connection) is short lived, this is a waste of resources.
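
As a rough sketch (the queue name and prefetch value here are just
placeholders for your setup), a stomp subscriber can lower its prefetch
with ActiveMQ's activemq.prefetchSize header on the SUBSCRIBE frame
(^@ marks the null byte that ends each frame):

  SUBSCRIBE
  destination:/queue/test
  ack:client
  activemq.prefetchSize:10

  ^@

With a small prefetch, a short lived connection only ever has a handful
of unacked messages dispatched to it, so there is much less to redeliver
when it goes away. The broker side default can also be lowered with a
queuePrefetch policyEntry in the destinationPolicy of activemq.xml.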

A consumer's acks will contend with message production to some extent;
this is expected as they share a resource, the consumer dispatch queue.
Batching acks, either by using client ack mode or transactions, helps
reduce that overhead.
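
For example (again only a sketch, with placeholder ids), with ack:client
ActiveMQ treats an ACK as cumulative, so acking every Nth MESSAGE
acknowledges it and everything dispatched before it on that subscription:

  ACK
  message-id:<id from the Nth MESSAGE frame>

  ^@

or the acks can be grouped in a transaction and committed in one go:

  BEGIN
  transaction:tx1

  ^@
  ACK
  message-id:<some message id>
  transaction:tx1

  ^@
  COMMIT
  transaction:tx1

  ^@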

If you have not yet tried a 5.6-SNAPSHOT, can you verify whether it behaves the same?


On 13 September 2011 08:54, bbansal <bhup...@groupon.com> wrote:
> Hey Folks,
>
> I tried concurrentStoreAndDispatchQueues="false" and it didn't help. I
> still see around a 10X drop in producer throughput with a backlog.
>
> 1 queue, 8 producers, 2 consumers, no backlog: 1200 QPS (producer),
> 1200 QPS (consumer)
> 1 queue, 8 producers, 2 consumers, 4GB backlog (2M events): 120 QPS
> (producer), 1200 QPS (consumer)
>
> I am attaching the scripts I am using; unfortunately I am using stomp and a
> Perl-based consumer/producer setup.
>
> Best
> Bhupesh
> http://activemq.2283324.n4.nabble.com/file/n3809392/testcase.tar.gz
> testcase.tar.gz
> http://activemq.2283324.n4.nabble.com/file/n3809392/activemq-stomp.xml
> activemq-stomp.xml
>
>
>



-- 
http://fusesource.com
http://blog.garytully.com
