@harry143, not easily. Do you have a test case you can share? I would
like to get to the bottom of this, but it would be great to have
some shared code that correctly captures the use case.
Something in JUnit would be ideal.
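For reference, a rough sketch of such a test against an embedded broker (the queue name, message counts and payload size below are illustrative placeholders, not figures from this thread):

import javax.jms.BytesMessage;
import javax.jms.Connection;
import javax.jms.MessageProducer;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.broker.BrokerService;
import org.junit.Test;

public class ProducerBacklogThroughputTest {

    @Test
    public void producerRateDropsWithBacklog() throws Exception {
        // Embedded, persistent broker so the message store is exercised.
        BrokerService broker = new BrokerService();
        broker.setPersistent(true);
        broker.setDataDirectory("target/activemq-data");
        broker.addConnector("tcp://localhost:61616");
        broker.start();

        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");
        factory.setUseAsyncSend(true);

        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer =
                session.createProducer(session.createQueue("TEST.BACKLOG"));

        byte[] payload = new byte[2 * 1024]; // 2KB messages, arbitrary size

        // Phase 1: no consumer attached, so every send adds to the backlog.
        sendAndMeasure(session, producer, payload, 20000, "while building backlog");

        // Phase 2: send again against the accumulated backlog and compare rates.
        sendAndMeasure(session, producer, payload, 5000, "with existing backlog");

        connection.close();
        broker.stop();
    }

    private void sendAndMeasure(Session session, MessageProducer producer,
                                byte[] payload, int count, String label) throws Exception {
        long start = System.currentTimeMillis();
        for (int i = 0; i < count; i++) {
            BytesMessage message = session.createBytesMessage();
            message.writeBytes(payload);
            producer.send(message);
        }
        long elapsed = System.currentTimeMillis() - start;
        System.out.printf("%s: %d msgs in %d ms (%.0f msg/s)%n",
                label, count, elapsed, count * 1000.0 / elapsed);
    }
}

Attaching or detaching a consumer between the two phases would reproduce the scenario being discussed, and an assertion on the ratio of the two rates would turn it into a pass/fail case.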
On 21 December 2011 16:23, harry143 wrote:
> Yeah, I tried with optimized
Yeah, I tried with optimizedDispatch="false" as well as "true", but it really
did not affect much.
The problem of degradation in producer throughput remained.
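For anyone following along: the optimizedDispatch switch can also be set programmatically through a destination policy. A minimal sketch, assuming an embedded BrokerService and a catch-all ">" queue policy:

import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.broker.region.policy.PolicyEntry;
import org.apache.activemq.broker.region.policy.PolicyMap;

public class OptimizedDispatchConfig {
    public static void main(String[] args) throws Exception {
        PolicyEntry queuePolicy = new PolicyEntry();
        queuePolicy.setQueue(">");               // apply to every queue
        queuePolicy.setOptimizedDispatch(true);  // the setting being toggled above

        PolicyMap policyMap = new PolicyMap();
        policyMap.setDefaultEntry(queuePolicy);

        BrokerService broker = new BrokerService();
        broker.setDestinationPolicy(policyMap);
        broker.addConnector("tcp://localhost:61616");
        broker.start();
        broker.waitUntilStopped();
    }
}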
@gary: hey, you mentioned in your previous post that the producer and consumer
share a common resource, the "consumer dispatch queue". Is it also the
Gary,
I moved on to HornetQ as our underlying transport technology after we were
not able to debug/fix this particular issue. I tried looking into the code
etc.
Best
Bhupesh
On Wed, Dec 21, 2011 at 2:20 AM, Gary Tully [via ActiveMQ] <
ml-node+s2283324n4221130...@n4.nabble.com> wrote:
> @Bhupes
@Bhupesh,
the prefetch may be part of the problem, as by default the broker will
try to dispatch 1000 messages to each consumer. If the consumer
(STOMP connection) is short-lived, this is a waste of resources.
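A minimal sketch of lowering the queue prefetch from the client side; the values are illustrative, and for a STOMP subscriber the equivalent (as far as I know) is the activemq.prefetchSize header on SUBSCRIBE:

import javax.jms.Connection;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.ActiveMQPrefetchPolicy;

public class LowPrefetchConsumer {
    public static void main(String[] args) throws Exception {
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");

        // Option 1: lower the prefetch for all consumers created from this factory.
        ActiveMQPrefetchPolicy prefetch = new ActiveMQPrefetchPolicy();
        prefetch.setQueuePrefetch(10);   // default is 1000
        factory.setPrefetchPolicy(prefetch);

        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        // Option 2: per-consumer override via a destination option.
        MessageConsumer consumer = session.createConsumer(
                session.createQueue("TEST.BACKLOG?consumer.prefetchSize=10"));

        System.out.println("received: " + consumer.receive(5000));
        connection.close();
    }
}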
A consumer's acks will contend with message production to some extent; this
is expected as t
I am facing a similar problem.
Whenever my consumer goes down and there is a data backlog, the production
rate drops significantly.
Moreover, when my consumer is up again, the production rate goes down again,
and this cycle goes on.
Some info:
I am using producer flow control = FALSE, async
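For context, that combination expressed through the Java API rather than activemq.xml looks roughly like this (a sketch; the broker URL and the catch-all queue policy are placeholders):

import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.broker.region.policy.PolicyEntry;
import org.apache.activemq.broker.region.policy.PolicyMap;

public class NoFlowControlSetup {
    public static void main(String[] args) throws Exception {
        // Broker side: producer flow control off for all queues.
        PolicyEntry entry = new PolicyEntry();
        entry.setQueue(">");
        entry.setProducerFlowControl(false);

        PolicyMap map = new PolicyMap();
        map.setDefaultEntry(entry);

        BrokerService broker = new BrokerService();
        broker.setDestinationPolicy(map);
        broker.addConnector("tcp://localhost:61616");
        broker.start();

        // Client side: async sends, so the producer does not block waiting
        // for a broker receipt on each persistent message.
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");
        factory.setUseAsyncSend(true);
    }
}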
Hey Folks,
I tried concurrentStoreAndDispatchQueues="false" and it didn't help. I
still see around a 10x drop in producer throughput with a backlog.
1 queue, 8 producers, 2 consumers, no backlog: 1200 QPS (producer), 1200
QPS (consumer)
1 queue, 8 producers, 2 consumers, 4GB backlog (2M events)
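For reference, the flag in question sits on the KahaDB persistence adapter; a minimal programmatic sketch, equivalent to concurrentStoreAndDispatchQueues="false" on the kahaDB element in activemq.xml:

import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.store.kahadb.KahaDBPersistenceAdapter;

public class KahaDBStoreConfig {
    public static void main(String[] args) throws Exception {
        KahaDBPersistenceAdapter kahaDB = new KahaDBPersistenceAdapter();
        // Force messages to be written to the store before dispatch is attempted.
        kahaDB.setConcurrentStoreAndDispatchQueues(false);

        BrokerService broker = new BrokerService();
        broker.setPersistenceAdapter(kahaDB);
        broker.addConnector("tcp://localhost:61616");
        broker.start();
        broker.waitUntilStopped();
    }
}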
Hi Gary,
We have also observed this problem: when the backlog piles up (e.g. because
consumers are disconnected for some reason, like a network outage), the
producers slow down as well, even when producer flow control is disabled and
the send is asynchronous.
Thanks and regards
Kaustubh
On Tue, Sep 13, 2011 at 3:18 A
Hey Gary,
I will try to write a test case, but based on my JProfiler results it looks
to me like the contention is on a write lock, between the removeMessages()
calls made after the acks are received from the client side and the incoming
producer messages.
I am going to play with producer-flow-control settings and other
configurations
I also noticed this problem. When there is high throughput and consumers
get bogged down working in between messages, they eventually get dropped
and must re-open a connection or they will stop receiving messages.
The problem with that is that consumers will have to actively monitor
their connections
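One way to avoid hand-rolling that monitoring is the failover transport, which reconnects automatically, plus an ExceptionListener to surface drops; a rough sketch (broker URL and queue name are placeholders):

import javax.jms.Connection;
import javax.jms.ExceptionListener;
import javax.jms.JMSException;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class ReconnectingConsumer {
    public static void main(String[] args) throws Exception {
        // failover: keeps retrying the broker when the connection is lost.
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("failover:(tcp://localhost:61616)");

        Connection connection = factory.createConnection();
        connection.setExceptionListener(new ExceptionListener() {
            public void onException(JMSException e) {
                // Surface asynchronous connection failures instead of stalling silently.
                System.err.println("connection problem: " + e);
            }
        });
        connection.start();

        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageConsumer consumer =
                session.createConsumer(session.createQueue("TEST.BACKLOG"));
        System.out.println("received: " + consumer.receive(5000));
        connection.close();
    }
}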
Based on the results of your jprobe profiling, it would be good to identify
if there is a real contention problem there.
If you can generate a simple JUnit test case that demonstrates the
behavior you are seeing, please open a JIRA issue and we can
investigate some more.
A test case will help focus the a
For the queue case, with backlogs (when the consumers don't keep up), you may
want to experiment with
On 12 September 2011 01:08, bbansal wrote:
> Hello folks,
>
> I am evaluating ActiveMQ for some simple scenarios. The web-server will push
> notifications to the queue/topic to be consumed by one or many consumers.
This should be fine. By default this will use a store cursor, which can handle
the overflow up to your storeLimit. As long as you are using either a store
cursor or a file cursor, you can overflow messages on the broker to the message
store or temp disk storage - just take care not to use the vm cursor
Thanks,
I think I have disabled producer flow control in my config as
Is this sufficient or do I need
http://activemq.apache.org/producer-flow-control.html
On Sep 11, 2011, at 6:08 PM, bbansal wrote:
> Hello folks,
>
> I am evaluating ActiveMQ for some simple scenarios. The web-server will push
> notifications to the queue/topic to be consumed by one or many consumers.
> The one requirement is