Thanks for all your replies.
Jun, perhaps you are right. There seems to be no other choice but to use
more brokers, at least for the current version.
Jay, you've got what I mean. And thanks for sharing your approach, though
we are under different circumstances.
Magnus, as you said, my consumer p
Sounds to me like your consumer performance is the problem, not the producer.
So either make your consumers faster, or have them keep consuming but drop
messages to keep up with the producer speed.
This also gives you some means of keeping track of how many messages are
lost, why, and when.
And just for refere
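A minimal sketch of that drop-and-count idea (a hypothetical helper class, not part of Kafka; you would place it between your fetch thread and your slower processing threads):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch: a bounded hand-off buffer between a fetch thread and a
// slower processing thread. When the buffer is full, new messages are dropped
// and counted rather than blocking the fetcher, so the consumer keeps up and
// you can report how many messages were lost.
class DroppingBuffer<T> {
    private final ArrayBlockingQueue<T> queue;
    private final AtomicLong dropped = new AtomicLong();

    DroppingBuffer(int capacity) {
        this.queue = new ArrayBlockingQueue<>(capacity);
    }

    // Non-blocking put: returns immediately, recording a drop when full.
    void offer(T msg) {
        if (!queue.offer(msg)) {
            dropped.incrementAndGet();
        }
    }

    // The processing side takes from here (null when empty).
    T poll() {
        return queue.poll();
    }

    long droppedCount() {
        return dropped.get();
    }
}
```

The drop counter is what lets you report loss instead of silently falling behind; logging a timestamp alongside each increment would also tell you when the peaks occur.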
I think what you are asking for is backpressure from the broker to the
producer. I.e. when the broker gets close to full it would start to slow
down the producer to let the consumer catch up. This is a fairly typical
thing for a message broker to do.
Our approach is different though. We have found
You can use more brokers. Another thing is to enable compression in the
producer, if you haven't done so.
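For reference, in the 0.8-era producer this should just be a config property, something like the following (property name as in the 0.8 producer config docs; newer Java clients call it compression.type, so verify against your version):

```
# producer properties (0.8-style producer)
compression.codec=snappy   # or gzip; default is none
```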
Thanks,
Jun
On Wed, Dec 11, 2013 at 11:42 PM, xingcan wrote:
> Guozhang,
>
> Thanks for your prompt reply. I got two 300GB SAS disks for each broker.
> At peak time, the produce speed for each broker is about 70MB/s.
Guozhang,
Thanks for your prompt reply. I got two 300GB SAS disks for each broker.
At peak time, the produce speed for each broker is about 70MB/s. Apparently,
this speed is already restricted by the network. However, the consume speed
is lower, as some topics are consumed by more than one group. Under
One possible approach is to change the retention policy on the broker.
How much data can accumulate on the brokers at peak time?
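Retention is controlled per broker in server.properties; something like the following (property names as listed in the 0.8 broker config section, and the values here are only illustrative, so check them against your version and disk budget):

```
# broker server.properties
log.retention.hours=24             # keep log segments at most a day
log.retention.bytes=107374182400   # and/or cap retained bytes per partition
```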
Guozhang
On Wed, Dec 11, 2013 at 9:09 PM, xingcan wrote:
> Hi,
>
> In my application, the produce speed could be very high at some specific
> time in a day while