I actually meant to say that you typically don't need to bump up the
queued chunk setting - you can profile your consumer to see if
significant time is being spent waiting on dequeuing from the chunk
queues.
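For example (just an illustrative sketch against the 0.8.x high-level
consumer API, with a made-up topic, group id and ZooKeeper address;
handle() stands in for your real processing), you can time how long the
iterator blocks versus how long your handler runs:

import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;
import kafka.message.MessageAndMetadata;

public class DequeueWaitProfiler {
  public static void main(String[] args) {
    Properties props = new Properties();
    props.put("zookeeper.connect", "localhost:2181"); // assumption: local ZK
    props.put("group.id", "profiling-group");         // hypothetical group id
    props.put("auto.commit.enable", "true");

    ConsumerConnector connector =
        Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
    Map<String, List<KafkaStream<byte[], byte[]>>> streams =
        connector.createMessageStreams(Collections.singletonMap("my-topic", 1));
    ConsumerIterator<byte[], byte[]> it = streams.get("my-topic").get(0).iterator();

    long waitNanos = 0, processNanos = 0, count = 0;
    while (true) {
      long t0 = System.nanoTime();
      if (!it.hasNext()) break;               // blocks while the chunk queue is empty
      long t1 = System.nanoTime();
      MessageAndMetadata<byte[], byte[]> msg = it.next();
      handle(msg.message());                  // your real processing goes here
      long t2 = System.nanoTime();
      waitNanos += t1 - t0;
      processNanos += t2 - t1;
      if (++count % 100000 == 0) {
        // If wait dominates, the consumer is starved for data (fetch/network
        // bound) and a deeper chunk queue won't help; if process dominates,
        // the handler is the bottleneck.
        System.out.printf("wait=%dms process=%dms%n",
            waitNanos / 1_000_000, processNanos / 1_000_000);
      }
    }
  }

  private static void handle(byte[] payload) {
    // placeholder for application logic
  }
}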
If you happen to have a consumer consuming from a remote data center,
then you should cons
Thanks for the info. I will have to tune the memory. What else do you
recommend for the high-level consumer to get optimal performance and
drain as quickly as possible with auto-commit on?
Thanks,
Bhavesh
On Tue, Nov 4, 2014 at 9:59 AM, Joel Koshy wrote:
We used to default to 10, but two should be sufficient; there is little
reason to buffer more than that. If you increase it to 2000 you will
most likely run into memory issues. E.g., if your fetch size is 1MB you
would enqueue up to 2000 chunks of 1MB each (roughly 2GB) in each queue.
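To put rough numbers on that (just an illustrative sketch, not from the
original mail; the two property names are the 0.8.x high-level consumer
settings behind those defaults, and the stream count is made up):

import java.util.Properties;

public class ChunkQueueMemoryEstimate {
  public static void main(String[] args) {
    Properties props = new Properties();
    props.put("queued.max.message.chunks", "2");      // default: 2 chunks buffered per stream queue
    props.put("fetch.message.max.bytes", "1048576");  // default: 1 MB per fetch chunk

    int chunksPerQueue = Integer.parseInt(props.getProperty("queued.max.message.chunks"));
    long bytesPerChunk = Long.parseLong(props.getProperty("fetch.message.max.bytes"));
    int numStreams = 4;                               // hypothetical stream/thread count

    // Rough upper bound on buffered-chunk memory for this consumer process:
    long worstCase = (long) numStreams * chunksPerQueue * bytesPerChunk;
    System.out.printf("~%d MB buffered at most%n", worstCase >> 20);

    // With queued.max.message.chunks = 2000 the same math gives ~2 GB *per
    // queue*, which is the memory problem described above.
  }
}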
On Tue, Nov 04, 2014 at 09:05:44AM -0800, Bhavesh wrote:
Hi Kafka Dev Team,
It seems that the maximum buffer size is set to 2 by default. What is
the impact of changing this to 2000 or so? Will this improve consumer
thread performance, since more events will be buffered in memory? Or is
there any other recommendation for tuning high-level consumers?
Here is co