Thanks for the info. I will have to tune the memory. What else do you
recommend for the high-level consumer to get optimal performance and
drain the queues as quickly as possible with auto commit on?
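
For concreteness, a minimal sketch of the setup I have in mind (the
ZooKeeper address, group name, and tuning values here are placeholders,
not a verified recipe):

  import java.util.Properties
  import kafka.consumer.{Consumer, ConsumerConfig}

  val props = new Properties()
  props.put("zookeeper.connect", "localhost:2181")  // placeholder address
  props.put("group.id", "example-group")            // placeholder group
  props.put("auto.commit.enable", "true")           // auto commit on
  props.put("auto.commit.interval.ms", "1000")
  props.put("fetch.message.max.bytes", "1048576")   // 1 MB per fetch request
  props.put("queued.max.message.chunks", "2")       // the default in question
  props.put("num.consumer.fetchers", "2")           // extra fetchers to drain faster
  val connector = Consumer.create(new ConsumerConfig(props))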

Thanks,

Bhavesh

On Tue, Nov 4, 2014 at 9:59 AM, Joel Koshy <jjkosh...@gmail.com> wrote:

> We used to default to 10, but two should be sufficient. There is
> little reason to buffer more than that. If you increase it to 2000 you
> will most likely run into memory issues. E.g., if your fetch size is
> 1MB, you would enqueue 2000 chunks of up to 1MB each in each queue.
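>
> To make the arithmetic concrete, a back-of-the-envelope sketch (the
> stream count is hypothetical; the high-level consumer keeps roughly
> one such queue per consumer stream):
>
>   val fetchSizeBytes  = 1L * 1024 * 1024  // fetch.message.max.bytes = 1 MB
>   val queuedChunks    = 2000               // proposed queued.max.message.chunks
>   val consumerStreams = 4                  // hypothetical stream count
>   val worstCaseBytes  = fetchSizeBytes * queuedChunks * consumerStreams
>   // ~8 GB of heap for buffered chunks alone, vs. ~8 MB with the default of 2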
>
> On Tue, Nov 04, 2014 at 09:05:44AM -0800, Bhavesh Mistry wrote:
> > Hi Kafka Dev Team,
> >
> > It seems that the maximum buffer size defaults to 2. What is the
> > impact of changing this to 2000 or so? Would this improve consumer
> > thread performance, since more events would be buffered in memory?
> > Or is there any other recommendation for tuning high-level consumers?
> >
> > Here is the code from the Kafka trunk branch:
> >
> >   val MaxQueuedChunks = 2
> >
> >   /** max number of message chunks buffered for consumption,
> >    *  each chunk can be up to fetch.message.max.bytes */
> >   val queuedMaxMessages = props.getInt("queued.max.message.chunks",
> >                                        MaxQueuedChunks)
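> >
> > For what it's worth, a quick sketch to sanity-check the effective
> > value (ConsumerConfig also requires zookeeper.connect and group.id,
> > so placeholders are used here):
> >
> >   import java.util.Properties
> >   import kafka.consumer.ConsumerConfig
> >
> >   val props = new Properties()
> >   props.put("zookeeper.connect", "localhost:2181")  // placeholder
> >   props.put("group.id", "test-group")               // placeholder
> >   props.put("queued.max.message.chunks", "2000")    // override in question
> >   val config = new ConsumerConfig(props)
> >   println(config.queuedMaxMessages)                 // prints 2000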
> >
> >
> >
> > Thanks,
> >
> > Bhavesh
>
>
