Hey Bhavesh,

Here is what your configuration means:

buffer.memory=64MB          # don't use more than 64MB of memory
batch.size=1MB              # allocate a 1MB buffer for each partition with data
block.on.buffer.full=false  # immediately throw an exception if there is not
                            # enough memory to create a new buffer

Not sure what linger time you have set.

So what you see makes sense. If you have 1MB buffers and 32 partitions then
you will have approximately 32MB of memory in use (actually a bit more than
this since one buffer will be filling while another is sending). If you
have 128 partitions then you will try to use 128MB, and since you have
configured the producer to fail when you reach 64MB (rather than waiting
for memory to become available), that is what happens.
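The arithmetic above can be sketched as a quick sanity check. This is just a minimal illustration of the worst-case math (fitsInBuffer is a hypothetical helper, not part of the Kafka API):

```java
public class BufferMath {
    // Worst case, every partition with data holds a full batch.size buffer,
    // so roughly partitions * batch.size must fit within buffer.memory.
    static boolean fitsInBuffer(long bufferMemory, long batchSize, int partitions) {
        return (long) partitions * batchSize <= bufferMemory;
    }

    public static void main(String[] args) {
        long mib = 1024L * 1024L;
        // 32 partitions * 1MiB = 32MiB, fits in 64MiB
        System.out.println(fitsInBuffer(64 * mib, 1 * mib, 32));   // true
        // 128 partitions * 1MiB = 128MiB, exceeds 64MiB
        System.out.println(fitsInBuffer(64 * mib, 1 * mib, 128));  // false
    }
}
```

(In practice usage runs a bit over this estimate, since one buffer per partition can be filling while another is in flight.)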

I suspect you want a smaller batch size. More than 64KB is usually not
going to help throughput.
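For example, a configuration along these lines would keep 128 partitions well within the buffer (the values are illustrative, assuming your current 64MiB total):

```
buffer.memory=67108864      # 64MiB total, as before
batch.size=65536            # 64KiB per partition: 128 partitions worst-case
                            # use ~8MiB, leaving plenty of headroom
block.on.buffer.full=true   # optionally block for memory instead of throwing
</imports>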

-Jay

On Tue, Nov 4, 2014 at 11:39 AM, Bhavesh Mistry <mistry.p.bhav...@gmail.com>
wrote:

> Hi Kafka Dev,
>
> With the new producer, we are having to change the # of partitions for a
> topic, and we hit this BufferExhaustedException issue.
>
> Here is an example: we have set a 64MiB buffer, 32 partitions, and a 1MiB
> batch size. But when we increase the partitions to 128, it throws
> BufferExhaustedException right away (non-key-based messages). The buffer
> is allocated based on batch.size. There is a common need to automatically
> recalculate the batch size when partitions increase, because we have
> about ~5000 boxes and it is not practical to deploy code to all machines
> just to expand partitions for scalability purposes. What options are
> available when the new producer is running, partitions need to increase,
> and there is not enough buffer to allocate a batch for the additional
> partitions?
>
> buffer.memory=64MiB
> batch.size=1MiB
> block.on.buffer.full=false
>
>
> Thanks,
>
> Bhavesh
>
