Hi Jay,
Thanks for the response. I feel this needs to be documented as a limitation of
the new Java producer: the batch size vs. buffer size impact when increasing
the number of partitions. I agree that you get fine-grained control (which is
great), but you ultimately lose the ability to increase partitions for
scalability.
Bhavesh,
Wouldn't using the default batch size of 16k have avoided this problem
entirely? I think the best solution now is just to change the configuration.
What I am saying is that it is unlikely you will need to do this again; the
problem is just that 1MB partition batches are quite large, so you quickly
exhaust the 64MB buffer.
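For illustration (not from the original thread), the fix Jay suggests is purely a config change; here is a minimal sketch assuming the new producer's config keys, with a hypothetical class name:

    import java.util.Properties;

    public class DefaultBatchSizeSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Default batch.size is 16384 bytes. With 128 partitions the worst case
            // is 128 * 16 KiB = 2 MiB of open batches, far below a 64 MiB buffer.memory.
            props.put("batch.size", "16384");
            props.put("buffer.memory", Long.toString(64L * 1024 * 1024));
            System.out.println("Worst case with 128 partitions: "
                    + (128L * 16 * 1024) / (1024 * 1024) + " MiB of open batches");
        }
    }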
Hi Jay or Kafka Dev Team,
Any suggestions on how I can deal with this situation of expanding
partitions with the new Java producer for scalability (on the consumer side)?
Thanks,
Bhavesh
On Tue, Nov 4, 2014 at 7:08 PM, Bhavesh Mistry
wrote:
Also, to add to this: the old producer (Scala based) is not impacted by the
partition changes. So an important scalability feature is being taken away
if you do not plan for expansion from the beginning with the new Java
producer. So the new Java producer is taking away this critical feature
(unless planned for).
Thanks,
Bhavesh
Hi Jay,
Fundamentally, the problem is that the batch size is already configured and the
producers are running in production with that configuration. (The previous
values were just a sample.) How do we increase partitions for a topic when the
batch size times the partition count exceeds the configured buffer limit? Yes,
had we planned for the batch size from the beginning, this would not be an issue.
Hey Bhavesh,
No, there isn't such a setting. But what I am saying is that I don't think
you really need that feature. I think instead you can use a 32k batch size
with your 64M memory limit. This should mean you can have up to 2048
batches in flight. Assuming one batch in flight and one being added per
partition, that still leaves room for roughly 1024 partitions.
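As a rough check of that arithmetic (my own sketch; the numbers come from Jay's suggestion above, and the class name is illustrative):

    public class BatchHeadroomSketch {
        public static void main(String[] args) {
            long bufferMemory = 64L * 1024 * 1024; // buffer.memory = 64 MB pool
            long batchSize    = 32L * 1024;        // batch.size   = 32 KB per partition

            long totalBatches = bufferMemory / batchSize; // 2048 batches fit in the pool
            // With one batch filling and one in flight per partition, that is
            // enough headroom for roughly 1024 partitions.
            System.out.println(totalBatches + " batches, ~" + (totalBatches / 2) + " partitions");
        }
    }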
Hi Jay,
I agree and understood what you mentioned in the previous email. But when
you have 5000+ producers running in the cloud (I am sure LinkedIn has many
more and needs to increase partitions for scalability), then none of the
running producers will be able to send any data. So is there any feature or
setting that could help here?
Hey Bhavesh,
Here is what your configuration means:
buffer.memory=64MB # This means don't use more than 64MB of memory
batch.size=1MB # This means allocate a 1MB buffer for each partition with data
block.on.buffer.full=false # This means immediately throw an exception if there is not enough memory
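A minimal sketch of a producer with these settings, assuming the era's Java producer API (where block.on.buffer.full still exists; later client versions replace it with max.block.ms). The broker address, topic name, and class name are placeholders:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.BufferExhaustedException;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class BufferLimitSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
            props.put("key.serializer",
                    "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer",
                    "org.apache.kafka.common.serialization.StringSerializer");
            props.put("buffer.memory", "67108864");     // 64MB total pool for unsent records
            props.put("batch.size", "1048576");         // 1MB batch buffer per partition with data
            props.put("block.on.buffer.full", "false"); // fail fast instead of blocking

            KafkaProducer<String, String> producer = new KafkaProducer<>(props);
            try {
                producer.send(new ProducerRecord<>("my-topic", "key", "value"));
            } catch (BufferExhaustedException e) {
                // With many partitions and 1MB batches, the 64MB pool can run out
                // before batches are drained, and send() fails immediately.
                System.err.println("Buffer exhausted: " + e.getMessage());
            } finally {
                producer.close();
            }
        }
    }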
Hi Kafka Dev,
With the new producer, we are having to change the number of partitions for a
topic, and we face this BufferExhaustedException issue.
Here is an example: we have set 64MiB of buffer memory, 32 partitions, and a
1MiB batch size. But when we increase the partitions to 128, it throws
BufferExhaustedException right away.
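For illustration, a back-of-the-envelope check of why the jump from 32 to 128 partitions breaks this configuration (my own sketch, using only the numbers above; the class name is illustrative):

    public class PartitionExpansionSketch {
        public static void main(String[] args) {
            long bufferMemory = 64L * 1024 * 1024; // buffer.memory = 64MiB
            long batchSize    = 1L * 1024 * 1024;  // batch.size   = 1MiB per partition

            for (int partitions : new int[] {32, 128}) {
                // Worst case: one open batch for every partition with data.
                long worstCase = batchSize * partitions;
                System.out.printf("%3d partitions -> %3d MiB of batches (%s)%n",
                        partitions, worstCase / (1024 * 1024),
                        worstCase <= bufferMemory ? "fits in 64 MiB"
                                                  : "exceeds 64 MiB, BufferExhaustedException");
            }
        }
    }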