This could happen for a few reasons:

1) Your consumer.properties settings - if you are not committing offsets
automatically, you need to allow sufficient polling time and commit
synchronously or asynchronously (see the sketch after this list).
2) You are not actually consuming the messages the way you think you are.
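
Here is a minimal sketch of what I mean in point 1, assuming a hypothetical
topic "events", a consumer group "debug-group" and a broker on
localhost:9092 - adjust for your own setup:

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ManualCommitConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "debug-group");
        props.put("enable.auto.commit", "false");  // we commit offsets ourselves
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("events"));
            while (true) {
                // give poll() enough time to actually fetch records
                ConsumerRecords<String, String> records = consumer.poll(1000);
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset=%d key=%s value=%s%n",
                            record.offset(), record.key(), record.value());
                }
                consumer.commitSync();  // or commitAsync() if latency matters
            }
        }
    }
}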

I'm not sure how you arrived at that buffer.memory setting - it doesn't
sound right, so could you kindly check it again? Also, could you please
provide a snippet of your consumer and how you are reading from the stream?

By default, the buffer is about 10% of message.max.bytes. Perhaps you are
looking to tune the producer using the following:

batch.size
message.max.bytes
send.buffer.bytes
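
For example, here is a minimal producer sketch with some of those knobs set
(the values are illustrative only, and note that message.max.bytes is a
broker/topic-level setting rather than a producer property). I am assuming a
broker on localhost:9092 and a hypothetical topic "events":

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class TunedProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("batch.size", "16384");          // bytes per partition batch
        props.put("linger.ms", "500");             // wait up to 500 ms to fill a batch
        props.put("send.buffer.bytes", "131072");  // TCP send buffer size
        props.put("buffer.memory", "33554432");    // total memory for unsent records

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("events", "key", "value"));
        }
    }
}
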
Cloudera and Confluent.io have some nice articles on Kafka. Have a read
through this:
https://www.cloudera.com/documentation/kafka/latest/topics/kafka_performance.html



On 23 May 2017 at 20:09, Milind Vaidya <kava...@gmail.com> wrote:

> I have set the producer properties as follows (0.10.0.0):
>
> "linger.ms" : "500",
>
> "batch.size" : "1000",
>
> "buffer.memory" : "10000",
>
> "send.buffer.bytes" : "512000"
>
> and default
>
> max.request.size = 1048576
>
>
> If records are sent faster than they can be delivered, they will be
> buffered. Now, with buffer.memory set to 10000 bytes, what will happen if
> a record is larger than this, say 11629 bytes in size? What is the
> minimum value of buffer.memory in terms of the other params? Should it be
> at least equal to send.buffer.bytes or max.request.size, or is it better
> left at the default, which is 33554432?
>
> I am trying to debug some events not reaching the consumer, so I am
> wondering if this could be the reason.
>
