Just to add more info:

Our message size is ~1.5 MB, so effectively there was no batching, since our
batch.size is only 200 bytes (each record gets its own buffer; see the sketch below).
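For reference, this is my rough mental model of how the producer sizes each
batch buffer (a simplified sketch of the idea, not the actual Kafka source):

  // Simplified sketch (assumption, not Kafka's actual RecordAccumulator
  // code): the buffer allocated for a batch is the larger of batch.size
  // and the serialized record size, so a ~1.5 MB record with
  // batch.size=200 gets a dedicated ~1.5 MB buffer of its own.
  public class BatchSizeSketch {
      static int batchBufferSize(int batchSize, int recordSize) {
          return Math.max(batchSize, recordSize);
      }
      public static void main(String[] args) {
          System.out.println(batchBufferSize(200, 1_500_000)); // prints 1500000
      }
  }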

Is there any case where the RecordAccumulator can grow beyond the configured
buffer.memory?
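My (hypothetical, simplified) picture of the buffer.memory accounting is the
sketch below; if the cap is only enforced at allocate() time, any path that
keeps batch buffers alive without deallocating them (e.g. batches stuck in
retry) could still make live memory in a heap dump exceed the limit:

  import java.util.concurrent.atomic.AtomicLong;

  // Simplified accounting sketch (assumption, not Kafka's real BufferPool):
  // buffer.memory caps usage only if every allocate() is eventually matched
  // by a deallocate(); buffers retained elsewhere still show up in a heap dump.
  public class PoolSketch {
      private final long limit;
      private final AtomicLong used = new AtomicLong();

      PoolSketch(long limit) { this.limit = limit; }

      byte[] allocate(int size) {
          if (used.addAndGet(size) > limit) {
              used.addAndGet(-size);  // roll back the reservation and reject
              throw new IllegalStateException("buffer.memory exceeded");
          }
          return new byte[size];
      }

      void deallocate(int size) { used.addAndGet(-size); }
  }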

Regards,
Liju John

On Wed, Aug 5, 2015 at 12:41 PM, Liju John <lijubj...@gmail.com> wrote:

> Hi,
>
> We are experiencing an issue where the RecordAccumulator size is growing
> beyond the configured buffer.memory.
>
> Below is the sequence of events when this issue occurred:
>
> 1. One broker out of 10 was brought down at 15:10.
> 2. Until 17:11 (same date) the Kafka producer instance was in a healthy
> state.
> 3. The same broker was brought back up at ~17:11 (same date).
> 4. Observed UnknownTopicOrPartitionException for some messages.
> 5. A heap dump at 17:14 shows the RecordAccumulator size at ~2.9 GB, way
> above the configured buffer.memory = 120 MB.
>
>
> Other details:
>
> Kafka version = 0.8.2.1
>
> Below are the Kafka producer properties we configure (see the wiring
> sketch after the list):
>
> kafka.bootstrap.servers=qa-vip-url
> kafka.acks=1
> kafka.buffer.memory=125829120
> kafka.compression.type=none
> kafka.retries=3
> kafka.batch.size=200
> kafka.client.id=service-instance-id
> kafka.linger.ms=0
> kafka.max.request.size=5242880
> kafka.receive.buffer.bytes=32768
> kafka.send.buffer.bytes=131072
> kafka.timeout.ms=3000
> kafka.block.on.buffer.full=false
> kafka.metadata.fetch.timeout.ms=60000
> kafka.metadata.max.age.ms=300000
> kafka.reconnect.backoff.ms=30
> kafka.retry.backoff.ms=3000
>
> kafka.key.serializer=org.apache.kafka.common.serialization.ByteArraySerializer
>
> kafka.value.serializer=org.apache.kafka.common.serialization.ByteArraySerializer
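>
> For completeness, this is roughly how we build the producer from these
> properties (a sketch only: our config loader strips the kafka. prefix,
> and just a few of the keys above are shown):
>
>   import java.util.Properties;
>   import org.apache.kafka.clients.producer.KafkaProducer;
>
>   public class ProducerWiring {
>       public static void main(String[] args) {
>           Properties props = new Properties();
>           props.put("bootstrap.servers", "qa-vip-url");
>           props.put("acks", "1");
>           props.put("buffer.memory", "125829120");  // 120 MB
>           props.put("batch.size", "200");           // bytes, not record count
>           props.put("block.on.buffer.full", "false");
>           props.put("key.serializer",
>               "org.apache.kafka.common.serialization.ByteArraySerializer");
>           props.put("value.serializer",
>               "org.apache.kafka.common.serialization.ByteArraySerializer");
>           KafkaProducer<byte[], byte[]> producer =
>               new KafkaProducer<byte[], byte[]>(props);
>           producer.close();
>       }
>   }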
>
>
> Please help in identifying the root cause of this apparent memory leak in
> the RecordAccumulator.
>
> Attached is the heap dump overview for reference.
>
> Let me know if you need any other information.
>
> Regards,
> Liju John
>