[ https://issues.apache.org/jira/browse/KAFKA-4741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15857996#comment-15857996 ]
ASF GitHub Bot commented on KAFKA-4741:
---------------------------------------

Github user asfgit closed the pull request at:

    https://github.com/apache/kafka/pull/2509

> Memory leak in RecordAccumulator.append
> ---------------------------------------
>
>                 Key: KAFKA-4741
>                 URL: https://issues.apache.org/jira/browse/KAFKA-4741
>             Project: Kafka
>          Issue Type: Bug
>          Components: clients
>            Reporter: Satish Duggana
>             Fix For: 0.10.3.0
>
>
> RecordAccumulator.append allocates a `ByteBuffer` from the free memory pool. That buffer must be deallocated whenever the method exits with an exception; otherwise the memory is leaked.
> I added todo comments in the code below marking the paths where the buffer should be deallocated.
> {code:title=RecordAccumulator.java|borderStyle=solid}
> ByteBuffer buffer = free.allocate(size, maxTimeToBlock);
> synchronized (dq) {
>     // Need to check if producer is closed again after grabbing the dequeue lock.
>     if (closed)
>         // todo: buffer should be deallocated here
>         throw new IllegalStateException("Cannot send after the producer is closed.");
>     // todo: buffer should also be deallocated if tryAppend throws an exception
>     RecordAppendResult appendResult = tryAppend(timestamp, key, value, callback, dq);
>     if (appendResult != null) {
>         // Somebody else found us a batch, return the one we waited for! Hopefully this doesn't happen often...
>         free.deallocate(buffer);
>         return appendResult;
>     }
> {code}
> I will raise a PR for this soon.
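
Both paths marked with the todo comments above exit the method while the allocated buffer is still held, so the memory never returns to the pool. One common way to close this kind of leak is to wrap the body in try/finally, deallocate the buffer in the finally block, and null out the local reference once ownership has been handed off. The sketch below illustrates that pattern in isolation; it is a minimal, hypothetical example, and `BufferPool`, `append`, and the field names are simplified stand-ins rather than the real Kafka classes or the change made in the linked PR.

{code:title=BufferReleaseSketch.java|borderStyle=solid}
import java.nio.ByteBuffer;
import java.util.ArrayDeque;
import java.util.Deque;

// Standalone sketch of the release-on-every-exit-path pattern.
// BufferPool and the field names are simplified stand-ins, not Kafka classes.
public class BufferReleaseSketch {

    // Minimal stand-in for the producer's free-memory pool.
    static class BufferPool {
        private final Deque<ByteBuffer> free = new ArrayDeque<>();

        ByteBuffer allocate(int size) {
            ByteBuffer cached = free.poll();
            return (cached != null && cached.capacity() >= size)
                    ? cached
                    : ByteBuffer.allocate(size);
        }

        void deallocate(ByteBuffer buffer) {
            buffer.clear();
            free.offer(buffer);
        }

        int available() {
            return free.size();
        }
    }

    private final BufferPool pool = new BufferPool();
    private final Deque<ByteBuffer> dq = new ArrayDeque<>();
    private volatile boolean closed = false;

    // Allocates a buffer and guarantees it goes back to the pool on every
    // exit path unless ownership was handed off to the queue.
    void append(int size) {
        ByteBuffer buffer = pool.allocate(size);
        try {
            synchronized (dq) {
                if (closed)
                    throw new IllegalStateException("Cannot send after the producer is closed.");
                dq.offer(buffer);   // hand the buffer off to a batch in the queue
                buffer = null;      // ownership transferred; finally must not free it
            }
        } finally {
            if (buffer != null)
                pool.deallocate(buffer);   // covers the exceptional exits above
        }
    }

    public static void main(String[] args) {
        BufferReleaseSketch sketch = new BufferReleaseSketch();
        sketch.closed = true;
        try {
            sketch.append(1024);
        } catch (IllegalStateException expected) {
            // Even though append() threw, the buffer went back to the pool.
            System.out.println("Buffers back in pool: " + sketch.pool.available());
        }
    }
}
{code}

The key detail is that `buffer = null` marks the successful hand-off, so the finally block deallocates only when no other owner took the buffer, which covers the closed-producer check and any exception thrown before the hand-off.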