[ 
https://issues.apache.org/jira/browse/KAFKA-2045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14383344#comment-14383344
 ] 

Jay Kreps commented on KAFKA-2045:
----------------------------------

Hey [~rzidane], one statically allocated ByteBuffer per node is theoretically 
sufficient once we have some limit on the response size. It won't be a trivial 
change though, as that reuse will have to go through the NetworkClient and 
Selector layers, so it will require careful design if we attempt it.
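
The allocation pattern being described could be sketched roughly as follows. This is just an illustration of "one statically allocated ByteBuffer per node with a response size limit", using only java.nio; the class and method names are invented for this sketch and are not part of the NetworkClient/Selector code.

```java
import java.nio.ByteBuffer;
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: one reusable receive buffer per broker node,
// capped at a fixed maximum response size. Buffers are allocated
// once on first use and then reused for every subsequent response.
class PerNodeBufferPool {
    private final int maxResponseSize;
    private final Map<Integer, ByteBuffer> buffers = new HashMap<>();

    PerNodeBufferPool(int maxResponseSize) {
        this.maxResponseSize = maxResponseSize;
    }

    // Returns the node's buffer, cleared and ready for the next response.
    ByteBuffer bufferFor(int nodeId) {
        ByteBuffer buf = buffers.computeIfAbsent(
            nodeId, id -> ByteBuffer.allocate(maxResponseSize));
        buf.clear();
        return buf;
    }
}
```

The point of the sketch is that total memory is bounded by (number of nodes) x (max response size), which is only possible once a response size limit exists.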

Currently there are two main memory uses: the large ByteBuffer allocations we 
make for the responses, and the many, many ConsumerRecord instances we parse 
them into, which are stored internally until we can hand them out to the user. 
Not sure which is worse, the big chunks or the umpteen little records.
 
I suspect the ConsumerRecord allocation would be addressed by KAFKA-1895, but 
that would actually complicate ByteBuffer reuse since we would then be handing 
those buffers out to the user. You could potentially implement both, but you 
would need to change consumer.poll to allow passing the ConsumerRecords 
instance back in for reuse once you are done with it.
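
The poll-with-reuse idea might look something like the following. This is a hypothetical illustration only; the types and signatures here are invented to show the shape of the API change, not the real consumer API.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical container whose backing storage can be recycled
// across poll() calls, standing in for ConsumerRecords.
class ReusableRecords {
    final List<String> values = new ArrayList<>();
}

// Hypothetical consumer sketch: the caller passes a finished
// ReusableRecords back into poll() so it can be refilled instead
// of allocating a fresh instance on every call.
class SketchConsumer {
    ReusableRecords poll(long timeoutMs, ReusableRecords recycled) {
        ReusableRecords out = (recycled != null) ? recycled : new ReusableRecords();
        out.values.clear();
        out.values.add("record"); // stand-in for data parsed from a fetch response
        return out;
    }
}
```

The trade-off is exactly the one noted above: once buffers back the records handed to the user, the consumer can only reuse them after the user explicitly gives them back.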

> Memory Management on the consumer
> ---------------------------------
>
>                 Key: KAFKA-2045
>                 URL: https://issues.apache.org/jira/browse/KAFKA-2045
>             Project: Kafka
>          Issue Type: Sub-task
>            Reporter: Guozhang Wang
>
> We need to add the memory management on the new consumer like we did in the 
> new producer. This would probably include:
> 1. byte buffer re-usage for fetch response partition data.
> 2. byte buffer re-usage for on-the-fly de-compression.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)