[ https://issues.apache.org/jira/browse/KAFKA-2045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14384544#comment-14384544 ]

Rajiv Kurian commented on KAFKA-2045:
-------------------------------------

[~jkreps] the simple pool of ByteBuffers definitely sounds like an easier thing 
to start out with. Like you said, a nice thing a single buffer offers is 
absolute memory bounds, but I am sure there are other ways to tackle that. I 
could just have a setting for the highest number of concurrent requests, which 
would also be the highest number of concurrent buffers per broker. We could 
then create buffers lazily (up to that max) and rotate between them in order, 
so with 3 buffers we would go 0 -> 1 -> 2 -> 0 and so on. The consumer would 
still have an index into this pool, as would the network producer. The network 
producer would not be able to re-use a response buffer that is still being 
iterated upon, so consumption of a response cannot be delayed forever without 
causing poll calls to run out of buffers and just return empty iterators.
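
To make the rotation concrete, here is a rough Java sketch of the kind of pool 
I have in mind. The class and method names (RotatingBufferPool, acquire, 
release) are purely illustrative and not anything in the Kafka code base; the 
sizing and ownership details would obviously need more thought in a real 
prototype.

import java.nio.ByteBuffer;

/**
 * Illustrative sketch only: a fixed-size pool of lazily allocated ByteBuffers,
 * bounded by the max number of concurrent requests per broker, handed out in
 * round-robin order.
 */
public class RotatingBufferPool {

    private final ByteBuffer[] buffers;
    private final boolean[] inUse;   // true while a response is still being iterated over
    private final int bufferSize;
    private int next = 0;            // index of the next buffer to try

    public RotatingBufferPool(int maxConcurrentRequests, int bufferSize) {
        this.buffers = new ByteBuffer[maxConcurrentRequests];
        this.inUse = new boolean[maxConcurrentRequests];
        this.bufferSize = bufferSize;
    }

    /**
     * Returns the next free buffer, creating it lazily, or null if every
     * buffer is still held by an un-consumed response. The null case is
     * where poll would just return an empty iterator.
     */
    public ByteBuffer acquire() {
        for (int i = 0; i < buffers.length; i++) {
            int idx = (next + i) % buffers.length;
            if (!inUse[idx]) {
                if (buffers[idx] == null) {
                    buffers[idx] = ByteBuffer.allocate(bufferSize); // lazy creation, up to the max
                }
                buffers[idx].clear();
                inUse[idx] = true;
                next = (idx + 1) % buffers.length; // rotate 0 -> 1 -> 2 -> 0 ...
                return buffers[idx];
            }
        }
        return null; // all buffers still being iterated on
    }

    /** Called once the consumer has finished iterating over the response in this buffer. */
    public void release(ByteBuffer buf) {
        for (int i = 0; i < buffers.length; i++) {
            if (buffers[i] == buf) {
                inUse[i] = false;
                return;
            }
        }
    }
}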

Your proposed API for ConsumerRecords reuse sounds fine.

This gives me enough to work on a prototype, which I hope I can do soon with 
permission from the bosses.

> Memory Management on the consumer
> ---------------------------------
>
>                 Key: KAFKA-2045
>                 URL: https://issues.apache.org/jira/browse/KAFKA-2045
>             Project: Kafka
>          Issue Type: Sub-task
>            Reporter: Guozhang Wang
>
> We need to add memory management to the new consumer, as we did in the new 
> producer. This would probably include:
> 1. byte buffer re-use for fetch response partition data.
> 2. byte buffer re-use for on-the-fly de-compression.



