[ https://issues.apache.org/jira/browse/KAFKA-671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13566188#comment-13566188 ]

Jay Kreps commented on KAFKA-671:
---------------------------------

Took a look at this. Looks reasonable.

Other atrocities have occurred inside RequestQueue, but they aren't from this 
patch. Re-opened KAFKA-683. :-)

For maps it is nicer to import scala.collection.mutable and then refer to 
mutable.Map rather than spelling out the fully qualified scala.collection.mutable.Map.
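
A minimal sketch of that import convention (the object and map below are purely 
illustrative, not code from the patch):

{code}
import scala.collection.mutable

object ImportStyleExample {
  // With the package-level import, each use site reads mutable.Map, which keeps
  // the mutability visible without repeating the fully qualified name.
  private val watchedKeys: mutable.Map[String, Int] = mutable.Map.empty

  def touch(key: String): Unit =
    watchedKeys(key) = watchedKeys.getOrElse(key, 0) + 1
}
{code}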

I think the question is why we need to hang onto the ProduceRequest object at 
all. We are doing work to null it out, but why can't we just copy the one or 
two fields we need from it into the delayed produce? If we do that, won't the 
ProduceRequest fall out of scope after handleProduce and get GC'd? Is the 
root cause of this the fact that we moved deserialization into the network 
thread and shoved the API object into the request?
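
A rough sketch of that idea, under assumed names (DelayedProduceSketch, 
PartitionStatus, onReplicaProgress are hypothetical, not the classes in the 
patch): the delayed request copies out only the ack settings and per-partition 
offsets it needs, so the full ProduceRequest, and the message payloads it 
carries, can be collected once handleProduce returns.

{code}
import scala.collection.mutable

// Hypothetical per-partition bookkeeping held by the delayed request.
case class PartitionStatus(requiredOffset: Long, var acksPending: Boolean = true)

// Sketch: keep only the fields needed to decide when the request is satisfied,
// rather than a reference to the entire ProduceRequest.
class DelayedProduceSketch(requiredAcks: Short,
                           ackTimeoutMs: Int,
                           partitionStatus: mutable.Map[(String, Int), PartitionStatus]) {

  // Called as follower fetches catch up; marks a partition satisfied once the
  // follower has reached the offset this produce needs acknowledged.
  def onReplicaProgress(topic: String, partition: Int, followerOffset: Long): Unit =
    partitionStatus.get((topic, partition)).foreach { status =>
      if (followerOffset >= status.requiredOffset)
        status.acksPending = false
    }

  def isSatisfied: Boolean = partitionStatus.values.forall(s => !s.acksPending)
}
{code}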
                
> DelayedProduce requests should not hold full producer request data
> ------------------------------------------------------------------
>
>                 Key: KAFKA-671
>                 URL: https://issues.apache.org/jira/browse/KAFKA-671
>             Project: Kafka
>          Issue Type: Bug
>    Affects Versions: 0.8
>            Reporter: Joel Koshy
>            Assignee: Sriram Subramanian
>            Priority: Blocker
>              Labels: bugs, p1
>             Fix For: 0.8.1
>
>         Attachments: outOfMemFix-v1.patch, outOfMemFix-v2.patch, 
> outOfMemFix-v2-rebase.patch, outOfMemFix-v3.patch
>
>
> Per summary, this leads to unnecessary memory usage.

