[ https://issues.apache.org/jira/browse/KAFKA-703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14122043#comment-14122043 ]
Guozhang Wang commented on KAFKA-703:
-------------------------------------

This problem is resolved in the purgatory / API redesign: KAFKA-1583. Closing now.

> A fetch request in Fetch Purgatory can double count the bytes from the same
> delayed produce request
> ---------------------------------------------------------------------------
>
>                 Key: KAFKA-703
>                 URL: https://issues.apache.org/jira/browse/KAFKA-703
>             Project: Kafka
>          Issue Type: Bug
>          Components: purgatory
>    Affects Versions: 0.8.1
>            Reporter: Sriram Subramanian
>            Assignee: Sriram Subramanian
>            Priority: Blocker
>             Fix For: 0.8.2
>
>
> When a producer request is handled, the fetch purgatory is checked to ensure
> any pending fetch requests are satisfied. When the produce request itself is
> later satisfied, we do the check again, and if the same fetch request is
> still in the fetch purgatory it ends up double counting the bytes received.
> Possible solutions:
> 1. In the delayed produce request case, do the check only after the produce
> request is satisfied. This could delay the fetch request from being
> satisfied.
> 2. Remove the fetch request's dependency on the produce request and just
> look at the last logical log offset (which should mostly be cached). This
> would require replica.fetch.min.bytes to be a number of messages rather than
> bytes. It also helps KAFKA-671, since we would no longer need to pass the
> ProduceRequest object to the producer purgatory and hence not consume any
> memory.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
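The double-counting mechanism described in the issue can be sketched in a few lines. This is a simplified, hypothetical model (the class and method names are not actual Kafka code): a delayed fetch accumulates the byte counts it is told about on each purgatory check, so running the check twice for the same produce request counts the same bytes twice and can satisfy the fetch prematurely.

```python
# Hypothetical sketch of the KAFKA-703 double-count, not real Kafka code.

class DelayedFetch:
    def __init__(self, min_bytes):
        self.min_bytes = min_bytes  # analogous to replica.fetch.min.bytes
        self.accumulated = 0        # bytes counted toward min_bytes so far
        self.satisfied = False

    def try_complete(self, new_bytes):
        # Buggy behavior: blindly add whatever byte count the caller reports,
        # with no check for whether these bytes were already counted.
        self.accumulated += new_bytes
        if self.accumulated >= self.min_bytes:
            self.satisfied = True
        return self.satisfied


fetch = DelayedFetch(min_bytes=100)
produce_bytes = 60  # one produce request appends 60 bytes to the log

# Check 1: purgatory is checked when the produce request is first handled.
fetch.try_complete(produce_bytes)

# Check 2: the check runs again when the *same* delayed produce request is
# satisfied; the still-pending fetch sees the same 60 bytes a second time...
fetch.try_complete(produce_bytes)

# ...and completes at 120 "accumulated" bytes even though only 60 bytes
# were actually written.
print(fetch.accumulated, fetch.satisfied)
```

Solution 1 in the issue avoids the first check; solution 2 sidesteps byte accounting entirely by comparing log offsets instead.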