[ https://issues.apache.org/jira/browse/KAFKA-727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14257271#comment-14257271 ]
Neha Narkhede commented on KAFKA-727:
-------------------------------------

[~lokeshbirla] Pasting [~junrao]'s comment here again:

bq. Is there an easy way to reproduce this issue?

What all of us are looking for is a set of steps (a reproducible test case) that we can run on trunk to see the same problem and errors that you do.

> broker can still expose uncommitted data to a consumer
> ------------------------------------------------------
>
>                 Key: KAFKA-727
>                 URL: https://issues.apache.org/jira/browse/KAFKA-727
>             Project: Kafka
>          Issue Type: Bug
>          Components: core
>    Affects Versions: 0.8.0
>            Reporter: Jun Rao
>            Assignee: Jay Kreps
>            Priority: Blocker
>              Labels: p1
>         Attachments: KAFKA-727-v1.patch
>
>
> Even after kafka-698 is fixed, we still see consumer clients occasionally seeing uncommitted data. The following is how this can happen.
> 1. In Log.read(), we pass in startOffset < HW and maxOffset = HW.
> 2. Then we call LogSegment.read(), in which we call translateOffset on the maxOffset. The offset doesn't exist and translateOffset returns null.
> 3. Continuing in LogSegment.read(), we then call messageSet.sizeInBytes() to fetch and return the data.
> What can happen is that between step 2 and step 3, a new message is appended to the log and is not yet committed. Now we have exposed uncommitted data to the client.
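For illustration only, here is a minimal Scala sketch of the window described in steps 2 and 3 above. Only Log.read()/LogSegment.read(), translateOffset, and messageSet.sizeInBytes() come from the description; every other name below (SketchIndex, SketchMessageSet, readRacy, readSnapshot) is invented for the example, and the snapshot-the-size-first variant is just one way such a window can be closed, not necessarily what KAFKA-727-v1.patch does.

{code:scala}
// Minimal sketch of the race described in the issue. All types and method
// names here are illustrative assumptions, not the actual Kafka 0.8
// LogSegment / FileMessageSet API.

case class OffsetPosition(offset: Long, position: Int)

trait SketchIndex {
  // greatest indexed entry whose offset is <= the given offset
  def lookup(offset: Long): OffsetPosition
  // None if the offset lies past the last message currently in the segment
  def translateOffset(offset: Long): Option[OffsetPosition]
}

trait SketchMessageSet {
  // grows as new messages are appended to the segment
  def sizeInBytes(): Int
  def read(position: Int, length: Int): SketchMessageSet
}

class SegmentSketch(messageSet: SketchMessageSet, index: SketchIndex) {

  // Racy read, mirroring steps 1-3: when translateOffset(maxOffset) returns
  // None, the end position is taken from sizeInBytes() *afterwards*, so bytes
  // appended in between (still above the high watermark) can be returned.
  def readRacy(startOffset: Long, maxOffset: Long, maxSize: Int): SketchMessageSet = {
    val startPosition = index.lookup(startOffset).position
    val endPosition = index.translateOffset(maxOffset) match {
      case Some(op) => op.position
      case None     => messageSet.sizeInBytes() // race window: may already include uncommitted bytes
    }
    messageSet.read(startPosition, math.min(endPosition - startPosition, maxSize))
  }

  // One way to close the window: snapshot the segment size once, before any
  // offset translation, and never consult the growing message set again.
  def readSnapshot(startOffset: Long, maxOffset: Long, maxSize: Int): SketchMessageSet = {
    val sizeSnapshot = messageSet.sizeInBytes()
    val startPosition = index.lookup(startOffset).position
    val endPosition = index.translateOffset(maxOffset) match {
      case Some(op) => op.position
      case None     => sizeSnapshot // anything appended after the snapshot is ignored
    }
    messageSet.read(startPosition, math.min(endPosition - startPosition, maxSize))
  }
}
{code}

The difference is only where the size is sampled: readSnapshot fixes the read boundary before the offset translation, so a message appended between the translation and the read can never be included in the returned data.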