[ https://issues.apache.org/jira/browse/KAFKA-2500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15004639#comment-15004639 ]
James Cheng commented on KAFKA-2500:
------------------------------------

[~hachikuji] Thanks for the status update and the patch! Too bad it didn't make it in, but I understand that we want to get it right before making the change. We'll look and see if we want to apply the patch.

From looking at the patch, it looks like it only affects the client libraries. So we could use a Kafka client (with this patch applied) against a released 0.9.0 Kafka broker?

> Make logEndOffset available in the 0.8.3 Consumer
> -------------------------------------------------
>
>                 Key: KAFKA-2500
>                 URL: https://issues.apache.org/jira/browse/KAFKA-2500
>             Project: Kafka
>          Issue Type: Sub-task
>          Components: consumer
>    Affects Versions: 0.9.0.0
>            Reporter: Will Funnell
>            Assignee: Jason Gustafson
>            Priority: Critical
>             Fix For: 0.9.0.0
>
>
> Originally created in the old consumer here: https://issues.apache.org/jira/browse/KAFKA-1977
>
> The requirement is to create a snapshot from the Kafka topic but NOT do continual reads after that point. For example you might be creating a backup of the data to a file.
>
> This ticket covers the addition of the functionality to the new consumer.
>
> In order to achieve that, a recommended solution by Joel Koshy and Jay Kreps was to expose the high watermark, as maxEndOffset, from the FetchResponse object through to each MessageAndMetadata object in order to be aware when the consumer has reached the end of each partition.
>
> The submitted patch achieves this by adding the maxEndOffset to the PartitionTopicInfo, which is updated when a new message arrives in the ConsumerFetcherThread and then exposed in MessageAndMetadata.
>
> See here for discussion: http://search-hadoop.com/m/4TaT4TpJy71

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
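Editor's note: for readers landing on this ticket, the snapshot-then-stop use case described above can be approximated today without the patch by capturing each partition's end offset up front and consuming until every partition reaches it. Below is a minimal sketch of that pattern; it assumes a client with the Collection-based seek APIs and Duration-based poll (2.0 or later), and the broker address, topic name, and group id are placeholders, not values from this ticket.

```java
import java.time.Duration;
import java.util.*;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.PartitionInfo;
import org.apache.kafka.common.TopicPartition;

public class SnapshotConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // placeholder broker address
        props.put("group.id", "snapshot-example");          // placeholder group id
        props.put("enable.auto.commit", "false");
        props.put("key.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");

        String topic = "events";                             // placeholder topic name

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // Assign all partitions of the topic explicitly (no group rebalancing).
            List<TopicPartition> partitions = new ArrayList<>();
            for (PartitionInfo info : consumer.partitionsFor(topic)) {
                partitions.add(new TopicPartition(topic, info.partition()));
            }
            consumer.assign(partitions);

            // Capture the current end offset of every partition, then rewind to the beginning.
            consumer.seekToEnd(partitions);
            Map<TopicPartition, Long> endOffsets = new HashMap<>();
            for (TopicPartition tp : partitions) {
                endOffsets.put(tp, consumer.position(tp));
            }
            consumer.seekToBeginning(partitions);

            // Consume until every partition has caught up to the end offset captured above.
            Set<TopicPartition> remaining = new HashSet<>(partitions);
            while (!remaining.isEmpty()) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    // Write the record to the backup file / snapshot here.
                }
                for (Iterator<TopicPartition> it = remaining.iterator(); it.hasNext(); ) {
                    TopicPartition tp = it.next();
                    if (consumer.position(tp) >= endOffsets.get(tp)) {
                        it.remove();                         // this partition's snapshot is complete
                    }
                }
            }
        }
    }
}
```

If the high-watermark metadata discussed in this ticket were exposed on each fetched record, the same loop could stop based on that per-record metadata instead of offsets captured before consumption, which would also track a log that keeps growing during the snapshot.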