[
https://issues.apache.org/jira/browse/KAFKA-2758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16240796#comment-16240796
]
Jeff Widman edited comment on KAFKA-2758 at 11/6/17 7:59 PM:
-------------------------------------------------------------
item 1 would be significantly more useful if KIP-211
([https://cwiki.apache.org/confluence/display/KAFKA/KIP-211%3A+Revise+Expiration+Semantics+of+Consumer+Group+Offsets])
gets accepted. That would remove the risk of accidentally expiring a consumer's
offsets.
> Improve Offset Commit Behavior
> ------------------------------
>
> Key: KAFKA-2758
> URL: https://issues.apache.org/jira/browse/KAFKA-2758
> Project: Kafka
> Issue Type: Improvement
> Components: consumer
> Reporter: Guozhang Wang
> Labels: newbiee, reliability
>
> There are two scenarios of offset committing that we can improve:
> 1) We can filter out the partitions whose committed offset is equal to the
> consumed offset, meaning there are no newly consumed messages from this
> partition and hence we do not need to include this partition in the commit
> request.
> 2) We can make a commit request right after resetting to a fetch / consume
> position, either according to the reset policy (e.g. on consumer startup,
> or when handling an out-of-range offset, etc.) or through {code}seek{code},
> so that if the consumer fails right after these events, upon recovery it can
> restart from the reset position instead of resetting again: otherwise this
> can lead, for example, to data loss if we use "largest" as the reset policy
> while new messages are arriving on the fetched partitions.
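Item 1 above can be sketched as a pre-commit filter. This is a minimal illustration, not the consumer's actual internals: {{TopicPartition}} is replaced with a plain {{String}} key, and the class and method names are hypothetical.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of item 1: drop partitions whose consumed offset
// already matches the last committed offset, so the commit request only
// carries partitions with new progress. String stands in for TopicPartition.
public class CommitFilter {
    public static Map<String, Long> filterUnchanged(Map<String, Long> consumed,
                                                    Map<String, Long> committed) {
        Map<String, Long> toCommit = new HashMap<>();
        for (Map.Entry<String, Long> e : consumed.entrySet()) {
            Long already = committed.get(e.getKey());
            // Include the partition only if its offset actually advanced.
            if (already == null || !already.equals(e.getValue())) {
                toCommit.put(e.getKey(), e.getValue());
            }
        }
        return toCommit;
    }
}
```

Partitions with no progress contribute nothing to the commit request, which shrinks the request and avoids needlessly refreshing their offsets.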
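Item 2 above amounts to committing the position at the moment it is reset, so a crash shortly afterwards recovers from the reset point rather than triggering a second reset. A minimal sketch, assuming hypothetical names ({{ResetCommit}}, {{seekAndCommit}}) and a {{String}} key in place of {{TopicPartition}}:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of item 2: whenever the fetch position is reset
// (reset policy or an explicit seek), record that position as committed
// immediately, instead of waiting for the next periodic commit.
public class ResetCommit {
    private final Map<String, Long> position = new HashMap<>();
    private final Map<String, Long> committed = new HashMap<>();

    // Reset the fetch position and commit it in the same step.
    public void seekAndCommit(String partition, long offset) {
        position.put(partition, offset);
        committed.put(partition, offset); // commit right after the reset
    }

    public Long committedOffset(String partition) {
        return committed.get(partition);
    }
}
```

With this, a consumer that dies between the reset and its next periodic commit restarts from the reset position, rather than re-running the reset policy (which, with "largest", could skip over messages that arrived in the meantime).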
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)