[ https://issues.apache.org/jira/browse/KAFKA-2478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14715459#comment-14715459 ]

Jason Gustafson commented on KAFKA-2478:
----------------------------------------

[~devstr] Not that I'm aware of. You can control the maximum fetch size in 
configuration, but that only affects individual fetches, and a single poll() can 
send out many of them. Do you want to submit a patch for this?
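
For context, the configuration Jason refers to is presumably the new consumer's 
max.partition.fetch.bytes property (property name and default assumed here; 
check your client version). A minimal sketch of setting it, with the caveat 
from above noted inline:

    Properties props = new Properties();
    // Caps the data returned per partition in a single fetch request
    // (assumed 1 MB default). poll() can aggregate the results of many
    // such fetches, so this does NOT bound how many records one poll()
    // call returns.
    props.put("max.partition.fetch.bytes", "1048576");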

> KafkaConsumer javadoc example seems wrong
> -----------------------------------------
>
>                 Key: KAFKA-2478
>                 URL: https://issues.apache.org/jira/browse/KAFKA-2478
>             Project: Kafka
>          Issue Type: Bug
>          Components: consumer
>    Affects Versions: 0.8.3
>            Reporter: Dmitry Stratiychuk
>            Assignee: Neha Narkhede
>
> I was looking at this KafkaConsumer example in the javadoc:
> https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java#L199
> As I understand it, the commit() method commits the maximum offsets returned 
> by the most recent invocation of the poll() method.
> In this example, there's a danger of losing data.
> Imagine the case where 300 records are returned by consumer.poll().
> The commit will happen after inserting 200 records into the database,
> but it will also commit the offsets of the 100 records that are still 
> unprocessed.
> So if the consumer fails before the buffer is dumped into the database again,
> those 100 records will never be processed.
> If I'm wrong, could you please clarify the behaviour of the commit() method?
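
For reference, here is a condensed sketch of the javadoc pattern under 
discussion. This is paraphrased rather than copied from the javadoc: 
insertIntoDb() is a hypothetical helper standing in for the database write, 
and the commitSync() name follows the released client API, which may differ 
slightly from trunk at the time.

    import java.util.*;
    import org.apache.kafka.clients.consumer.*;

    Properties props = new Properties();
    props.put("bootstrap.servers", "localhost:9092");
    props.put("group.id", "test");
    props.put("enable.auto.commit", "false");
    props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
    consumer.subscribe(Arrays.asList("foo", "bar"));

    int commitInterval = 200;
    List<ConsumerRecord<String, String>> buffer = new ArrayList<>();
    while (true) {
        // Suppose this poll() returns 300 records.
        ConsumerRecords<String, String> records = consumer.poll(100);
        for (ConsumerRecord<String, String> record : records) {
            buffer.add(record);
            if (buffer.size() >= commitInterval) {
                // Fires after the 200th record of this batch is buffered...
                insertIntoDb(buffer);
                // ...but commitSync() commits the highest offsets returned
                // by the last poll(), i.e. all 300 records, including the
                // 100 not yet added to the buffer. If the consumer dies
                // before the next insert, those 100 records are lost.
                consumer.commitSync();
                buffer.clear();
            }
        }
    }

A safer variant commits explicit offsets covering only the records actually 
written (e.g. via commitSync(Map<TopicPartition, OffsetAndMetadata>) in later 
clients), or simply moves the flush-and-commit step outside the per-record 
loop so every record returned by poll() is inserted before its offset is 
committed.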


