I had the same impression at some point, but this is not how auto-commit works.
Auto-commit can only fire when the application comes back to poll, and
if it decides to commit at that time, it commits only the offsets of the
previous batch. In your example, the app might come back and have to
re-process all the records in the uncommitted batch, but it will never
skip over unprocessed records.
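To make the timing concrete, here is a toy model of that behavior. It does not use the real Kafka client; the ToyConsumer class, its fields, and the batching logic are all invented for illustration, and it ignores details like auto.commit.interval.ms. The point it demonstrates is only the ordering: the commit happens inside the next poll, and it covers the batch that was already returned, never records that have not been handed to the application yet.

```python
# Toy model of Kafka auto-commit timing (NOT the real client API).
# Auto-commit fires inside poll(): it commits the consumer's current
# position, i.e. the offset just past the last batch already returned.

class ToyConsumer:
    def __init__(self, log):
        self.log = log        # the partition's records
        self.position = 0     # next offset to fetch
        self.committed = 0    # last committed offset

    def poll(self, batch_size):
        # Auto-commit step: commit progress of the *previous* batch.
        self.committed = self.position
        batch = self.log[self.position:self.position + batch_size]
        self.position += len(batch)
        return batch

consumer = ToyConsumer(log=list(range(10)))

first = consumer.poll(batch_size=5)   # returns offsets 0..4
# If the app crashes while processing this batch, nothing has been
# committed yet, so a restart resumes from offset 0 and re-reads 0..4
# (duplicates are possible, skipping is not).
assert consumer.committed == 0

second = consumer.poll(batch_size=5)  # auto-commit now records offset 5
assert consumer.committed == 5        # previous batch is committed
```

In the crash scenario from the question, the failure happens before the next poll, so the committed offset still points at the start of the half-processed batch; the restarted consumer re-processes it rather than skipping ahead.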

-----Original Message-----
From: Adam Bellemare [mailto:adam.bellem...@gmail.com] 
Sent: Tuesday, January 29, 2019 3:54 PM
To: dev@kafka.apache.org
Subject: Why is enable.auto.commit=true the default value for consumer?

As the question indicates.

Should this not default to false? I think this is a bit of a trap for
someone launching their application into production without testing it
extensively around failure modes. I can see a scenario where a consumer
polls for events, processes them, produces to an output topic, and commits
the offsets. Say a batch takes 30 seconds. If it fails halfway through,
then upon restarting it will skip everything that was unprocessed/unpublished
up to the committed offset.

Is there a historical reason why it defaults to true? Is it because
changing the default to false could affect the upgrade path of existing
implementations?

Adam
