Hi,
In exactly-once mode, Flink sends processing results to Kafka in a
transaction. It only commits this transaction once the checkpoint
succeeds; otherwise, the transaction is rolled back. So reading the
same records again on recovery should not create duplicates.
You're probably seeing duplicates because the consumer reading the output
topic uses Kafka's default isolation.level, read_uncommitted, which also
returns records from open and aborted transactions; reading with
read_committed should make them disappear.
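If that is the cause, the fix is a consumer-side setting. A minimal sketch,
assuming the FlinkKafkaConsumer from the 1.13 connector (broker address,
group id, and topic name are placeholders):

    import java.util.Properties;

    import org.apache.flink.api.common.serialization.SimpleStringSchema;
    import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

    public class ReadCommittedSketch {
        public static FlinkKafkaConsumer<String> committedConsumer() {
            Properties props = new Properties();
            props.setProperty("bootstrap.servers", "broker:9092"); // placeholder
            props.setProperty("group.id", "my-group");             // placeholder
            // Only hand out records from committed transactions. The Kafka
            // default, read_uncommitted, also returns records from open and
            // aborted transactions, which look like duplicates downstream.
            props.setProperty("isolation.level", "read_committed");
            return new FlinkKafkaConsumer<>(
                    "output-topic", new SimpleStringSchema(), props);
        }
    }

The property is a plain Kafka consumer setting, so the same fix applies to
any non-Flink consumer reading that topic.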
Hi,
We are working on a Flink pipeline and are running into duplicates when
checkpoints fail.
The pipeline runs on Flink 1.13.2 and uses the source and sink classes from
the Flink Kafka connector library.
The checkpointing mode is set to exactly-once, and we do care about the
correctness of the output.
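For reference, the relevant wiring looks roughly like this (a simplified
sketch, not our actual code; topic names, the broker address, the checkpoint
interval, and the serialization schema are placeholders):

    import java.nio.charset.StandardCharsets;
    import java.util.Properties;

    import org.apache.flink.streaming.api.CheckpointingMode;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer;
    import org.apache.flink.streaming.connectors.kafka.KafkaSerializationSchema;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class ExactlyOnceJobSketch {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env =
                    StreamExecutionEnvironment.getExecutionEnvironment();
            // Checkpoint every 60 s; EXACTLY_ONCE makes the Kafka sink
            // commit one transaction per checkpoint.
            env.enableCheckpointing(60_000, CheckpointingMode.EXACTLY_ONCE);

            Properties producerProps = new Properties();
            producerProps.setProperty("bootstrap.servers", "broker:9092"); // placeholder
            // Transactions stay open between checkpoints. Flink's default of
            // 1 h exceeds the broker's default transaction.max.timeout.ms of
            // 15 min, so set it explicitly.
            producerProps.setProperty("transaction.timeout.ms", "900000");

            FlinkKafkaProducer<String> sink = new FlinkKafkaProducer<>(
                    "output-topic",                      // placeholder topic
                    (KafkaSerializationSchema<String>) (element, timestamp) ->
                            new ProducerRecord<>("output-topic",
                                    element.getBytes(StandardCharsets.UTF_8)),
                    producerProps,
                    FlinkKafkaProducer.Semantic.EXACTLY_ONCE);

            env.fromElements("a", "b", "c")              // stand-in for the real source
               .addSink(sink);

            env.execute("exactly-once sketch");
        }
    }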