Hi all,

I'm using Flink 1.2.0 in a distributed setup with Flink's High Availability
feature enabled. Data is produced to a Kafka broker, and when a TaskManager
fails, the job restarts. Checkpointing is enabled with exactly-once
processing.
The problem is that at the end of processing I receive duplicated records,
and some records are also missing (e.g. if 2000 events are sent, around 800
are lost and some events arrive duplicated at the receiving end).
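For context, the checkpointing and Kafka source setup is roughly as follows.
This is a minimal sketch rather than my actual job; the Kafka connector
version, topic name, broker address, group id, and checkpoint interval below
are placeholders:

    import java.util.Properties;

    import org.apache.flink.streaming.api.CheckpointingMode;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer09;
    import org.apache.flink.streaming.util.serialization.SimpleStringSchema;

    public class KafkaCheckpointJob {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env =
                    StreamExecutionEnvironment.getExecutionEnvironment();

            // Checkpoint every 10 s with exactly-once guarantees
            // (interval is a placeholder value)
            env.enableCheckpointing(10_000, CheckpointingMode.EXACTLY_ONCE);

            Properties props = new Properties();
            props.setProperty("bootstrap.servers", "broker:9092"); // placeholder
            props.setProperty("group.id", "my-consumer-group");    // placeholder

            // Kafka 0.9 consumer; topic name is a placeholder
            DataStream<String> events = env.addSource(
                    new FlinkKafkaConsumer09<>("events", new SimpleStringSchema(), props));

            // ... actual processing and sink omitted ...
            events.print();

            env.execute("Kafka checkpointing job");
        }
    }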

Is this an issue with this Flink version, or is it more likely a problem in
my program logic?


