Hi,
I have a Flink streaming job implemented in Java which reads messages from a
Kafka topic, transforms them, and finally sends them to another Kafka topic.
The Flink version is 1.6.2 and the Kafka version is 0.11. I pass the
Semantic.EXACTLY_ONCE parameter to the producer. The problem is that when I
cancel the job with a savepoint and then restart it from that savepoint, I see
duplicated messages in the sink topic.
Am I missing some Kafka/Flink configuration needed to avoid the duplication?
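
For reference, here is a minimal sketch of how the job is wired. The topic
names, bootstrap servers, group id, and the map transform are simplified
placeholders; the real job does more work, but the Kafka setup is the same:

import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.CheckpointingMode;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer011;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011;
import org.apache.flink.streaming.util.serialization.KeyedSerializationSchemaWrapper;

public class KafkaToKafkaJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Exactly-once checkpointing; the EXACTLY_ONCE producer commits its
        // Kafka transaction when a checkpoint completes.
        env.enableCheckpointing(10_000, CheckpointingMode.EXACTLY_ONCE);

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // placeholder
        props.setProperty("group.id", "my-group");                // placeholder
        // Kept below the broker's transaction.max.timeout.ms (15 min default).
        props.setProperty("transaction.timeout.ms", "900000");

        DataStream<String> input = env.addSource(
                new FlinkKafkaConsumer011<>("input-topic", new SimpleStringSchema(), props));

        // Stand-in for the real transformation.
        DataStream<String> transformed = input.map(String::toUpperCase);

        transformed.addSink(new FlinkKafkaProducer011<>(
                "output-topic",
                new KeyedSerializationSchemaWrapper<>(new SimpleStringSchema()),
                props,
                FlinkKafkaProducer011.Semantic.EXACTLY_ONCE));

        env.execute("kafka-to-kafka-exactly-once");
    }
}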
Kind regards,

Nastaran Motavalli
