Hi,

I'm a bit new to Flink and I'm trying to figure out the best way to
upgrade my currently running topology without duplicate messages being
sent by the sink (once before the upgrade and once after).

I thought that the "atomic" part of savepoint & cancel meant I could
take a savepoint and cancel the job at the same time, later start from
that savepoint, and that would be it.
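Concretely, the sequence I ran was roughly the following (the job ID,
savepoint paths, and jar name are just placeholders):

```shell
# Take a savepoint and cancel the job atomically in one command
flink cancel -s /path/to/savepoints <jobId>

# Later, resume the upgraded job from the savepoint that was created
flink run -s /path/to/savepoints/<savepoint-dir> my-upgraded-job.jar
```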

Having tried that, it seems I got many duplicated messages sent by the
Kafka producer sink after restoring from the savepoint.

Is that supposed to happen?
Did I misunderstand the "atomic" meaning?

Thanks,
Or.