Can someone help me understand how Flink deals with the following scenario?

I have a job that reads from a Kafka source (starting offset: latest) and
writes to a Kafka sink with exactly-once delivery. Suppose there are two
records in the source topic: the first is processed without issue, but the
job fails on the second due to a parsing error. I want to update the job
with a fix for the parsing error and resume processing from the second
record's offset.
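
For reference, here is a minimal sketch of the kind of job I mean (the
broker address, topic names, and the parse step are placeholders, not my
real code):

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.base.DeliveryGuarantee;
import org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema;
import org.apache.flink.connector.kafka.sink.KafkaSink;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class ExactlyOnceJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // Checkpointing is required for the exactly-once Kafka sink.
        env.enableCheckpointing(60_000);

        KafkaSource<String> source = KafkaSource.<String>builder()
            .setBootstrapServers("broker:9092")              // placeholder
            .setTopics("input-topic")                        // placeholder
            .setStartingOffsets(OffsetsInitializer.latest()) // starting-offset: latest
            .setValueOnlyDeserializer(new SimpleStringSchema())
            .build();

        KafkaSink<String> sink = KafkaSink.<String>builder()
            .setBootstrapServers("broker:9092")              // placeholder
            .setRecordSerializer(KafkaRecordSerializationSchema.builder()
                .setTopic("output-topic")                    // placeholder
                .setValueSerializationSchema(new SimpleStringSchema())
                .build())
            .setDeliveryGuarantee(DeliveryGuarantee.EXACTLY_ONCE)
            .setTransactionalIdPrefix("exactly-once-job")
            .build();

        env.fromSource(source, WatermarkStrategy.noWatermarks(), "kafka-source")
            // parse() stands in for the step that throws on the 2nd record
            // and fails the job.
            .map(ExactlyOnceJob::parse)
            .sinkTo(sink);

        env.execute("exactly-once-job");
    }

    // Placeholder for the parsing logic that fails on the 2nd record.
    private static String parse(String record) {
        return record;
    }
}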

However, I can't find a way to stop the job with a savepoint, because the
job is in a FAILED state. And if I just cancel the job without a savepoint,
it will start from the new "latest" offset the next time I submit it.
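
Concretely, this is roughly what I tried (the job ID is a placeholder):

./bin/flink stop --savepointPath /tmp/savepoints <job-id>
./bin/flink cancel <job-id>

The stop command is rejected because the job is not running, and after
cancel + resubmit the source starts from "latest" again, skipping the
record that caused the failure.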

Is this a valid use case? If so, how should I handle it so that processing
resumes from the second record's offset after I update the job?


Thanks,
Sharon
