I think you are looking for the savepoints feature:
https://ci.apache.org/projects/flink/flink-docs-release-1.0/apis/streaming/savepoints.html
The general idea is to trigger a savepoint, start the second job from
this savepoint (reading from the same topic), and then eventually
cancel the first job.
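Sketched with the Flink CLI, the hand-over looks roughly like this (job IDs and the jar name are placeholders; the savepoint path is printed by the savepoint command):

```shell
# 1) Trigger a savepoint for the running job.
#    Find the job ID via `bin/flink list`; the command prints the savepoint path.
bin/flink savepoint <firstJobID>

# 2) Start the second job from that savepoint; it resumes the Kafka
#    offsets recorded in the savepoint, so it reads from the same topic.
bin/flink run -s <savepointPath> second-job.jar

# 3) Once the second job is up and processing, cancel the first one.
bin/flink cancel <firstJobID>
```

Note that between step 2 and step 3 both jobs consume the topic, so downstream consumers may briefly see duplicate output.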
Hi,
I have a simple job that consumes events from a Kafka topic and processes them
with filter/flat-map only (i.e. no aggregation, no windows, no private state).
The most important constraint in my setup is to continue processing no
matter what (i.e. stopping for a few seconds to cancel the job and restart it