Hi Reo,
if you want to reduce downtime, the usual approach is the following:
- Let your job keep running in the 1.9 cluster for a while
- Start a job in the 1.10 cluster where you migrate the state, but dump the output to /dev/null
- As soon as the 1.10 job catches up, stop the old job and start writing output
into the actual storage (a rough sketch of the output switch follows below).
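
A rough sketch of how the output switch could be parameterised, assuming the Flink 1.10 DataStream API; the job class, the --discard-output flag and the source/sinks below are just placeholders for illustration, not anything from your job:

import org.apache.flink.api.java.utils.ParameterTool;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.sink.DiscardingSink;
import org.apache.flink.streaming.api.functions.sink.PrintSinkFunction;

public class DualRunJob {

    public static void main(String[] args) throws Exception {
        ParameterTool params = ParameterTool.fromArgs(args);
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Placeholder source; in the real job this is your actual input.
        DataStream<String> events = env.socketTextStream("localhost", 9999);

        if (params.getBoolean("discard-output", false)) {
            // "/dev/null" phase: the 1.10 job reads input and builds up state,
            // but emits nothing downstream while it catches up with the 1.9 job.
            events.addSink(new DiscardingSink<>());
        } else {
            // Switched over: write to the actual storage (placeholder sink here).
            events.addSink(new PrintSinkFunction<>());
        }

        env.execute("dual-run-job");
    }
}

With a static flag like this, the switch itself still needs a short stop/restart of the 1.10 job (e.g. via a savepoint) with the flag flipped.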
Hey Andrey,
Thanks for your answer.
I know that using a savepoint is the available way to upgrade the Flink cluster, but
that means that when I upgrade my Flink cluster I need to cancel all jobs from the
JM. The stream processing will then be stopped, and that will have an impact on
the production system, which is time sensitive.
Hi Reo,
I do not think this is always guaranteed by the Flink API.
The usual supported way is to:
- take a savepoint
- upgrade the cluster (JM and TM)
- maybe rebuild the job against the new Flink version
- start the job from the savepoint [1] (see the uid() sketch after this list)
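
One detail that matters for the rebuild/restore steps: operator uids should stay the same across the 1.9 and 1.10 builds so the state in the savepoint can be mapped back onto the operators. A minimal sketch, assuming the DataStream API; the source, mapper and uid values are hypothetical placeholders:

import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class UpgradeSafeJob {

    // Stands in for a stateful operator; in a real job this is where the
    // savepointed state would live.
    public static class Normalize implements MapFunction<String, String> {
        @Override
        public String map(String value) {
            return value.trim().toLowerCase();
        }
    }

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        env.socketTextStream("localhost", 9999)
           .uid("source")           // keep uids identical across the 1.9 and 1.10 builds,
           .map(new Normalize())    // so state in the savepoint maps back onto the operators
           .uid("normalize")
           .print()
           .uid("print-sink");

        env.execute("upgrade-safe-job");
    }
}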
The externalised checkpoints also do not have to be alwa
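
For reference, externalised (retained) checkpoints are enabled on the job's CheckpointConfig; a minimal sketch, assuming the Flink 1.9/1.10 DataStream API, with an arbitrary example interval and cleanup mode:

import org.apache.flink.streaming.api.environment.CheckpointConfig;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class RetainedCheckpointsSketch {

    // Enables periodic checkpoints and keeps ("externalises") the latest one
    // when the job is cancelled, so it can later be used to restore the job.
    public static void configure(StreamExecutionEnvironment env) {
        env.enableCheckpointing(60_000L); // example interval: 60s
        env.getCheckpointConfig().enableExternalizedCheckpoints(
                CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);
    }
}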
Hi all,
I encountered a problem when upgrading Flink from 1.9.1 to 1.10.0.
At first, my job was running stably on Flink, with both the JM and TM on 1.9.1.
Then I tried to upgrade to 1.10.0. I stopped the JM process and started
another JM process. At this point, the JM is 1.10.0 and the TM is 1.9.1,