Hi,
Thanks all for replying.
1. The code uses the DataStream API only. We call env.setMaxParallelism() but do not
call setMaxParallelism() on any individual operator. We do call setParallelism() on
each operator (see the sketch after point 3 below).
2. We did set uids for each operator, and the uids match across the two
savepoints.
3. Regarding the quoted question:
"I think you need to provide all the parallelism information, such as the
operator info 'Id: b0936afefebc629e050a0f423f44e6ba, maxParallelism: 4096'.
What is the parallelism? The maxParallelism may be generated from the
parallelism you have set."
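Roughly, the relevant part of our setup looks like the minimal sketch below. The
operator chain and parallelism values are placeholders, not our real job; the helper
at the end is my understanding of how Flink would derive a default max parallelism
from an operator's parallelism if none were set explicitly (see
KeyGroupRangeAssignment.computeDefaultMaxParallelism), which should not kick in here
because we set it on the environment.
```
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class ParallelismSketch {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Job-wide max parallelism, as in point 1; no per-operator setMaxParallelism() calls.
        env.setMaxParallelism(4096);

        // Hypothetical pipeline; parallelism values are made up for illustration.
        env.fromElements("a", "b", "c")
                .map(String::toUpperCase)
                .setParallelism(64)
                .print()
                .setParallelism(8);

        env.execute("parallelism-sketch");
    }

    // My understanding of the default max parallelism when none is set explicitly:
    // round 1.5 * parallelism up to the next power of two and clamp to [128, 32768].
    static int defaultMaxParallelism(int operatorParallelism) {
        int target = operatorParallelism + (operatorParallelism / 2);
        int powerOfTwo = Integer.highestOneBit(target);
        if (powerOfTwo < target) {
            powerOfTwo <<= 1; // round up to the next power of two
        }
        return Math.min(Math.max(powerOfTwo, 128), 32768);
    }
}
```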
Arvid Heise wrote on Fri, Jan 22, 2021 at 11:03 PM:
Hi Lu,
if you are using the DataStream API, make sure to set manual uids for each
operator. Only then is migrating savepoints to other major versions of
Flink supported. [1]
Best,
Arvid
[1]
https://ci.apache.org/projects/flink/flink-docs-stable/ops/state/savepoints.html
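For illustration, a minimal sketch of assigning manual uids might look like the
following (the pipeline and operator names are placeholders):
```
import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class UidSketch {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Give every operator a manual uid so that state in a savepoint can be
        // matched back to the operator after upgrades or code changes, as in [1].
        env.fromElements(1, 2, 3)
                .uid("numbers-source")
                .map(new MapFunction<Integer, Integer>() {
                    @Override
                    public Integer map(Integer value) {
                        return value * 2;
                    }
                })
                .uid("double-mapper")
                .print()
                .uid("print-sink");

        env.execute("uid-sketch");
    }
}
```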
On Fri, Jan 22, 2021 at
Hi Lu,
thanks for reaching out to the community. Interesting observation.
There's no change between 1.9.1 and 1.11 that could explain this behavior
as far as I can tell. Have you had a chance to debug the code? Can you
provide the code so that we could look into it more closely?
Another thing:
Hi,
We recently migrated from Flink 1.9.1 to 1.11 and noticed that the new job cannot
consume from the savepoint taken in 1.9.1. Below are the operator ids and
max parallelism values from savepoints taken in both versions. The only code change
is the version upgrade.
savepoint 1.9.1:
```
Id: 8a74550ce6afad759d5f1d