Hi Jie Feng,

As you said, Flink translates SQL queries into streaming programs with
auto-generated operator IDs.
In order to start a SQL query from a savepoint, the operator IDs in the
savepoint must match the IDs in the newly translated program.
Right now, this can only be guaranteed if you translate the same query with
the same Flink version (optimizer changes across versions might change the
structure of the resulting plan even if the query stays the same).
This is of course a significant limitation that the community is aware of
and plans to address in the future.
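To illustrate why the IDs must match, here is a conceptual sketch in plain
Python (this is not Flink code; the operator IDs and state values are made
up): a savepoint is modeled as a mapping from operator ID to state, and any
state whose ID does not appear in the newly translated plan cannot be
restored.

```python
def restore(savepoint: dict, new_plan_ids: list) -> tuple:
    """Map savepoint state onto the operators of a newly translated plan.

    State is matched purely by operator ID; IDs present in the savepoint
    but absent from the new plan leave their state orphaned.
    """
    restored = {op_id: savepoint[op_id]
                for op_id in new_plan_ids if op_id in savepoint}
    orphaned = {op_id: state for op_id, state in savepoint.items()
                if op_id not in new_plan_ids}
    return restored, orphaned


# Savepoint taken from a plan with auto-generated operator IDs.
savepoint = {"op-1": "source offsets", "op-2": "aggregation state"}

# Same query, same Flink version: the IDs line up, all state is restored.
restored, orphaned = restore(savepoint, ["op-1", "op-2"])
assert orphaned == {}

# Re-translated plan with different auto-generated IDs: state is orphaned.
restored, orphaned = restore(savepoint, ["op-1", "op-7"])
assert "aggregation state" in orphaned.values()
```

In the DataStream API you avoid this by pinning IDs yourself with
`uid("...")` on each stateful operator; translated SQL plans give you no
such hook, which is exactly the limitation described above.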

I'd also like to add that it can be very difficult to assess whether it is
meaningful to start a query from a savepoint that was generated with a
different query.
A savepoint holds the intermediate data that is needed to compute the result
of a query.
If you update the query, it is quite possible that the result computed by
Flink from that state won't be equal to the actual result of the new query.

Best, Fabian

On Mon, 6 Jul 2020 at 10:50, shadowell <shadow...@126.com> wrote:

>
> Hello everyone,
>         I have a few points that are unclear to me when using Flink SQL.
> I hope to get an answer, or a pointer to where I can find one.
>         When using the DataStream API, in order to ensure that a job can
> recover its state from a savepoint after a change, it is necessary to
> assign a uid to each operator. However, when using Flink SQL, operator
> uids are generated automatically. If the SQL logic changes (e.g., the
> operator order changes), will some of the operator states fail to be
> mapped back when the job is restored from the savepoint, resulting in
> state loss?
>
> Thanks~
> Jie Feng
> shadowell
> shadow...@126.com
>
>
