It's not possible yet for SQL and Table API jobs to work with savepoints,
but I think this is a popular requirement and we should definitely discuss
solutions for it in upcoming versions.

Best,
Kurt

On Sat, Nov 2, 2019 at 7:24 AM Fanbin Bu <fanbin...@coinbase.com> wrote:

> Kurt,
>
> What do you recommend for Flink SQL to use savepoints?
>
>
>
> On Thu, Oct 31, 2019 at 12:03 AM Yun Tang <myas...@live.com> wrote:
>
>> Hi Fanbin
>>
>>
>>
>> If you do not change the parallelism or add or remove operators, you can
>> still use a savepoint to resume your job with Flink SQL.
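>>
>> For example, assuming the job graph itself is unchanged, the usual CLI flow
>> should work (a rough sketch; the job id, savepoint directory and jar name
>> are placeholders):
>>
>>   # trigger a savepoint for the running SQL job
>>   bin/flink savepoint <jobId> hdfs:///flink/savepoints
>>
>>   # resubmit the same job, restoring from the savepoint
>>   bin/flink run -s hdfs:///flink/savepoints/savepoint-xxxx my-sql-job.jar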
>>
>>
>>
>> However, as far as I know, Flink SQL might not configure operator uids
>> currently, but I'm pretty sure the blink branch contains code for setting
>> uids on stream nodes. [1]
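>>
>> The idea there is roughly along the lines of the sketch below (from memory,
>> not the actual blink code; the StreamGraph/StreamNode method names are my
>> assumption and may differ between versions):
>>
>>   import org.apache.flink.streaming.api.graph.StreamGraph;
>>   import org.apache.flink.streaming.api.graph.StreamNode;
>>
>>   public final class StreamNodeUidSketch {
>>       // Give every stream node without an explicit uid a deterministic one,
>>       // so the generated operator IDs stay stable across submissions.
>>       public static void setUids(StreamGraph streamGraph) {
>>           for (StreamNode node : streamGraph.getStreamNodes()) {
>>               if (node.getTransformationUID() == null) {
>>                   node.setTransformationUID(node.getId() + "_" + node.getOperatorName());
>>               }
>>           }
>>       }
>>   }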
>>
>>
>>
>> I have already CC'ed Kurt, as he can provide more detailed information on this.
>>
>>
>>
>> [1]
>> https://github.com/apache/flink/blob/blink/flink-libraries/flink-table/src/main/java/org/apache/flink/table/util/resource/StreamNodeUtil.java#L44
>>
>>
>>
>> Best
>>
>> Yun Tang
>>
>>
>>
>>
>>
>> *From: *Fanbin Bu <fanbin...@coinbase.com>
>> *Date: *Thursday, October 31, 2019 at 1:17 PM
>> *To: *user <user@flink.apache.org>
>> *Subject: *Flink SQL + savepoint
>>
>>
>>
>> Hi,
>>
>>
>>
>> It is highly recommended that we assign a uid to each operator for the
>> sake of savepoints. How do we do this with Flink SQL? According to
>> https://stackoverflow.com/questions/55464955/how-to-add-uid-to-operator-in-flink-table-api,
>> it is not possible.
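>>
>> For context, in the DataStream API that recommendation looks roughly like
>> this (a minimal, made-up example):
>>
>>   import org.apache.flink.api.common.functions.MapFunction;
>>   import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
>>
>>   StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
>>
>>   env.fromElements("a", "b", "c")
>>       .uid("my-source")                     // stable id for the source operator
>>       .map(new MapFunction<String, String>() {
>>           @Override
>>           public String map(String value) {
>>               return value.toUpperCase();
>>           }
>>       })
>>       .uid("my-map")                        // stable id for the map operator
>>       .print();
>>
>>   env.execute("uid example");
>>
>> But I don't see an equivalent hook at the SQL level.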
>>
>>
>>
>> Does that mean I can't use a savepoint to restart my program if I use
>> Flink SQL?
>>
>>
>>
>> Thanks,
>>
>>
>>
>> Fanbin
>>
>
