Hi,

Please also keep in mind that restoring existing Table API jobs from
savepoints when upgrading to a newer minor version of Flink (e.g. 1.16 ->
1.17) is not supported, as the topology might change between these
versions due to optimizer changes.

See here for more information:
https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/concepts/overview/#stateful-upgrades-and-evolution
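
To illustrate the pure-SQL option discussed below: a job can be defined
entirely as SQL text, so there is no Java (Table API) code to adapt when
upgrading. This is only a minimal sketch; the table, columns, and connector
options are hypothetical, and the savepoint caveat above still applies
because the optimizer may plan the same statement differently across
versions:

```sql
-- Hypothetical example: the whole job is plain SQL text, so no
-- Java/Table API code needs to change between Flink versions.
-- (Connector options remain version- and deployment-specific.)
CREATE TABLE orders (
  order_id BIGINT,
  amount   DECIMAL(10, 2),
  ts       TIMESTAMP(3),
  WATERMARK FOR ts AS ts - INTERVAL '5' SECOND
) WITH (
  'connector' = 'datagen'
);

-- Hourly revenue per tumbling event-time window.
SELECT TUMBLE_START(ts, INTERVAL '1' HOUR) AS window_start,
       SUM(amount)                         AS total_amount
FROM orders
GROUP BY TUMBLE(ts, INTERVAL '1' HOUR);
```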

Regards,
Mate

Hang Ruan <ruanhang1...@gmail.com> wrote (on Sat, 25 Mar 2023, at
13:38):

> Hi,
>
> I think the SQL job is the better choice. Flink SQL jobs can easily be
> shared with others for debugging, and SQL is more suitable for unified
> stream and batch processing. For the small number of jobs that cannot be
> expressed in SQL, we write the job with the DataStream API instead.
>
> Best,
> Hang
>
> ravi_suryavanshi.yahoo.com via user <user@flink.apache.org> wrote on
> Fri, 24 Mar 2023, at 17:25:
>
>> Hello Team,
>> I need your advice on which method is recommended, given that I don't
>> want to change my query code when Flink is updated/upgraded to a higher
>> version.
>>
>> Here I am seeking advice on whether to write queries in Java code
>> (Table API functions and Expressions) or in pure SQL.
>>
>> I am assuming that SQL will not be affected by an upgrade to a higher
>> version.
>>
>> Thanks and Regards,
>> Ravi
>>
>