Hi Ashish,

As Timo said, the community currently has no plans to support schema
evolution in the Table API.

Best,
Ron

Timo Walther <twal...@apache.org> wrote on Tue, Aug 15, 2023, at 23:29:

> Hi Ashish,
>
> sorry for the late reply. There are currently no concrete plans to
> support schema evolution in the Table API. Until recently, Flink version
> evolution was the biggest topic. In the near future we can revisit
> query and state evolution in more detail.
>
> Personally, I think we will need either some kind of more flexible data
> type (similar to the JSON type in Postgres) or user-defined types
> (UDTs) to ensure a smooth experience.
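
Until such a type exists, one way to approximate the idea (a sketch only,
with hypothetical table and field names) is to keep the evolving part of
the payload as a raw JSON string and extract fields at query time with
Flink's JSON_VALUE function, so the columns backing the state never change:

    -- Hypothetical table: the whole event is kept as one STRING column,
    -- so upstream schema additions do not alter the state layout.
    CREATE TEMPORARY TABLE orders (
      payload STRING
    ) WITH (
      'connector' = 'datagen'  -- placeholder source, for illustration only
    );

    -- Fields are parsed on read; events written before a field existed
    -- simply yield NULL instead of breaking serializer compatibility.
    SELECT
      JSON_VALUE(payload, '$.id' RETURNING INTEGER) AS id,
      JSON_VALUE(payload, '$.currency')             AS currency  -- added later
    FROM orders;

The trade-off is that parsing moves into every query and type checking is
deferred until extraction, which is why a first-class flexible type would
be the smoother solution.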
>
> For now, warming up the state is the only viable solution until internal
> serializers are more flexible.
>
> Regards,
> Timo
>
> On 14.08.23 16:55, Ashish Khatkar wrote:
> > Bumping the thread.
> >
> > On Fri, Aug 4, 2023 at 12:51 PM Ashish Khatkar <akhat...@yelp.com> wrote:
> >
> >> Hi all,
> >>
> >> We are using the Flink 1.17.0 Table API with RocksDB as the state
> >> backend to provide a service that lets our users run SQL queries. The
> >> tables are created from an Avro schema, and when the schema is changed
> >> in a compatible manner, i.e. a field is added with a default, we are
> >> unable to recover the job from the savepoint. This is mentioned in the
> >> Flink documentation on stateful upgrades and evolution [1] as well.
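
For context, the change in question is legal Avro evolution: appending a
field that carries a default is backward compatible under Avro schema
resolution, e.g. (hypothetical record, with "currency" as the new field):

    {
      "type": "record",
      "name": "Order",
      "fields": [
        {"name": "id", "type": "long"},
        {"name": "amount", "type": "double"},
        {"name": "currency", "type": "string", "default": "USD"}
      ]
    }

Old records without "currency" resolve to "USD" when read with the new
schema, yet the job still cannot be restored from the pre-change savepoint
because Flink's state serializer snapshot does not accept the new layout.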
> >>
> >> Are there any plans to support schema evolution in the Table API? Our
> >> current approach involves rebuilding the entire state by discarding the
> >> output and then utilizing that state in the actual job. This is already
> >> done for Table Store [2].
> >>
> >> [1]
> >> https://nightlies.apache.org/flink/flink-docs-release-1.17/docs/dev/table/concepts/overview/#stateful-upgrades-and-evolution
> >> [2]
> >> https://cwiki.apache.org/confluence/display/FLINK/FLIP-226%3A+Introduce+Schema+Evolution+on+Table+Store
