Thank you for the reply, Jark Wu. However, that does not satisfy my
requirements: my use case needs something that supports schema drift
over the Avro format. Column additions and column datatype changes are
the two kinds of variation I am trying to solve for. Either way, thanks
for the help, much appreciated.

Regards,
Taher Koitawala

On Wed, May 5, 2021 at 3:53 PM Jark Wu <imj...@gmail.com> wrote:

> Hi Taher,
>
> Currently, Flink (SQL) CDC doesn't support automatic schema change
> and doesn't support consuming schema change events from the source.
> But you can upgrade the schema manually. For example, if you have a
> table with columns [a, b, c], you can define a Flink table t1 with
> these 3 columns.
> When you add a new column in the source RDBMS, the Flink SQL job on
> t1 should keep working if you are using format 'debezium-json' or
> 'debezium-avro-confluent', because they support schema compatibility.
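>
> For example, a minimal DDL sketch for t1 (the topic, bootstrap
> servers, and schema registry URL below are just placeholders):
>
> CREATE TABLE t1 (
>   a INT,
>   b STRING,
>   c STRING
> ) WITH (
>   'connector' = 'kafka',
>   'topic' = 'dbserver1.inventory.t1',
>   'properties.bootstrap.servers' = 'localhost:9092',
>   'format' = 'debezium-avro-confluent',
>   -- option name as of Flink 1.13, check the docs for your version
>   'debezium-avro-confluent.schema-registry.url' = 'http://localhost:8081'
> );
>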
> When you are notified of a schema change in the source RDBMS, you can
> upgrade your Flink SQL DDL and job to include the added column, and
> consume from the previous savepoint [1].
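>
> Continuing that sketch, if a new column d is added in the source, the
> upgraded DDL and the savepoint restore in the SQL client could look
> like this (the savepoint path is just a placeholder):
>
> CREATE TABLE t1 (
>   a INT,
>   b STRING,
>   c STRING,
>   d INT  -- the newly added column
> ) WITH (
>   -- same connector options as in the sketch above
>   'connector' = 'kafka',
>   'topic' = 'dbserver1.inventory.t1',
>   'properties.bootstrap.servers' = 'localhost:9092',
>   'format' = 'debezium-avro-confluent',
>   'debezium-avro-confluent.schema-registry.url' = 'http://localhost:8081'
> );
>
> -- then restore from the previous savepoint and re-submit the job
> SET execution.savepoint.path=/tmp/flink-savepoints/savepoint-xxxx;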
>
> Best,
> Jark
>
> [1]:
>
> https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/dev/table/sqlclient/#start-a-sql-job-from-a-savepoint
>
>
>
> On Wed, 5 May 2021 at 13:34, Taher Koitawala <taher...@gmail.com> wrote:
>
> > Hi All,
> >          I have a CDC use case where I want to capture and process
> > Debezium change logs that are streamed to Kafka. As per all the
> > Flink examples, we have to pre-create the schema of the tables I
> > want to write to.
> >
> > However, my question is: what if an ALTER TABLE query that modifies
> > a column's data type hits the source RDBMS? How does Flink handle
> > that schema change, and which changes are supported? If someone can
> > give a full example it would be very helpful.
> >
> >
> > Regards,
> > Taher Koitawala
> >
>
