To make sure we are on the same page: the end goal is to have
CatalogTable#getTableSchema/TableSource#getTableSchema return a schema
that is compatible with TableSource#getProducedDataType.
If you want to use the new types, you should not implement
TableSource#getReturnType. Moreover you
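For example, a TableSource on the new type stack could look roughly like
the sketch below. The class name and schema fields are made up just to show
the shape; the important part is that getTableSchema and getProducedDataType
describe the same fields and getReturnType is not overridden:

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.api.TableSchema;
import org.apache.flink.table.sources.StreamTableSource;
import org.apache.flink.table.types.DataType;
import org.apache.flink.types.Row;

// Hypothetical source: new DataTypes only, no getReturnType override
public class MyAvroTableSource implements StreamTableSource<Row> {

    // Schema built with the new DataTypes
    private final TableSchema schema = TableSchema.builder()
            .field("id", DataTypes.STRING())
            .field("tags", DataTypes.ARRAY(DataTypes.STRING()))
            .build();

    @Override
    public TableSchema getTableSchema() {
        return schema;
    }

    @Override
    public DataType getProducedDataType() {
        // Same logical fields as getTableSchema(), expressed as a ROW DataType
        return schema.toRowDataType();
    }

    @Override
    public DataStream<Row> getDataStream(StreamExecutionEnvironment execEnv) {
        // Actual source logic (e.g. Kafka + Avro deserialization) omitted in this sketch
        throw new UnsupportedOperationException("sketch only");
    }
}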
Hi Dawid,
We are using a custom connector that is very similar to the Flink Kafka
connector, and we instantiate the TableSchema with a custom class that
maps Avro types to Flink's DataTypes using TableSchema.Builder.
For the Array type, we have the mapping below:
case ARRAY:
    return DataTypes.A
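Roughly, the converter looks like the sketch below. The class and method
names are placeholders, only a few cases are shown, and the element-type
handling is simplified:

import org.apache.avro.Schema;
import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.types.DataType;

// Placeholder for our custom Avro-to-Flink type mapping class
public final class AvroToDataType {

    public static DataType convert(Schema schema) {
        switch (schema.getType()) {
            case STRING:
                return DataTypes.STRING();
            case INT:
                return DataTypes.INT();
            case LONG:
                return DataTypes.BIGINT();
            case BOOLEAN:
                return DataTypes.BOOLEAN();
            case ARRAY:
                // Array element type is converted recursively
                return DataTypes.ARRAY(convert(schema.getElementType()));
            default:
                throw new UnsupportedOperationException(
                        "Unsupported Avro type: " + schema.getType());
        }
    }
}

The TableSchema itself is then assembled with TableSchema.Builder, e.g.
builder.field(field.name(), convert(field.schema())) for each record field.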
Hi Ramana,
What connector do you use, or how do you instantiate the TableSource?
Also, which catalog do you use, and how do you register your table in that
catalog?
The problem is that the conversion from TypeInformation to DataType produces
legacy types (because they cannot be mapped exactly 1-1 to the
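One way to see what a given TypeInformation is converted to is the
TypeConversions utility from flink-table. The OBJECT_ARRAY below is just an
illustration, not necessarily the exact type your connector produces;
comparing the two printed types shows whether the conversion is 1-1 or falls
back to a legacy type:

import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.types.DataType;
import org.apache.flink.table.types.utils.TypeConversions;

public class TypeInfoToDataTypeCheck {
    public static void main(String[] args) {
        // What the new type stack would give you for an array of strings
        DataType newType = DataTypes.ARRAY(DataTypes.STRING());

        // What the legacy TypeInformation for an object array is converted to
        DataType converted =
            TypeConversions.fromLegacyInfoToDataType(Types.OBJECT_ARRAY(Types.STRING));

        System.out.println(newType);
        System.out.println(converted);
    }
}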
Hi,
Our Avro schema contains an Array type, and we created a TableSchema out of
the Avro schema and created a table in the catalog. In the catalog, this
specific field type is shown as ARRAY. We are using
AvroRowDeserializationSchema with the connector, and the returnType of the
TableSource shows the Array mapped to LEGACY
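Roughly, the relevant part of our setup looks like the sketch below. The Avro
schema here is a simplified stand-in for our real one, and the conversion at
the end mirrors (as far as I understand) what the TableSource's default
getProducedDataType does with the deserializer's return type:

import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.formats.avro.AvroRowDeserializationSchema;
import org.apache.flink.table.types.DataType;
import org.apache.flink.table.types.utils.TypeConversions;
import org.apache.flink.types.Row;

public class AvroArrayLegacyCheck {
    public static void main(String[] args) {
        // Simplified stand-in for our real Avro schema; "tags" is the array field
        String avroSchema =
            "{\"type\":\"record\",\"name\":\"Event\",\"fields\":["
            + "{\"name\":\"id\",\"type\":\"string\"},"
            + "{\"name\":\"tags\",\"type\":{\"type\":\"array\",\"items\":\"string\"}}]}";

        AvroRowDeserializationSchema deser = new AvroRowDeserializationSchema(avroSchema);
        TypeInformation<Row> returnType = deser.getProducedType();

        // Converting the deserializer's TypeInformation to a DataType;
        // this is where the array field shows up mapped to LEGACY for us
        DataType produced = TypeConversions.fromLegacyInfoToDataType(returnType);
        System.out.println(produced);
    }
}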