Hi Sumeet,

I'm not a Table/SQL API expert, but as far as I know it's not viable to derive SQL table schemas from Avro schemas, because table schemas are meant to be the ground truth by design. Moreover, a single Avro type can map to multiple Flink types, so in practice such a derivation would be ambiguous anyway.
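Just to illustrate what I mean by the table schema being the ground truth, here is a rough sketch of the kind of DDL you currently have to write, with every column spelled out and the Avro format only handling (de)serialization. The connector, topic, and field names below are made up for illustration; please double-check the options against the docs for your Flink version.

    CREATE TABLE user_events (
      -- every column is declared here; Flink does not read it from an .avsc file
      user_id     STRING,
      event_time  TIMESTAMP(3),
      amount      DOUBLE
    ) WITH (
      'connector' = 'kafka',                              -- hypothetical source
      'topic' = 'user-events',
      'properties.bootstrap.servers' = 'localhost:9092',
      'format' = 'avro'                                   -- Avro wire format; the schema still comes from the DDL above
    );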
Best,
Paul Lam

> On Apr 2, 2021, at 11:34, Sumeet Malhotra <sumeet.malho...@gmail.com> wrote:
>
> Just realized, my question was probably not clear enough. :-)
>
> I understand that the Avro (or JSON for that matter) format can be ingested
> as described here:
> https://ci.apache.org/projects/flink/flink-docs-stable/dev/table/connect.html#apache-avro-format,
> but this still requires the entire table specification to be written in the
> "CREATE TABLE" section. Is it possible to just specify the Avro schema and
> let Flink map it to an SQL table?
>
> BTW, the above link is titled "Table API Legacy Connectors", so is this still
> supported? Same question for the YAML specification.
>
> Thanks,
> Sumeet
>
> On Fri, Apr 2, 2021 at 8:26 AM Sumeet Malhotra <sumeet.malho...@gmail.com> wrote:
>
> Hi,
>
> Is it possible to directly import an Avro schema while ingesting data into
> Flink? Or do we always have to specify the entire schema, either in SQL DDL
> for the Table API or using DataStream data types? From a code maintenance
> standpoint, it would be really helpful to keep one source of truth for the
> schema somewhere.
>
> Thanks,
> Sumeet