Hi Arvid,
I certainly appreciate the points you make regarding schema evolution.
Actually, I ended up writing an avro2sql script to autogenerate the DDL.
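
A minimal Java sketch of such an avro2sql generator (illustrative only, not the actual script; it assumes the schema sits in a local user.avsc file and maps only a few primitive types, so logical types, unions, and nested records would need more work):

// Hypothetical avro2sql sketch: reads an Avro record schema from a .avsc
// file and prints a Flink CREATE TABLE statement.
import org.apache.avro.Schema;

import java.io.File;
import java.io.IOException;
import java.util.StringJoiner;

public class Avro2Sql {

    // Map a handful of Avro primitive types to Flink SQL types.
    static String toFlinkType(Schema fieldSchema) {
        switch (fieldSchema.getType()) {
            case STRING:  return "STRING";
            case INT:     return "INT";
            case LONG:    return "BIGINT";
            case FLOAT:   return "FLOAT";
            case DOUBLE:  return "DOUBLE";
            case BOOLEAN: return "BOOLEAN";
            case BYTES:   return "BYTES";
            default:
                throw new IllegalArgumentException(
                        "Unmapped Avro type: " + fieldSchema.getType());
        }
    }

    public static void main(String[] args) throws IOException {
        // "user.avsc" is a placeholder path.
        Schema schema = new Schema.Parser().parse(new File("user.avsc"));

        StringJoiner columns = new StringJoiner(",\n  ");
        for (Schema.Field field : schema.getFields()) {
            columns.add("`" + field.name() + "` " + toFlinkType(field.schema()));
        }
        System.out.printf("CREATE TABLE `%s` (%n  %s%n) WITH (...);%n",
                schema.getName(), columns);
    }
}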
Thanks,
Sumeet
On Fri, Apr 9, 2021 at 12:13 PM Arvid Heise wrote:
Hi Sumeet,
The beauty of Avro lies in having reader and writer schemas plus schema
compatibility rules, such that if your schema evolves over time (which will
happen naturally in streaming but is also very common in batch), you can
still use your application as-is without modification. For streaming, this
is especially important, as the schema can evolve while jobs keep running.
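
A minimal sketch of this reader/writer resolution in action, assuming the evolved reader schema adds an optional email field with a default (the User schema here is made up for the example):

// Data written with an old schema is read with a newer reader schema
// that adds a field with a default value.
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericDatumReader;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.BinaryDecoder;
import org.apache.avro.io.BinaryEncoder;
import org.apache.avro.io.DecoderFactory;
import org.apache.avro.io.EncoderFactory;

import java.io.ByteArrayOutputStream;
import java.io.IOException;

public class SchemaEvolutionDemo {
    static final Schema WRITER = new Schema.Parser().parse(
            "{\"type\":\"record\",\"name\":\"User\",\"fields\":["
                    + "{\"name\":\"id\",\"type\":\"long\"}]}");

    // Reader schema adds an optional field with a default, the kind of
    // backward-compatible evolution described above.
    static final Schema READER = new Schema.Parser().parse(
            "{\"type\":\"record\",\"name\":\"User\",\"fields\":["
                    + "{\"name\":\"id\",\"type\":\"long\"},"
                    + "{\"name\":\"email\",\"type\":[\"null\",\"string\"],"
                    + "\"default\":null}]}");

    public static void main(String[] args) throws IOException {
        // Serialize a record with the old (writer) schema.
        GenericRecord user = new GenericData.Record(WRITER);
        user.put("id", 42L);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        BinaryEncoder encoder = EncoderFactory.get().binaryEncoder(out, null);
        new GenericDatumWriter<GenericRecord>(WRITER).write(user, encoder);
        encoder.flush();

        // Deserialize with the new (reader) schema; the missing field is
        // filled in from its default, so the application keeps working.
        BinaryDecoder decoder =
                DecoderFactory.get().binaryDecoder(out.toByteArray(), null);
        GenericRecord evolved =
                new GenericDatumReader<GenericRecord>(WRITER, READER)
                        .read(null, decoder);
        System.out.println(evolved); // {"id": 42, "email": null}
    }
}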
Hi Sumeet,
I’m not a Table/SQL API expert, but from my knowledge it isn't viable to
derive SQL table schemas from Avro schemas, because table schemas are meant
to be the ground truth by design.
Moreover, a single Avro type can map to multiple Flink types, so the
derivation would be ambiguous in practice.
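
A small sketch of that ambiguity, showing how the same base Avro type can end up as different Flink SQL types; the mappings in the comments (long to BIGINT, long with timestamp-millis to TIMESTAMP(3)) follow Flink's documented Avro format mappings:

// Two Avro schemas that are both "long" at the type level, but that
// Flink's Avro format maps to different SQL types.
import org.apache.avro.LogicalTypes;
import org.apache.avro.Schema;

public class TypeMappingAmbiguity {
    public static void main(String[] args) {
        Schema plainLong = Schema.create(Schema.Type.LONG);
        Schema timestampMillis = LogicalTypes.timestampMillis()
                .addToSchema(Schema.create(Schema.Type.LONG));

        // Both report LONG, so the raw Avro type alone does not determine
        // the Flink type; the table schema stays the ground truth.
        System.out.println(plainLong.getType());       // LONG -> BIGINT
        System.out.println(timestampMillis.getType()); // LONG -> TIMESTAMP(3)
        System.out.println(timestampMillis.getLogicalType().getName());
        // prints: timestamp-millis
    }
}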
Just realized, my question was probably not clear enough. :-)
I understand that the Avro (or JSON for that matter) format can be ingested
as described here:
https://ci.apache.org/projects/flink/flink-docs-stable/dev/table/connect.html#apache-avro-format,
but this still requires the entire table specification (every column name
and type) to be spelled out by hand in the CREATE TABLE DDL.
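
A sketch of what that looks like in practice, assuming a recent Flink with the Kafka SQL connector (the topic, server, and column names are placeholders); note how every column must be repeated in the DDL even though the Avro schema already defines it:

// Registering an Avro-formatted Kafka table: the column list duplicates
// what the Avro schema already knows.
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class AvroTableDdl {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().inStreamingMode().build());

        tEnv.executeSql(
                "CREATE TABLE users (\n"
                        + "  `id`    BIGINT,\n"
                        + "  `name`  STRING,\n"
                        + "  `email` STRING\n"
                        + ") WITH (\n"
                        + "  'connector' = 'kafka',\n"
                        + "  'topic' = 'users',\n"
                        + "  'properties.bootstrap.servers' = 'localhost:9092',\n"
                        + "  'scan.startup.mode' = 'earliest-offset',\n"
                        + "  'format' = 'avro'\n"
                        + ")");
    }
}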