Do the JSON messages all share the same schema, or is each one potentially
structured differently?

Best,
Georg

On Fri, Dec 3, 2021 at 00:12, Kamil ty <kamilt...@gmail.com> wrote:

> Hello,
>
> I'm wondering whether it's possible to create a streaming Parquet file
> sink in PyFlink (Table API) or in Java Flink (DataStream API).
>
> To give an example of the expected behaviour: each element of the stream
> will contain a JSON string. I want to write this stream to Parquet
> files without having to explicitly define the schema/types of the
> messages (and using a single sink).
>
> If this is possible (perhaps in Java Flink using a custom
> ParquetBulkWriterFactory, etc.), any direction for the implementation
> would be appreciated.
>
> Best regards
> Kamil
>
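The reason the schema question matters: unlike JSON, Parquet is a columnar format that requires a concrete schema before any row can be written, so a schemaless sink would have to infer one from the data. As a minimal sketch (not Flink code, and only handling flat records with simple scalar types), inferring a Parquet-style schema from a sample message might look like this:

```python
import json

# Illustrative mapping from Python types to Parquet-style type names.
# A real dynamic sink would also need to handle nested objects, arrays,
# nulls, and schema drift between messages.
TYPE_MAP = {bool: "BOOLEAN", int: "INT64", float: "DOUBLE", str: "BINARY (UTF8)"}

def infer_schema(json_str):
    """Infer a flat {field: parquet_type} schema from one JSON message."""
    record = json.loads(json_str)
    return {key: TYPE_MAP.get(type(value), "BINARY (UTF8)")
            for key, value in record.items()}

sample = '{"user_id": 42, "score": 3.14, "name": "kamil", "active": true}'
print(infer_schema(sample))
```

If all messages share one schema, this inference only needs to run once (e.g. on the first message) and the result can back a single, fixed Parquet writer; if schemas differ per message, each distinct schema effectively needs its own writer, which is why a single schemaless sink is hard.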
