start instances of the job through the job api.
>
> Unless reading the parquet to a temporary table doesn’t need the schema
> definition? I couldn't really work things out from the links.
>
> Dan
> --
> *From:* Feng Jin
> *Sent:* Thursday, November 23, 2023 6:49:11 PM
Dan
From: Feng Jin
Sent: Thursday, November 23, 2023 6:49:11 PM
To: Oxlade, Dan
Cc: user@flink.apache.org
Subject: [EXTERNAL] Re: flink s3[parquet] -> s3[iceberg]
Hi Oxlade
I think using Flink SQL can conveniently fulfill your requirements.
For S3 Parquet files, you can create a temporary table using the filesystem
connector [1].
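
As a minimal sketch, such a temporary table could look like the following. The bucket path and the three columns are made up for illustration; the declared columns need to match the schema of your actual Parquet files:

```sql
-- Temporary table over Parquet files in S3 via the filesystem connector.
-- Path and schema are hypothetical examples.
CREATE TEMPORARY TABLE parquet_source (
  id   BIGINT,
  name STRING,
  ts   TIMESTAMP(3)
) WITH (
  'connector' = 'filesystem',
  'path'      = 's3://my-bucket/input/',
  'format'    = 'parquet'
);
```
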
For Iceberg tables, Flink SQL can easily integrate with the Iceberg
catalog [2].
Therefore, you can use Flink SQL to export the S3 Parquet files into Iceberg tables.
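
Putting the two pieces together, a hedged sketch of the copy job might look like this. It assumes a Hadoop-type Iceberg catalog, a hypothetical warehouse path, invented database/table names, and that `parquet_source` is a temporary table already declared over the S3 Parquet files via the filesystem connector:

```sql
-- Hypothetical Iceberg catalog backed by a Hadoop warehouse in S3.
CREATE CATALOG iceberg_cat WITH (
  'type'         = 'iceberg',
  'catalog-type' = 'hadoop',
  'warehouse'    = 's3://my-bucket/warehouse'
);

-- Copy rows from the Parquet-backed temporary table into an Iceberg table.
-- parquet_source and iceberg_cat.db.target_table are example names.
INSERT INTO iceberg_cat.db.target_table
SELECT id, name, ts
FROM parquet_source;
```
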