Hi,
If you just want to use s3a, you only need flink-s3-fs-hadoop-1.12.1.jar
in the plugins directory.
The format jar flink-sql-parquet_2.11-1.12.1.jar should be in lib.
None of the other jars are needed, afaik.
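To make the layout concrete, here is a minimal sketch of where the two jars go, assuming the 1.12.1 versions mentioned in this thread. FLINK_HOME is a placeholder for your Flink installation directory, and the `touch` calls stand in for downloading the real jars:

```shell
# Sketch of the jar layout described above (versions assumed from the thread).
# FLINK_HOME is a placeholder; a temp dir is used here so the sketch is runnable.
FLINK_HOME="${FLINK_HOME:-$(mktemp -d)}"

# The s3a filesystem jar goes into its own plugin subdirectory:
mkdir -p "$FLINK_HOME/plugins/s3-fs-hadoop"
touch "$FLINK_HOME/plugins/s3-fs-hadoop/flink-s3-fs-hadoop-1.12.1.jar"  # download the real jar here

# The parquet format jar goes into lib/:
mkdir -p "$FLINK_HOME/lib"
touch "$FLINK_HOME/lib/flink-sql-parquet_2.11-1.12.1.jar"               # download the real jar here

ls -R "$FLINK_HOME"
```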
On Thu, Feb 11, 2021 at 9:00 AM meneldor wrote:
> Well, I am not sure which of those actually helped, but it works now. I
> downloaded the following jars into plugins/s3-fs-hadoop/ :
>
> flink-hadoop-compatibility_2.11-1.12.1.jar
> flink-s3-fs-hadoop-1.12.1.jar
> flink-sql-parquet_2.11-1.12.1.jar
> force-shading-1.12.1.jar
> hadoop-mapreduce-client-cor
Hi,
Have you tried using the bundled Hadoop uber jar [1]? It looks like some
Hadoop dependencies are missing.
Best,
Matthias
[1] https://flink.apache.org/downloads.html#additional-components
On Wed, Feb 10, 2021 at 1:24 PM meneldor wrote:
> Hello,
> I am using PyFlink and I want to write records f
Hello,
I am using PyFlink and I want to write records from the table SQL API as
parquet files on AWS S3. I followed the documentation, but it seems that
I'm missing some dependencies and/or configuration. Here is the SQL:
> CREATE TABLE sink_table(
> `id` VARCHAR,
> `type` VARCHAR,
>
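For reference, a complete sink definition along these lines might look like the sketch below, using the filesystem connector with the parquet format. The bucket, path, and the columns beyond `id` and `type` are illustrative, not from the original message:

```sql
CREATE TABLE sink_table (
  `id`   VARCHAR,
  `type` VARCHAR
) WITH (
  'connector' = 'filesystem',
  'path'      = 's3a://my-bucket/output',  -- illustrative bucket/path
  'format'    = 'parquet'
);
```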