Hi Gil,

you should not need to call FileSystem.initialize. The only entry point
where it is currently necessary is the LocalExecutionEnvironment [1], but
that is a bug. By calling it the way you do now, you are actually
circumventing the plugin manager, so I'm not surprised that it's not working.

[1] https://issues.apache.org/jira/browse/FLINK-11470
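If you really do need to re-initialize the file systems from user code, a
rough, untested sketch would be to pass a PluginManager built from the
plugins directory instead of null. PluginUtils.createPluginManagerFromRootFolder
is the helper Flink itself uses on startup; it picks up the plugins folder
from the FLINK_PLUGINS_DIR environment variable (or the default "plugins"
folder in the image). The credential keys below are only illustrative:

import org.apache.flink.configuration.Configuration;
import org.apache.flink.core.fs.FileSystem;
import org.apache.flink.core.plugin.PluginUtils;

// Untested sketch: re-initialize the file systems with a real PluginManager
// instead of null, so that plugin-provided file systems (s3 etc.) stay
// registered after the call.
Configuration configuration = new Configuration();
// configuration.setString("s3.access-key", "...");  // illustrative: job-specific credentials
// configuration.setString("s3.secret-key", "...");
FileSystem.initialize(
        configuration,
        PluginUtils.createPluginManagerFromRootFolder(configuration));

That said, the cleaner solution is usually not to call FileSystem.initialize
from user code at all and to rely on the file systems that Flink sets up when
the cluster starts.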

On Fri, Oct 29, 2021 at 2:29 PM Gil De Grove <gil.degr...@euranova.eu>
wrote:

> Hello,
>
> We are currently developing a RichParallelSourceFunction<> that reads
> from different FileSystem dynamically based on the configuration provided
> when starting the job.
>
> When running the tests with the hadoop-s3-presto library added to the
> classpath, we can run the workload without any issues.
> However, when running on our Kubernetes cluster, this exception
> <https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/fs/FileSystem.java#L514>
> is raised when getting the file system, even though we have enabled the
> file system plugin via the ENABLE_BUILT_IN_PLUGINS env variable.
>
> The code that initializes and gets the FS is the following; the
> configuration contains S3 keys so that runtime credentials are used instead
> of the cluster ones:
>
> FileSystem.initialize(configuration, null);
> FileSystem.get(<s3_path>);
>
> We strongly believe that the issue comes from the plugin manager, and that
> we are probably missing something when initializing/configuring the FS. But
> after some days of debugging/testing we still can't figure it out. Do you
> have any idea what could be going wrong?
>
> Thanks in advance for your help,
> Gil
>
