Hello,

We are currently developing a RichParallelSourceFunction<> that reads from
different FileSystems dynamically, based on the configuration provided when
starting the job.

When running the tests with the hadoop-s3-presto library on the
classpath, we can run the workload without any issues.
However, on our Kubernetes cluster, this exception
<https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/fs/FileSystem.java#L514>
is raised when getting the file system, even though we have enabled the
file system via the ENABLE_BUILT_IN_PLUGINS environment variable.
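
For reference, this is roughly how the plugin is enabled in our Kubernetes
deployment; the exact jar name and version below are placeholders for
illustration, not our actual values:

```yaml
# Container spec excerpt: the official Flink image copies the named
# built-in plugin jar(s) into the plugins/ directory at startup.
env:
  - name: ENABLE_BUILT_IN_PLUGINS
    value: "flink-s3-fs-presto-<version>.jar"  # placeholder jar name/version
```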

The code that initializes and gets the FS is the following; the
configuration contains S3 keys so that runtime credentials are used
instead of the cluster's:

FileSystem.initialize(configuration, null);
FileSystem.get(<s3_path>);
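
For context, the runtime credentials are passed in the configuration
roughly like this (the key names follow the standard Flink S3 options;
the values shown are placeholders):

```yaml
# Flink configuration entries picked up by the S3 filesystem plugin
s3.access-key: <access-key>
s3.secret-key: <secret-key>
s3.endpoint: <endpoint>  # optional, e.g. for an S3-compatible store
```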

We strongly believe that the issue comes from the plugin manager, and that
we are probably missing something when initializing/configuring the FS.
But after several days of debugging/testing we still can't figure it out.
Do you have any idea of what could be wrong?

Thanks in advance for your help,
Gil
