*Background*
I'm converting some prototype Flink v1.11.1 code that uses the
DataSet/DataStream APIs to use the Table API.

*Problem*
When I switch to the Table API, my S3 plugins stop working, and I don't
know why. I've added the required Maven Table dependencies to the job.

I've tried moving both the presto and the hadoop s3 jars to plugin
subfolders. No luck.

Any ideas what is wrong?  I'm guessing I'm missing something simple.


*Error*

Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException:
Could not find a file system implementation for scheme 's3p'. The scheme is
directly supported by Flink through the following plugin:
flink-s3-fs-presto. Please ensure that each plugin resides within its own
subfolder within the plugins directory. See
https://ci.apache.org/projects/flink/flink-docs-stable/ops/plugins.html for
more information. If you want to use a Hadoop file system for that scheme,
please add the scheme to the configuration fs.allowed-fallback-filesystems.
For a full list of supported file systems, please see
https://ci.apache.org/projects/flink/flink-docs-stable/ops/filesystems/.
	at org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:473)
	at org.apache.flink.core.fs.FileSystem.get(FileSystem.java:389)
	at org.apache.flink.core.fs.Path.getFileSystem(Path.java:292)
	at org.apache.flink.table.filesystem.FileSystemTableSink.toStagingPath(FileSystemTableSink.java:232)
	... 35 more
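For reference, if I were to follow the error message's alternative suggestion
and let a Hadoop file system serve the s3p scheme instead of the presto
plugin, my understanding is the config would look roughly like this (a sketch;
the key name is taken straight from the error message above, the value is my
assumption):

```
# flink-conf.yaml (sketch): allow Hadoop-backed filesystems to serve the
# 's3p' scheme as a fallback, instead of requiring flink-s3-fs-presto.
fs.allowed-fallback-filesystems: s3p
```

I haven't confirmed this is the right approach for my case, though; I'd
rather get the plugin loading itself working.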

*ls of plugins directory (same for taskmanager)*

kubectl exec pod/flink-jobmanager-0 -- ls -l /opt/flink/plugins/s3-fs-hadoop
total 19520
-rw-r--r-- 1 root root 19985452 Sep 10 06:27 flink-s3-fs-hadoop-1.11.1.jar
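For what it's worth, my understanding of the layout the error message asks
for is one subfolder per plugin under plugins/, e.g. both the presto jar
(for s3p) and the hadoop jar (for s3a) side by side. A rough sketch of what
I think the image should contain (scratch directory and `touch`ed stand-in
jars here, just to show the shape; in the real image the jars come from
/opt/flink/opt/):

```shell
# Sketch: expected plugins/ layout. Each filesystem plugin must live in
# its OWN subfolder. Using a scratch dir with empty stand-in jars.
FLINK_HOME=$(mktemp -d)
mkdir -p "$FLINK_HOME/plugins/s3-fs-presto" "$FLINK_HOME/plugins/s3-fs-hadoop"
# stand-ins; in a real image, copy the jars from $FLINK_HOME/opt/ instead
touch "$FLINK_HOME/plugins/s3-fs-presto/flink-s3-fs-presto-1.11.1.jar"
touch "$FLINK_HOME/plugins/s3-fs-hadoop/flink-s3-fs-hadoop-1.11.1.jar"
ls -R "$FLINK_HOME/plugins"
```

Since my code uses the s3p:// scheme, I take it the s3-fs-presto subfolder
(which my pods currently lack) is the one that matters here.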
