I don't think this exception is caused by the Hadoop version being too low.
It seems that the "s3" URI scheme is not being recognized by
`S3FileSystemFactory`, so it falls back to `HadoopFsFactory`.
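
One quick way to check this is to ask Flink which FileSystem implementation it
resolves for the "s3" scheme. Below is a minimal sketch (the bucket name is
just a placeholder, and the result only reflects the classpath of the JVM that
actually runs it, so it is most meaningful when executed on the cluster rather
than on the submitting client):

```java
import java.net.URI;
import org.apache.flink.core.fs.FileSystem;

// Hypothetical diagnostic snippet: prints the FileSystem class that Flink
// resolves for the "s3" scheme. If flink-s3-fs-hadoop is picked up, the class
// should come from the org.apache.flink.fs.s3* packages; if the scheme falls
// back to HadoopFsFactory, you will see a Hadoop-based wrapper instead.
public class CheckS3Scheme {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new URI("s3://some-placeholder-bucket/"));
        System.out.println(fs.getClass().getName());
    }
}
```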

Could you share the DEBUG-level jobmanager/taskmanager logs so that we can
confirm whether the classpath and FileSystem are loaded correctly?
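
To get more detail in those logs, one possible approach (assuming the default
log4j.properties that ships with Flink 1.9) is to raise the log level for the
file system and plugin packages and then restart the cluster:

```
# conf/log4j.properties (Flink 1.9 uses log4j 1.x properties by default)
# DEBUG logging for file system resolution and plugin loading
log4j.logger.org.apache.flink.core.fs=DEBUG
log4j.logger.org.apache.flink.core.plugin=DEBUG
log4j.logger.org.apache.flink.fs=DEBUG
```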



Best,
Yang

Senthil Kumar <senthi...@vmware.com> wrote on Friday, January 17, 2020 at 10:57 PM:

> Hello all,
>
>
>
> Newbie here!
>
>
>
> We are running on Amazon EMR with the following installed via the EMR
> Software Configuration:
>
> Hadoop 2.8.5
>
> JupyterHub 1.0.0
>
> Ganglia 3.7.2
>
> Hive 2.3.6
>
> Flink 1.9.0
>
>
>
> I am trying to run a streaming job that writes from one S3 bucket into
> another S3 bucket using the StreamingFileSink.
>
>
>
> I got the infamous exception:
>
> Caused by: java.lang.UnsupportedOperationException: Recoverable writers on
> Hadoop are only supported for HDFS and for Hadoop version 2.7 or newer
>
>
>
> According to the Stack Overflow answer below, I needed to install
> flink-s3-fs-hadoop-1.9.0.jar in /usr/lib/flink/lib:
>
>
> https://stackoverflow.com/questions/55517566/amazon-emr-while-submitting-job-for-apache-flink-getting-error-with-hadoop-recov
>
>
>
> That did not work.
>
>
>
> Further googling revealed that, for Flink 1.9.0 and above (according to this):
>
> https://ci.apache.org/projects/flink/flink-docs-stable/ops/filesystems/
>
>
>
> it seems that I need to install the jar file in the plugins directory
> (/usr/lib/flink/plugins/s3-fs-hadoop)
>
>
>
> That did not work either.
>
>
>
> At this point, I am not sure what to do and would appreciate some help!
>
>
>
> Cheers
>
> Kumar
>
>
>
