Hi
Hmm, I tried again and it works now. The earlier failure was my own mistake. Thanks!
Thx

At 2020-08-10 13:36:36, "Yang Wang" <[email protected]> wrote:
>Did you build a new image yourself with flink-shaded-hadoop-2-uber-2.8.3-10.0.jar placed under lib?
>If so, this problem should not occur.
>
>Best,
>Yang
>
>RS <[email protected]> wrote on Mon, Aug 10, 2020 at 12:04 PM:
>
>> Hi,
>> I downloaded flink-shaded-hadoop-2-uber-2.8.3-10.0.jar, put it under lib, and restarted the cluster,
>> but submitting a job still fails with:
>> Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException:
>> Could not find a file system implementation for scheme 'hdfs'. The scheme
>> is not directly supported by Flink and no Hadoop file system to support
>> this scheme could be loaded. For a full list of supported file systems,
>> please see
>> https://ci.apache.org/projects/flink/flink-docs-stable/ops/filesystems/.
>> at
>> org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:491)
>> at org.apache.flink.core.fs.FileSystem.get(FileSystem.java:389)
>> at org.apache.flink.core.fs.Path.getFileSystem(Path.java:292)
>> at
>> org.apache.flink.runtime.state.filesystem.FsCheckpointStorage.<init>(FsCheckpointStorage.java:64)
>> at
>> org.apache.flink.runtime.state.filesystem.FsStateBackend.createCheckpointStorage(FsStateBackend.java:501)
>> at
>> org.apache.flink.contrib.streaming.state.RocksDBStateBackend.createCheckpointStorage(RocksDBStateBackend.java:465)
>> at
>> org.apache.flink.runtime.checkpoint.CheckpointCoordinator.<init>(CheckpointCoordinator.java:301)
>> ... 22 more
>> Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException:
>> Hadoop is not in the classpath/dependencies.
>> at
>> org.apache.flink.core.fs.UnsupportedSchemeFactory.create(UnsupportedSchemeFactory.java:58)
>> at
>> org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:487)
>> ... 28 more
>>
>>
>> These jars are under lib:
>> $ ls lib/
>> avro-1.8.2.jar
>> flink-avro-1.11.1-sql-jar.jar
>> flink-connector-jdbc_2.12-1.11.1.jar
>> flink-csv-1.11.1.jar
>> flink-dist_2.12-1.11.1.jar
>> flink-json-1.11.1.jar
>> flink-shaded-hadoop-2-uber-2.8.3-10.0.jar
>> flink-shaded-zookeeper-3.4.14.jar
>> flink-sql-connector-kafka_2.12-1.11.1.jar
>> flink-table_2.12-1.11.1.jar
>> flink-table-blink_2.12-1.11.1.jar
>> kafka-clients-2.5.0.jar
>> log4j-1.2-api-2.12.1.jar
>> log4j-api-2.12.1.jar
>> log4j-core-2.12.1.jar
>> log4j-slf4j-impl-2.12.1.jar
>> mysql-connector-java-5.1.49.jar
>>
>>
>>
>> At 2020-08-10 10:13:44, "Yang Wang" <[email protected]> wrote:
>> >Matt Wang is correct.
>> >
>> >The current Flink binary releases and official images do not bundle flink-shaded-hadoop, so you need to add
>> >a layer on top of the official image that puts flink-shaded-hadoop [1] into /opt/flink/lib:
>> >
>> >FROM flink
>> >COPY /path/of/flink-shaded-hadoop-2-uber-*.jar $FLINK_HOME/lib/
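As a sketch of how that two-line Dockerfile could be used (the image tag `my-flink-hadoop` and the library path are assumptions for illustration, not from the thread):

```shell
# Build a custom image that layers the shaded Hadoop jar on top of the
# official Flink image, using the Dockerfile shown above.
docker build -t my-flink-hadoop .

# Sanity-check that the jar actually landed in the image's lib directory
# (/opt/flink is the default FLINK_HOME in the official image).
docker run --rm my-flink-hadoop ls /opt/flink/lib | grep flink-shaded-hadoop
```

The wildcard in the COPY line picks up whatever flink-shaded-hadoop-2-uber version sits next to the Dockerfile, so the build context must contain exactly one such jar.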
>> >
>> >
>> >[1].
>> >
>> https://mvnrepository.com/artifact/org.apache.flink/flink-shaded-hadoop-2-uber
>> >
>> >
>> >Best,
>> >Yang
>> >
>> >Matt Wang <[email protected]> wrote on Fri, Aug 7, 2020 at 5:22 PM:
>> >
>> >> The official image only contains Flink itself. If you need to access HDFS, you have to bake the Hadoop jars and configuration into the image.
>> >>
>> >>
>> >> --
>> >>
>> >> Best,
>> >> Matt Wang
>> >>
>> >>
>> >> On Aug 7, 2020 at 12:49, caozhen <[email protected]> wrote:
>> >> While we're at it, here is the Hadoop integration wiki for Flink 1.11.1:
>> >>
>> >>
>> https://ci.apache.org/projects/flink/flink-docs-release-1.11/ops/deployment/hadoop.html
>> >>
>> >> According to the docs, flink-shaded-hadoop-2-uber is no longer provided, and two alternatives are given:
>> >>
>> >> 1. Recommended: load the Hadoop dependencies via HADOOP_CLASSPATH
>> >> 2. Or put the Hadoop dependencies into the Flink client's lib directory
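For option 1, the Flink 1.11 docs linked above point Flink at an existing Hadoop installation through HADOOP_CLASSPATH. A minimal sketch (the HADOOP_HOME path and the example job jar are assumptions):

```shell
# Option 1: make Hadoop visible to Flink via HADOOP_CLASSPATH.
# `hadoop classpath` prints the full Hadoop dependency classpath;
# /opt/hadoop is an assumed install location.
export HADOOP_HOME=/opt/hadoop
export HADOOP_CLASSPATH=$("$HADOOP_HOME/bin/hadoop" classpath)

# Start the cluster / submit the job from the same shell so the
# variable is inherited by the Flink processes.
./bin/start-cluster.sh
./bin/flink run ./examples/streaming/WordCount.jar
```

This avoids copying jars into lib entirely, which also sidesteps the class-conflict problem mentioned below for option 2.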
>> >>
>> >> When running Flink 1.11.1 on YARN, I used the second approach: I downloaded the hadoop-src package and
>> >> copied the commonly used dependencies into the lib directory. (This may cause class conflicts with your
>> >> main jar, which takes some debugging.)
>> >>
>> >> I don't think this approach is ideal; it only works around the problem. There should still be a
>> >> flink-shaded-hadoop package. I am trying to build one myself, but some issues remain unresolved.
>> >>
>> >>
>> >>
>> >> --
>> >> Sent from: http://apache-flink.147419.n8.nabble.com/
>>