UPDATE: I noticed that the job runs fine from IntelliJ IDEA, but packaging the fat jar and deploying it on the cluster causes the 'hdfs' scheme error!
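Since it works in the IDE but fails only as a fat jar, one common cause (an assumption here, not confirmed from the thread) is that Flink and Hadoop register filesystem implementations via META-INF/services files, and the maven-shade-plugin overwrites those service files when merging jars unless a ServicesResourceTransformer is configured. A sketch of the plugin configuration (the plugin version is illustrative):

```xml
<!-- Hypothetical maven-shade-plugin setup; the ServicesResourceTransformer
     concatenates META-INF/services entries from all dependencies so the
     HDFS FileSystem factory stays discoverable inside the fat jar. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>3.2.1</version>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
      <configuration>
        <transformers>
          <transformer implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer"/>
        </transformers>
      </configuration>
    </execution>
  </executions>
</plugin>
```

Separately, note that Flink 1.8 binaries no longer bundle Hadoop, so the cluster itself also needs Hadoop on the classpath (for example by exporting HADOOP_CLASSPATH from `hadoop classpath`) or a flink-shaded-hadoop artifact in Flink's lib directory.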
On Thu, May 9, 2019 at 2:43 AM Soheil Pourbafrani <soheil.i...@gmail.com> wrote:

> Hi,
>
> I used to read data from HDFS on Hadoop 2 by adding the following
> dependencies:
>
> <dependency>
>     <groupId>org.apache.flink</groupId>
>     <artifactId>flink-java</artifactId>
>     <version>1.4.0</version>
> </dependency>
> <dependency>
>     <groupId>org.apache.flink</groupId>
>     <artifactId>flink-streaming-java_2.11</artifactId>
>     <version>1.4.0</version>
> </dependency>
> <dependency>
>     <groupId>org.apache.flink</groupId>
>     <artifactId>flink-clients_2.11</artifactId>
>     <version>1.4.0</version>
> </dependency>
> <dependency>
>     <groupId>org.apache.flink</groupId>
>     <artifactId>flink-connector-filesystem_2.11</artifactId>
>     <version>1.4.0</version>
> </dependency>
> <dependency>
>     <groupId>org.apache.hadoop</groupId>
>     <artifactId>hadoop-client</artifactId>
>     <version>2.7.5</version>
> </dependency>
>
> But using Hadoop 3 and the following dependencies, I got the error:
>
>     could not find a filesystem implementation for scheme 'hdfs'
>
> <dependency>
>     <groupId>org.apache.flink</groupId>
>     <artifactId>flink-streaming-java_2.11</artifactId>
>     <version>1.8.0</version>
> </dependency>
> <dependency>
>     <groupId>org.apache.flink</groupId>
>     <artifactId>flink-clients_2.11</artifactId>
>     <version>1.8.0</version>
> </dependency>
> <dependency>
>     <groupId>org.apache.flink</groupId>
>     <artifactId>flink-connector-filesystem_2.11</artifactId>
>     <version>1.8.0</version>
> </dependency>
> <dependency>
>     <groupId>org.apache.flink</groupId>
>     <artifactId>flink-hadoop-fs</artifactId>
>     <version>1.8.0</version>
> </dependency>
> <dependency>
>     <groupId>org.apache.hadoop</groupId>
>     <artifactId>hadoop-client</artifactId>
>     <version>3.1.2</version>
> </dependency>
>
> How can I resolve that?