Hi Vinay,

Using the HADOOP_CLASSPATH environment variable on the client machine is the
recommended way to solve this problem.
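
Concretely, that means something like the following on the machine from which
the job is submitted (just a sketch; the exact output of "hadoop classpath"
and the flink invocation depend on your installation):

  # put the Hadoop jars (including the S3 filesystem classes) on the classpath
  export HADOOP_CLASSPATH=$(hadoop classpath)

  # then submit the Flink job as usual, e.g.
  ./bin/flink run <path-to-your-job-jar>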

I'll update the documentation accordingly.


On Wed, Mar 8, 2017 at 10:26 AM, vinay patil <vinay18.pa...@gmail.com>
wrote:

> Hi ,
>
> @Shannon - I am not facing any issue while writing to S3; I was getting
> NoClassDef errors when reading the file from S3.
>
> "Hadoop File System" - I mean I am using the FileSystem class of Hadoop to
> read the file from S3.
>
> @Stephan - I tried with 1.1.4; I was getting the same issue.
>
> The easiest way I found is to run the "hadoop classpath" command and use its
> output as the value of the exported HADOOP_CLASSPATH variable.
>
> This way we don't have to copy any S3-specific jars to the Flink lib folder.
>
