Hello Flink Team,

We at Fraunhofer IESE are evaluating Flink for a project, and at the moment I am a bit frustrated.

I have written a few test cases with the Flink API and want to deploy them to a
Flink EC2 cluster. I set up the cluster using the
Karamel recipe that is covered in the following video:

https://www.youtube.com/watch?v=m_SkhyMV0to

The setup works fine and the hello-flink app runs. Afterwards, however, I
want to copy some data from an S3 bucket to the local HDFS cluster on EC2.

Running hadoop fs -ls s3n://... works, as do cat and the other fs commands.
But when I try to copy the data with distcp, the command freezes and does
not respond until it times out.
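
For reference, these are roughly the commands I ran (bucket and path names
are placeholders):

    # listing and reading from the bucket works
    hadoop fs -ls s3n://my-bucket/data/
    hadoop fs -cat s3n://my-bucket/data/part-00000

    # this one freezes until it times out
    hadoop distcp s3n://my-bucket/data/ hdfs:///user/flink/data/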

After trying a few things I gave up and switched to another approach: I want
to access the S3 bucket directly from Flink and import the data with a
small Flink program that just reads from S3 and writes to the local HDFS. This
works fine locally, but on the cluster the S3NFileSystem class is missing
(ClassNotFoundException), although it is included in the jar file of the
installation.
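
The program is essentially just the following (a minimal sketch; bucket name
and paths are placeholders):

    import org.apache.flink.api.java.ExecutionEnvironment;

    public class S3ToHdfsImport {

        public static void main(String[] args) throws Exception {
            ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

            // read the raw lines from the S3 bucket ...
            env.readTextFile("s3n://my-bucket/data/")
               // ... and write them unchanged into the cluster-local HDFS
               .writeAsText("hdfs:///user/flink/data/");

            env.execute("S3 to HDFS import");
        }
    }

Locally this job runs through fine; on the cluster it fails with the
ClassNotFoundException mentioned above.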


I forked the Chef recipe and updated it to Flink 0.9.1, but the issue remains the same.
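
In case the configuration is relevant: as far as I understand, the s3n
filesystem is wired up in Hadoop's core-site.xml with entries roughly like
these (the key names are the stock Hadoop ones; the values here are
placeholders):

    <property>
      <name>fs.s3n.impl</name>
      <value>org.apache.hadoop.fs.s3native.NativeS3FileSystem</value>
    </property>
    <property>
      <name>fs.s3n.awsAccessKeyId</name>
      <value>MY_ACCESS_KEY</value>
    </property>
    <property>
      <name>fs.s3n.awsSecretAccessKey</name>
      <value>MY_SECRET_KEY</value>
    </property>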

Is there another simple script to install Flink with Hadoop on an EC2
cluster with a working s3n filesystem?



Freelancer

on behalf of Fraunhofer IESE Kaiserslautern

-- 

Best regards



Thomas Götzinger

Freelance Computer Scientist



Glockenstraße 2a

D-66882 Hütschenhausen OT Spesbach

Mobile: +49 (0)176 82180714

Homezone: +49 (0) 6371 735083

Private: +49 (0) 6371 954050

E-mail: m...@simplydevelop.de <thomas.goetzin...@kajukin.de>

epost: thomas.goetzin...@epost.de
