Try using s3n instead of s3
On 06/05/2014 21:19, "kamatsuoka" wrote:
> I have a Spark app that writes out a file, s3://mybucket/mydir/myfile.txt.
>
> Behind the scenes, the S3 driver creates a bunch of files like
> s3://mybucket//mydir/myfile.txt/part-, as well as the block files like
> s3
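The difference is only the URI scheme: s3:// is Hadoop's block-based S3 filesystem, which lays out block files like the ones described above, while s3n:// is the native one that keeps each part file as a plain S3 object. A minimal sketch of the switch, assuming an existing RDD named rdd (bucket and paths are hypothetical):

// Block-based filesystem: produces the block-file layout described above.
rdd.saveAsTextFile("s3://mybucket/mydir/myfile.txt")

// Native filesystem: each part file becomes one plain S3 object.
rdd.saveAsTextFile("s3n://mybucket/mydir/myfile.txt")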
I think I forgot to rsync the slaves with the new compiled jar; I will
give it a try as soon as possible.
On 04/05/2014 21:35, "Andre Kuhnen" wrote:
> I compiled spark with SPARK_HADOOP_VERSION=2.4.0 sbt/sbt assembly, fixed
> the s3 dependencies, but I am still getting
> (org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException):
> No lease on
> Any ideas?
> thanks
2014-05-04 11:53 GMT-03:00 Andre Kuhnen:
> Thanks Mayur, the only thing that my code is doing is:
>
> read from s3, and saveAsTextFile on hdfs. Like I said, everything is
> written correctly, but at the end of the job I get this warning.
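A minimal, self-contained sketch of the job as described (app name, bucket, and HDFS path are hypothetical):

import org.apache.spark.{SparkConf, SparkContext}

object S3ToHdfs {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("s3-to-hdfs"))
    // Read from S3 and write straight to HDFS; per the report above, the
    // output lands correctly and the warning appears only at the end.
    val lines = sc.textFile("s3n://mybucket/input")
    lines.saveAsTextFile("hdfs:///user/andre/output")
    sc.stop()
  }
}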
mappers/reduce partitioners?
>
>
> Mayur Rustagi
> Ph: +1 (760) 203 3257
> http://www.sigmoidanalytics.com
> @mayur_rustagi <https://twitter.com/mayur_rustagi>
>
>
>
> On Sun, May 4, 2014 at 5:30 PM, Andre Kuhnen wrote:
>
>> Please, can anyone give a feedback? WARN TaskSetManager: Loss was due
>> to org.apache.hadoop.ipc.RemoteException
>> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException)
>> thanks
2014-05-03 13:09 GMT-03:00 Andre Kuhnen:
> Hello, I am getting this warning after upgrading to Hadoop 2.4, when I try
> to write something to the HDFS.
Hello, I am getting this warning after upgrading to Hadoop 2.4, when I try to
write something to the HDFS. The content is written correctly, but I do
not like this warning.
Do I have to compile Spark with Hadoop 2.4?
WARN TaskSetManager: Loss was due to org.apache.hadoop.ipc.RemoteException
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException)
Hello,
I am trying to write multiple files with Spark, but I cannot find a way to
do it.
Here is the idea:
val rddKeyValue: RDD[(String, String)] = rddlines.map(line => createKeyValue(line))
now I would like to save this with each key as a file name and all the values
inside the file.
I tried to use this after the
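One commonly used approach for this (sketched here as an assumption, not something from the thread) is the old-API saveAsHadoopFile with a custom MultipleTextOutputFormat that names each output file after its key. It assumes rddKeyValue: RDD[(String, String)] as above and a hypothetical output directory:

import org.apache.hadoop.io.NullWritable
import org.apache.hadoop.mapred.lib.MultipleTextOutputFormat
import org.apache.spark.SparkContext._  // pulls in PairRDDFunctions

// Routes each (key, value) record to a file named after its key.
class KeyAsFileNameOutput extends MultipleTextOutputFormat[Any, Any] {
  override def generateFileNameForKeyValue(key: Any, value: Any, name: String): String =
    key.asInstanceOf[String]
  // Drop the key from the file body so only the values are written out.
  override def generateActualKey(key: Any, value: Any): Any =
    NullWritable.get()
}

rddKeyValue.saveAsHadoopFile("hdfs:///out/by-key",   // hypothetical path
  classOf[String], classOf[String], classOf[KeyAsFileNameOutput])

Each distinct key then becomes its own file under the output directory, so the keys have to be usable as file names.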