I think I forgot to rsync the slaves with the newly compiled jar; I will
give it a try as soon as possible.
On 04/05/2014 21:35, "Andre Kuhnen" wrote:
I compiled spark with SPARK_HADOOP_VERSION=2.4.0 sbt/sbt assembly, fixed
the s3 dependencies, but I am still getting the same error...
14/05/05 00:32:33 WARN TaskSetManager: Loss was due to
org.apache.hadoop.ipc.RemoteException
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.na
Thanks Mayur, the only thing that my code is doing is:
read from S3, and saveAsTextFile on HDFS. Like I said, everything is
written correctly, but at the end of the job there is this warning.
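Roughly, the job looks like this (just a sketch; the bucket and output path
below are placeholders, not the real ones):

import org.apache.spark.SparkContext

object S3ToHdfs {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext("spark://master:7077", "s3-to-hdfs")
    // Read text from S3 (placeholder bucket/prefix)...
    val lines = sc.textFile("s3n://my-bucket/input/*")
    // ...and write it straight out to HDFS (placeholder path).
    lines.saveAsTextFile("hdfs:///user/andre/output")
    sc.stop()
  }
}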
I will try to compile with hadoop 2.4
thanks
2014-05-04 11:17 GMT-03:00 Mayur Rustagi :
You should compile Spark with every Hadoop version you use. I am surprised
it's working otherwise, as HDFS breaks compatibility quite often.
As for this error, it comes when your code writes/reads from a file that has
already been deleted. Are you trying to update a single file in multiple
mappers/reduce partitions?
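For illustration only (made-up names and paths), here is a sketch of the
pattern to avoid versus the usual one:

import org.apache.spark.SparkContext

object OutputPattern {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext("local[*]", "output-pattern")
    val data = sc.parallelize(1 to 100)

    // Risky: many tasks opening/rewriting the same HDFS file can race with
    // deletes and surface as a RemoteException from the NameNode.
    // data.foreachPartition { _ => /* rewrite hdfs://.../one-shared-file */ }

    // Usual: give saveAsTextFile a fresh directory; each task writes its own
    // part-NNNNN file, so no two tasks ever update the same file.
    data.map(_.toString).saveAsTextFile("hdfs:///tmp/run-" + System.currentTimeMillis)

    sc.stop()
  }
}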
Please, can anyone give some feedback? Thanks
Hello, I am getting this warning after upgrading to Hadoop 2.4, when I try to
write something to HDFS. The content is written correctly, but I do
not like this warning.
Do I have to compile Spark with Hadoop 2.4?
WARN TaskSetManager: Loss was due to org.apache.hadoop.ipc.RemoteException
org.ap