Thanks Mayur, the only thing my code is doing is:

read from S3, and saveAsTextFile on HDFS. Like I said, everything is
written correctly, but at the end of the job there is this warning.
I will try to compile with Hadoop 2.4.
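
For reference, the whole job is essentially the sketch below (the bucket
and output path here are made up, not the real ones):

    import org.apache.spark.{SparkConf, SparkContext}

    object S3ToHdfs {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("s3-to-hdfs"))
        // read the input from S3 (illustrative bucket name)
        val lines = sc.textFile("s3n://some-bucket/input/")
        // write it straight back out to HDFS; the warning appears at the
        // end of this stage, even though the output is complete
        lines.saveAsTextFile("hdfs:///user/andre/output/")
        sc.stop()
      }
    }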
thanks

2014-05-04 11:17 GMT-03:00 Mayur Rustagi <mayur.rust...@gmail.com>:

> You should compile Spark against every Hadoop version you use. I am
> surprised it's working otherwise, as HDFS breaks compatibility quite often.
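>
> For example, with the sbt build that would be something like (going from
> the Spark build docs of that era; adjust to your checkout):
>
>     SPARK_HADOOP_VERSION=2.4.0 sbt/sbt assembly
>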
> As for this error, it comes up when your code writes to or reads from a
> file that has already been deleted. Are you trying to update a single
> file from multiple mappers/reducers?
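>
> To illustrate the kind of conflict I mean (a hypothetical snippet against
> the raw HDFS client API, not your code): if a second writer re-creates a
> file that a first writer still has open, the NameNode revokes the first
> writer's lease, and the first writer's next write fails with
> LeaseExpiredException.
>
>     import org.apache.hadoop.conf.Configuration
>     import org.apache.hadoop.fs.{FileSystem, Path}
>
>     val fs  = FileSystem.get(new Configuration())
>     val out = fs.create(new Path("/tmp/shared"))   // writer A holds the lease
>     // ... meanwhile writer B calls fs.create(new Path("/tmp/shared"), true),
>     // which deletes the file and takes over the lease ...
>     out.writeBytes("data")                         // writer A's lease is gone
>     out.close()                                    // -> LeaseExpiredException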
>
>
> Mayur Rustagi
> Ph: +1 (760) 203 3257
> http://www.sigmoidanalytics.com
> @mayur_rustagi <https://twitter.com/mayur_rustagi>
>
>
>
> On Sun, May 4, 2014 at 5:30 PM, Andre Kuhnen <andrekuh...@gmail.com> wrote:
>
>> Please, can anyone give some feedback? Thanks.
>>
>> Hello, I am getting this warning after upgrading to Hadoop 2.4, when I
>> try to write something to HDFS. The content is written correctly, but I
>> do not like this warning.
>>
>> Do I have to compile Spark with Hadoop 2.4?
>>
>> WARN TaskSetManager: Loss was due to org.apache.hadoop.ipc.RemoteException
>>
>> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException)
>>
>> thanks
>>
>
