Hi Jeff Zhang,

Thanks for the response. Could you explain why this error occurs?
On Fri, Jun 3, 2016 at 6:15 PM, Jeff Zhang <zjf...@gmail.com> wrote:

> One quick solution is to use spark 1.6.1.
>
> On Fri, Jun 3, 2016 at 8:35 PM, kishore kumar <akishore...@gmail.com> wrote:
>
>> Could anyone help me on this issue ?
>>
>> On Tue, May 31, 2016 at 8:00 PM, kishore kumar <akishore...@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> We installed spark1.2.1 in single node, running a job in yarn-client
>>> mode on yarn which loads data into hbase and elasticsearch,
>>>
>>> the error which we are encountering is
>>> Exception in thread "main" org.apache.spark.SparkException: Job aborted
>>> due to stage failure: Task 38 in stage 26800.0 failed 4 times, most recent
>>> failure: Lost task 38.3 in stage 26800.0 (TID 4990082, hdprd-c01-r04-03):
>>> java.io.FileNotFoundException:
>>> /opt/mapr/tmp/hadoop-tmp/hadoop-mapr/nm-local-dir/usercache/sparkuser/appcache/application_1463194314221_211370/spark-3cc37dc7-fa3c-4b98-aa60-0acdfc79c725/28/shuffle_8553_38_0.index
>>> (No such file or directory)
>>>
>>> any idea about this error ?
>>> --
>>> Thanks,
>>> Kishore.
>>
>> --
>> Thanks,
>> Kishore.
>
> --
> Best Regards
>
> Jeff Zhang

--
Thanks,
Kishore.
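For reference, below is a minimal Scala sketch of how a job like the one described in the thread (yarn-client mode, loading into HBase and Elasticsearch) might be set up. It is not the original job: the app name, the memory-overhead value, and the spark.shuffle.service.enabled setting are illustrative assumptions sometimes discussed when shuffle index files go missing from the NodeManager appcache, not a confirmed fix for this error.

    import org.apache.spark.{SparkConf, SparkContext}

    // A sketch only; configuration values are assumptions, not the poster's setup.
    object LoadJobSketch {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf()
          .setMaster("yarn-client")                      // yarn-client mode, as in the thread
          .setAppName("hbase-es-load")                   // hypothetical application name
          .set("spark.shuffle.service.enabled", "true")  // keep shuffle files served by the
                                                         // external shuffle service (assumption)
          .set("spark.yarn.executor.memoryOverhead", "1024") // extra headroom so YARN is less
                                                             // likely to kill executors (assumption)
        val sc = new SparkContext(conf)
        try {
          // ... load data into HBase and Elasticsearch, as in the original job ...
        } finally {
          sc.stop()
        }
      }
    }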