Hi, I am using Hadoop 2.4.0. It is not frequent; it only happens sometimes. I don't think my Spark logic has any problem, because if the logic were wrong it would be failing every day. Mostly I see that YARN killed the executors, so I see "executor lost" messages in my driver logs.
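For reference, below is a minimal sketch (not from the original thread; the property values and executor counts are purely illustrative assumptions) of the Spark-on-YARN settings that are typically adjusted when YARN kills executor containers for exceeding memory limits, or when dynamic allocation makes the executor count drift above what was requested:

```scala
// Hypothetical tuning sketch for Spark 1.6 on YARN; all values are placeholders.
import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("yarn-executor-tuning-sketch")
  // Executor heap size; the YARN container must also fit the off-heap overhead.
  .set("spark.executor.memory", "4g")
  // Off-heap headroom per executor container (MB). Raising this is the usual
  // fix when YARN kills containers for running beyond physical memory limits.
  .set("spark.yarn.executor.memoryOverhead", "1024")
  // Pin the executor count; with dynamic allocation enabled, the number of
  // executors can grow above the initial request, which can look like "extra"
  // executors appearing in the UI.
  .set("spark.dynamicAllocation.enabled", "false")
  .set("spark.executor.instances", "10")

val sc = new SparkContext(conf)
```

Whether these particular settings apply here depends on what the YARN NodeManager logs show when the containers are killed.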
On Thu, Feb 25, 2016 at 10:30 PM, Yin Yang <yy201...@gmail.com> wrote:

> Which release of Hadoop are you using?
>
> Can you share a bit about the logic of your job?
>
> Pastebinning a portion of the relevant logs would give us more clues.
>
> Thanks
>
> On Thu, Feb 25, 2016 at 8:54 AM, unk1102 <umesh.ka...@gmail.com> wrote:
>
>> Hi, I have a Spark job which I run on YARN, and sometimes it behaves in a
>> weird manner: it shows a negative number of tasks in a few executors, and
>> I keep losing executors. I also see that the number of executors is more
>> than I requested. My job is highly tuned and does not get OOM or any other
>> problem. It is just that YARN sometimes behaves in such a way that
>> executors keep getting killed because of a resource crunch. Please guide
>> me on how to keep YARN from behaving badly.
>>
>> --
>> View this message in context:
>> http://apache-spark-user-list.1001560.n3.nabble.com/Spark-1-6-0-running-jobs-in-yarn-shows-negative-no-of-tasks-in-executor-tp26337.html
>> Sent from the Apache Spark User List mailing list archive at Nabble.com.