Which release of Hadoop are you using?

Can you share a bit about the logic of your job?

Pastebinning the relevant portion of your logs would give us more clues.
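
In the meantime, one thing worth checking while you gather the logs: on YARN, executors are commonly killed for exceeding the container's physical memory limit, and a fluctuating executor count usually means dynamic allocation is enabled. Below is a minimal sketch of the relevant Spark 1.6 settings; the values are illustrative placeholders, not tuned recommendations:

    // Sketch only: Spark 1.6 on YARN settings to inspect if containers
    // are being killed for exceeding their memory limit.
    import org.apache.spark.{SparkConf, SparkContext}

    val conf = new SparkConf()
      .setAppName("my-job") // placeholder name
      // Off-heap headroom per executor container, in MB. YARN kills the
      // container once executor memory plus this overhead is exceeded;
      // the 1.6 default is max(384, 0.10 * spark.executor.memory).
      .set("spark.yarn.executor.memoryOverhead", "1024")
      // When true, the executor count grows and shrinks at runtime,
      // which can look like "more executors than I requested".
      .set("spark.dynamicAllocation.enabled", "false")

    val sc = new SparkContext(conf)

The same properties can also be passed with --conf flags to spark-submit.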

Thanks

On Thu, Feb 25, 2016 at 8:54 AM, unk1102 <umesh.ka...@gmail.com> wrote:

> Hi, I have a Spark job which I run on YARN, and sometimes it behaves in a
> weird manner: it shows a negative number of tasks in a few executors, and
> I keep losing executors. I also see that the number of executors is more
> than I requested. My job is highly tuned and is not hitting OOM or any
> other problem. It is just that YARN sometimes behaves in a way that
> executors keep getting killed because of a resource crunch. Please guide
> me on how to stop YARN from behaving badly.
>
>
>
> --
> View this message in context:
> http://apache-spark-user-list.1001560.n3.nabble.com/Spark-1-6-0-running-jobs-in-yarn-shows-negative-no-of-tasks-in-executor-tp26337.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
> For additional commands, e-mail: user-h...@spark.apache.org
>
>
