Hi,
I posted my question on stackoverflow.com
(https://stackoverflow.com/questions/48445145/spark-standalone-mode-application-runs-but-executor-is-killed-with-exitstatus),
but it is yet to be answered, so I thought I would try the user group.

I am new to Apache Spark and was trying to run the example Pi
calculation application on my local Spark setup (a standalone cluster).
The Master, the Slave, and the Driver all run on my local machine.
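
For context, the job is the stock Pi example that ships with Spark; its
core is essentially the following (paraphrased from the bundled SparkPi
example and trimmed for the email; the exact code I ran is in the gist
linked below):

    import scala.math.random
    import org.apache.spark.sql.SparkSession

    object SparkPi {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder.appName("Spark Pi").getOrCreate()
        val slices = if (args.length > 0) args(0).toInt else 2
        val n = math.min(100000L * slices, Int.MaxValue).toInt // avoid overflow
        // Throw n random darts at the unit square and count how many land
        // inside the unit circle; the ratio approximates pi / 4.
        val count = spark.sparkContext.parallelize(1 until n, slices).map { _ =>
          val x = random * 2 - 1
          val y = random * 2 - 1
          if (x * x + y * y <= 1) 1 else 0
        }.reduce(_ + _)
        println(s"Pi is roughly ${4.0 * count / (n - 1)}")
        spark.stop()
      }
    }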

What I am noticing is that Pi is calculated successfully, but in the
slave logs I see that the Worker/Executor is killed with exitStatus 1.
I do not see any errors or exceptions logged to the console otherwise.
I tried to find help on similar issues, but most of the search hits
refer to exitStatus 137 and the like (e.g. "Spark application kills
executor",
https://stackoverflow.com/questions/40910952/spark-application-kills-executor).

I have failed to understand why the Worker is killed instead of
finishing in the EXITED state. I suspect it is related to how I am
executing the app, but I am not quite clear on what I am doing wrong.

The code and logs are available at
https://gist.github.com/Chandu/a83c13c045f1d1b480d8839e145b2749
(trying to keep the email content short).

I wanted to check my assumption: should an Executor end in the EXITED
state when the execution completes without errors, or is it always
marked KILLED when a Spark job finishes?

I tried to understand the flow by looking at the source code, and with
my limited understanding of it I concluded that the Executor would
always end up in the KILLED state (most likely my conclusion is wrong),
based on the code at
https://github.com/apache/spark/blob/39e2bad6a866d27c3ca594d15e574a1da3ee84cc/core/src/main/scala/org/apache/spark/deploy/worker/ExecutorRunner.scala#L118
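
For anyone who prefers not to open the link, the flow I am referring to
looks roughly like this (paraphrased from memory, not an exact quote of
ExecutorRunner.scala, so the details may be slightly off):

    // Normal completion path in fetchAndRunExecutor(): wait for the
    // executor JVM to exit and report EXITED to the Worker.
    val exitCode = process.waitFor()
    state = ExecutorState.EXITED
    val message = "Command exited with code " + exitCode
    worker.send(ExecutorStateChanged(appId, execId, state, Some(message), Some(exitCode)))

    // But kill() interrupts the runner thread, and the resulting
    // InterruptedException handler forces the state to KILLED instead:
    case interrupted: InterruptedException =>
      logInfo("Runner thread for executor " + fullId + " interrupted")
      state = ExecutorState.KILLED
      killProcess(None)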

Can someone help me identify the root cause of this issue, or correct
me if my assumption that the Executor should end in the EXITED state is
wrong?
