Hi,

A simple way to reproduce the problem:

I have two server installations, one with Spark 1.3.1 and one with Spark
1.4.0.

I ran the following on both servers:

[root@ip-172-31-6-108 ~]$ spark/bin/spark-shell --total-executor-cores 1


scala> val text = sc.textFile("hdfs:///some-file.txt")

scala> text.count()

Here I get the correct output on both servers.

At this stage, checking the Spark UI, both applications are marked as RUNNING.

Now I exit the spark-shell (using Ctrl+D). If I check the Spark UI again,
the application on 1.3.1 is marked as EXITED, while the application on 1.4.0
is marked as KILLED.
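(For reference, the standalone master also exposes the same application states as JSON at `http://<master>:8080/json`, which makes the final state easy to inspect programmatically. Below is a minimal sketch of pulling states out of such a response; the sample payload is illustrative, not captured from my servers:)

```python
import json

# Illustrative payload in the shape returned by the standalone
# master's /json endpoint. The application ids and states here are
# made up for demonstration.
payload = json.loads("""
{
  "completedapps": [
    {"id": "app-20150612000000-0001", "name": "Spark shell", "state": "KILLED"},
    {"id": "app-20150611000000-0000", "name": "Spark shell", "state": "FINISHED"}
  ]
}
""")

# Map each completed application id to its final state.
states = {app["id"]: app["state"] for app in payload["completedapps"]}
print(states)
```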

Why is the job marked as KILLED?

This is a simple case that reproduces a problem on a real server I'm running.

Thanks,
nizan



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/Job-marked-as-killed-in-spark-1-4-tp23305p23311.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
