hi,

I have a running, working cluster on Spark 1.3.1, and I installed a new
cluster running Spark 1.4.0.

I ran a job on the new 1.4.0 cluster and the same job on the old 1.3.1
cluster.

After the job finished (on both clusters), I opened the job's page in the UI.
In the new 1.4.0 cluster, the workers are marked as KILLED (I didn't kill
them, and everywhere I checked, the logs and output seem fine):

ID  Worker                                      Cores  Memory (MB)  State   Logs
2   worker-20150613111158-172.31.0.104-37240    4      10240        KILLED  stdout stderr
1   worker-20150613111158-172.31.15.149-58710   4      10240        KILLED  stdout stderr
3   worker-20150613111158-172.31.0.196-52939    4      10240        KILLED  stdout stderr
0   worker-20150613111158-172.31.1.233-53467    4      10240        KILLED  stdout stderr

In the old 1.3.1 cluster, the workers are marked as EXITED:

ID  Worker                                                                    Cores  Memory (MB)  State   Logs
1   worker-20150608115639-ip-172-31-6-134.us-west-2.compute.internal-47572    2      10240        EXITED  stdout stderr
0   worker-20150608115639-ip-172-31-4-169.us-west-2.compute.internal-41828    2      10240        EXITED  stdout stderr
2   worker-20150608115640-ip-172-31-0-37.us-west-2.compute.internal-32847     1      10240        EXITED  stdout stderr

Another thing (which I think is related) is that the history server is not
working, even though I can see the logs on S3. Again, I didn't kill the jobs
on the 1.4.0 cluster; the output seems OK and the logs on S3 seem fine.
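
In case it helps, by "the logs on S3" I mean the usual event-log setup, with
the history server pointed at the same directory. Roughly like the following
sketch (the bucket path is only a placeholder, not my real one):

    # spark-defaults.conf (sketch, placeholder bucket path)
    spark.eventLog.enabled           true
    spark.eventLog.dir               s3n://<bucket>/spark-events
    spark.history.fs.logDirectory    s3n://<bucket>/spark-events

The history server itself is started with sbin/start-history-server.sh on the
master.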

Does anybody have any idea what is wrong here, with the jobs marked as KILLED
and with the history server?

thanks, nizan


