I am curious about Spark's failover behavior: if an executor goes down, that
means its JVM has crashed. The AM will restart the executor, but what happens
to the RDD data that was in that JVM? If I didn't persist the RDD, will Spark
recompute the lost RDD partitions, or are they simply lost? The Spark site
says: "Each RDD remembers the lineage of deterministic operations that were
used on a fault-tolerant input dataset to create it."
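
To make the question concrete, here is a minimal sketch of the kind of job I
mean (the input path and operations are just examples I made up, not my real
job). Nothing is persisted, so I am asking whether the partitions held by a
crashed executor would be rebuilt from this lineage:

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.storage.StorageLevel

    object LineageExample {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf().setAppName("lineage-example")
        val sc   = new SparkContext(conf)

        // Base RDD built from a fault-tolerant input (example path)
        val lines  = sc.textFile("hdfs:///data/events.log")
        val errors = lines.filter(_.contains("ERROR"))
        val pairs  = errors.map(line => (line.split(",")(0), 1))
        val counts = pairs.reduceByKey(_ + _)

        // Prints the lineage (chain of dependencies) for this RDD
        println(counts.toDebugString)

        // Not persisted: if an executor holding computed partitions dies,
        // are the lost partitions recomputed from the lineage above?
        // Versus explicitly caching, e.g.:
        // counts.persist(StorageLevel.MEMORY_AND_DISK)

        counts.count()
        sc.stop()
      }
    }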

thanks in advance



