Hi,
I couldn't find any details regarding this recovery mechanism - could
someone please shed some light on this?
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Spark-recovery-from-bad-nodes-tp4505p4576.html
Sent from the Apache Spark User List mailing list.
Hi, I am unable to see how Shark (and eventually Spark) can recover from a bad
node in the cluster. One of my EC2 clusters with 50 nodes ended up with a
single node suffering datanode corruption, and I see the following error when
trying to load a simple file into memory using CTAS:
org.apache.hadoop.
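For context, the load was done using Shark's cached-table convention, where a CREATE TABLE AS SELECT whose target name ends in `_cached` is kept in cluster memory. A minimal sketch with hypothetical table names (not the poster's actual query):

```sql
-- Hypothetical example: Shark caches tables whose names end in "_cached".
-- "logs" here stands in for whatever source table the file was loaded into.
CREATE TABLE logs_cached AS SELECT * FROM logs;
```

The read of the underlying HDFS blocks during this scan is where a corrupt datanode would surface as the (truncated) org.apache.hadoop.* exception above.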