Hi,

Thanks for the answer.

Regarding 2 and 3, that is indeed a solution, but as I mentioned in my question, I
could just as well do input checks (using .map) before applying any other RDD
operations. I still think that is unnecessary overhead.

Regarding 1, this would make all the other RDD operations more complex, as I
would probably need to wrap everything in mapPartitions(), as sketched below.
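I.e. something like this (again just a sketch, using the same sc as above, with
my_logic() standing in for the real per-record computation):

    def my_logic(rec):
        # Placeholder for whatever the real per-record computation is.
        return int(rec) * 2

    def guarded_partition(records):
        # Wrap the per-partition work so exceptions are caught on the worker
        # and come back as data rather than as task failures.
        for rec in records:
            try:
                yield (True, my_logic(rec))
            except Exception as e:
                yield (False, (rec, str(e)))

    result = sc.textFile("input.txt").mapPartitions(guarded_partition)

So every transformation ends up hidden behind a wrapper like guarded_partition().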

So back to my original question: why not simply allow exceptions to be caught
directly as the result of RDD operations? Why can't Spark catch exceptions
on the different workers and return them to the driver, where the
developer can catch them and decide on the next step?

 


