Thanks for pointing this out, Nicholas.
This SPIP seems focused on the Scala side, grouping the exception handling
and providing some guidance about error messages.
Yes, I think we can refer to it on the PySpark side. I will probably follow
up and file some JIRAs based on how this SPIP goes.
+1 to fail fast. Thanks for reporting this, Jungtaek!
On Mon, Oct 26, 2020 at 8:36 AM Jungtaek Lim wrote:
> Yeah I'm in favor of fast-fail if things are not working out as end users
> intended. Spark should only fall back when it makes no functional
> difference, only some difference in performance.