Hi Tim,
I think you can try setting the option *spark.sql.files.ignoreCorruptFiles* to
*true*. With the option enabled, Spark jobs will continue to run when they
encounter corrupted files, and the contents that have been read will still be
returned.
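For reference, a minimal sketch of that setup in Scala; the path, format, and
app name below are placeholders, and the Avro read assumes the spark-avro
package is on the classpath:

  import org.apache.spark.sql.SparkSession

  val spark = SparkSession.builder()
    .appName("ignore-corrupt-files-sketch")
    .getOrCreate()

  // Skip unreadable files instead of failing the whole job.
  spark.conf.set("spark.sql.files.ignoreCorruptFiles", "true")

  // Hypothetical path; rows already read from a file before the
  // corruption is hit are still returned.
  val df = spark.read.format("avro").load("/data/events/*.avro")
  df.count()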
The CSV/JSON data sources also support the *PERMISSIVE* parse mode, which sets
malformed fields to null and preserves the raw record in a corrupt-record
column instead of failing the query.
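A hedged sketch of what that looks like for a CSV read; the schema, column
names, and path here are hypothetical:

  import org.apache.spark.sql.SparkSession
  import org.apache.spark.sql.types.{IntegerType, StringType, StructType}

  val spark = SparkSession.builder()
    .appName("permissive-mode-sketch")
    .getOrCreate()

  // The extra column is where PERMISSIVE mode parks rows it cannot parse.
  val schema = new StructType()
    .add("id", IntegerType)
    .add("name", StringType)
    .add("_corrupt_record", StringType)

  val df = spark.read
    .schema(schema)
    .option("mode", "PERMISSIVE") // also the default for CSV/JSON
    .option("columnNameOfCorruptRecord", "_corrupt_record")
    .csv("/data/people/*.csv")

  // Malformed rows come back with null data columns and the raw line
  // kept in _corrupt_record, rather than failing the whole read.
  df.show(false)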
/facepalm
Here we go: https://issues.apache.org/jira/browse/SPARK-27093
Tim
Thanks Xiao, it's good to have that validated.
I've created a ticket here: https://issues.apache.org/jira/browse/AVRO-2342