Okay, found the root cause. Our k8s image got some changes, including a
mess with some jar dependencies around com.fasterxml.jackson ...
Sorry for the inconvenience.
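For anyone hitting a similar class-initialization failure: one quick way to spot this kind of jar mess is to scan the image's jars directory for artifacts present in more than one version. A minimal sketch (the filenames below are made up for illustration; point it at something like $SPARK_HOME/jars in practice):

```python
import re
from collections import defaultdict

def find_version_conflicts(jar_names):
    """Group jar filenames by artifact and flag artifacts with >1 version."""
    by_artifact = defaultdict(set)
    for name in jar_names:
        m = re.match(r"(.+?)-(\d[\w.]*)\.jar$", name)
        if m:
            by_artifact[m.group(1)].add(m.group(2))
    return {a: sorted(v) for a, v in by_artifact.items() if len(v) > 1}

# Two copies of jackson-databind on the classpath would be flagged:
jars = [
    "jackson-core-2.12.3.jar",
    "jackson-databind-2.12.3.jar",
    "jackson-databind-2.10.0.jar",
]
print(find_version_conflicts(jars))
```

Mixed Jackson versions like this are a classic cause of NoClassDefFoundError / ExceptionInInitializerError inside executors.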
Some earlier log in the driver contained that info...
[2022-03-09 21:54:25,163] ({task-result-getter-3} Logging.scala[logWarning
The full trace doesn't provide any further details. It looks like this:
Py4JJavaError: An error occurred while calling o337.showString. :
org.apache.spark.SparkException: Job aborted due to stage failure: Task 1
in stage 18.0 failed 4 times, most recent failure: Lost task 1.3 in stage
18.0 (TID 220) (
Doesn't quite seem the same. What is the rest of the error -- why did the
class fail to initialize?
On Wed, Mar 9, 2022 at 10:08 AM Andreas Weise wrote:
> Hi,
>
> When playing around with spark.dynamicAllocation.enabled I face the
> following error after the first round of executors have been ki
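For context, a dynamic allocation setup like the one being tested typically looks roughly like this (a sketch, assuming Spark 3.x on Kubernetes with shuffle tracking enabled, since there is no external shuffle service there; `app.py` is a placeholder):

```shell
spark-submit \
  --conf spark.dynamicAllocation.enabled=true \
  --conf spark.dynamicAllocation.shuffleTracking.enabled=true \
  --conf spark.dynamicAllocation.minExecutors=1 \
  --conf spark.dynamicAllocation.maxExecutors=4 \
  --conf spark.dynamicAllocation.executorIdleTimeout=60s \
  app.py
```

With these settings, idle executors are released after the timeout, which is why errors like the one above tend to surface only after the first round of executors goes away.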