Thanks Ted. My concern is how to avoid this kind of user error on a
production cluster. It would be better if Spark handled this itself instead of
creating a new executor every second that fails, overloading the Spark
Master. Shall I file a Spark JIRA to handle this?
Thanks,
Prabhu Joseph
On Mo
I haven't found a config knob for controlling the retry count after a brief
search.
According to
http://www.oracle.com/technetwork/articles/java/g1gc-1984535.html , the
default value for -XX:ParallelGCThreads seems to be 8.
This seems to explain why you got the VM initialization error.
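For illustration, here is a sketch of how a malformed JVM flag passed via
spark.executor.extraJavaOptions can abort the executor JVM during
initialization, before any Spark code runs. The master, cluster URL, class,
and the specific bad flag below are hypothetical, not taken from the original
report:

```shell
# Hypothetical repro: an unrecognized -XX flag makes the executor JVM
# fail at startup ("Unrecognized VM option ..."), so the executor exits
# immediately and the Spark Master schedules a replacement, repeating
# the failure loop described in this thread.
spark-submit \
  --master spark://master:7077 \
  --conf "spark.executor.extraJavaOptions=-XX:+NoSuchOption" \
  --class org.example.App \
  app.jar
```

You can check a candidate option set locally before submitting by running
`java <options> -version`, which exits non-zero if the JVM cannot initialize
with those flags.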
FYI
On Mon, Feb
Hi All,
When a Spark job (Spark-1.5.2) is submitted with a single executor and the
user passes wrong JVM arguments via spark.executor.extraJavaOptions,
the first executor fails. But the job keeps retrying, creating a new
executor and failing every time, until CTRL-C is pressed. Do we