On 24/01/2017 at 02:43, Matthew Dailey wrote:
In general, Java processes fail with an OutOfMemoryError when your code
and data do not fit into the memory allocated to the runtime. In
Spark, that memory is controlled through the --executor-memory flag.
If you are running Spark on YARN, then YARN also enforces its own
per-container memory limit on top of that: the executor heap plus
spark.yarn.executor.memoryOverhead has to fit inside the container YARN
allocates, or the container gets killed.
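
As a rough sketch of how those settings end up on the launch command
(the sizes are placeholders rather than recommendations, and the
overhead setting assumes a Spark-on-YARN deployment of that era):

    spark-shell --master yarn \
      --executor-memory 4g \
      --conf spark.yarn.executor.memoryOverhead=1024   # extra off-heap headroom YARN accounts for, in MB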
Hello everybody,
being quite new to Spark, I am struggling a lot with OutOfMemoryErrors
and "GC overhead limit exceeded" failures in my jobs, which I submit
from a spark-shell started with --master yarn.
Playing with --num-executors, --executor-memory and --executor-cores I
occasionally get something to finish.
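
For reference, the values that actually took effect can be checked from
inside the spark-shell (a minimal sketch using the standard config keys
behind those three flags):

    scala> sc.getConf.getOption("spark.executor.memory")      // set by --executor-memory
    scala> sc.getConf.getOption("spark.executor.cores")       // set by --executor-cores
    scala> sc.getConf.getOption("spark.executor.instances")   // set by --num-executors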