Hi Spark Users,
I am running some Spark jobs that run every hour. After running for 12
hours, the master gets killed with the exception
*java.lang.OutOfMemoryError: GC overhead limit exceeded*
It looks like there is a memory issue in the Spark master.
I noticed the same kind of issue with sp
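For context, "GC overhead limit exceeded" means the JVM is spending nearly
all of its time in GC, and the standalone master holds per-application UI
state on its own heap, which grows as applications accumulate. Both the
daemon heap size and the amount of retained state are configurable; a
minimal sketch of the relevant settings, assuming a standalone master (the
values are illustrative, not tuned):

  # conf/spark-env.sh -- raise the heap of the standalone daemons (default 1g)
  export SPARK_DAEMON_MEMORY=2g

  # conf/spark-defaults.conf -- cap how many finished applications/drivers
  # the master retains in memory for its web UI (both default to 200)
  spark.deploy.retainedApplications  50
  spark.deploy.retainedDrivers       50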
Hi,
We have an application that submits several thousand jobs within the same
SparkContext, using a thread pool to run about 50 in parallel. We're
running on YARN with Spark 1.4.1 and seeing a problem where our driver is
killed by YARN for running beyond physical memory limits (no Java OOM
stack trace)
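For concreteness, a minimal sketch of the pattern described above: many
short jobs submitted from one SparkContext through a fixed thread pool
(the job body, job count, pool size, and app name are placeholders, not
our actual code):

  import java.util.concurrent.Executors
  import scala.concurrent.{Await, ExecutionContext, Future}
  import scala.concurrent.duration.Duration
  import org.apache.spark.{SparkConf, SparkContext}

  object ManyJobs {
    def main(args: Array[String]): Unit = {
      val sc = new SparkContext(new SparkConf().setAppName("many-jobs"))
      // Fixed pool: at most 50 jobs in flight at once in this one context.
      implicit val ec: ExecutionContext =
        ExecutionContext.fromExecutorService(Executors.newFixedThreadPool(50))
      // Several thousand independent jobs, each one action on the context.
      val jobs = (1 to 5000).map { i =>
        Future { sc.parallelize(1 to 100000).map(_.toLong * i).sum() }
      }
      jobs.foreach(Await.result(_, Duration.Inf))
      sc.stop()
    }
  }

Since YARN kills the container without any Java OOM being thrown, the
overage is presumably outside the JVM heap, which is the part of the
container that spark.yarn.driver.memoryOverhead accounts for in 1.4.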