This Stack Overflow question may help:
https://stackoverflow.com/questions/42641573/why-does-memory-usage-of-spark-worker-increases-with-time/42642233#42642233
I have had similar issues with some of my Spark jobs, especially when doing things like repartitioning.

From the Spark on YARN configuration docs: spark.yarn.driver.memoryOverhead (default: driverMemory * 0.10, with a minimum of 384) is the amount of off-heap memory (in megabytes) to be allocated per driver in cluster mode. This is memory that accounts for things like VM overheads, interned strings, and other native overheads.
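Not from the original thread, just an illustration: a minimal sketch of how those overhead settings could be raised when building the streaming context. The property names are the ones from the Spark 1.x/2.x YARN docs quoted above; the 1024 MB values and the app name are placeholders, not recommendations.
```
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

// Placeholder values: tune per workload. The overhead defaults to 10% of the
// driver/executor memory with a 384 MB floor, as described above.
val conf = new SparkConf()
  .setAppName("streaming-memory-overhead-sketch")
  .set("spark.yarn.driver.memoryOverhead", "1024")    // off-heap MB for the driver (cluster mode)
  .set("spark.yarn.executor.memoryOverhead", "1024")  // off-heap MB per executor

val ssc = new StreamingContext(conf, Seconds(10))
```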
I added this code in the foreachRDD block:
```
rdd.persist(StorageLevel.MEMORY_AND_DISK)  // cache the batch, spilling to disk when memory is tight
```
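For context, a minimal sketch of what that might look like inside foreachRDD, assuming a DStream named `stream` and using a count as a stand-in for the real per-batch work; the unpersist call at the end is my addition to release the cached blocks once the batch finishes, it is not in the original post.
```
import org.apache.spark.storage.StorageLevel

stream.foreachRDD { rdd =>
  rdd.persist(StorageLevel.MEMORY_AND_DISK)   // spill to disk rather than holding everything on-heap
  val count = rdd.count()                     // placeholder for the actual per-batch processing
  println(s"batch processed $count records")
  rdd.unpersist()                             // drop the cached blocks before the next batch arrives
}
```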
That exception no longer occurs, but many dead executors show up in the Spark Streaming UI:
```
User class threw exception: org.apache.spark.SparkException: Job aborted due
to stage failure: Task 21 in stage 1194.0 f
In this kind of question, you always want to tell us the Spark version.
Yong
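(For what it's worth, the running version can be read straight off the context if it is not obvious from the deployment; assuming an existing SparkContext named `sc`:)
```
println(sc.version)  // SparkContext exposes the exact Spark version string
```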
From: darin
Sent: Thursday, March 16, 2017 9:59 PM
To: user@spark.apache.org
Subject: spark streaming executors memory increasing and executor killed by yarn
Hi,
I got this exception w