Hi Keith, we are running into the same issue here with Spark standalone
1.2.1. I was wondering if you have found a solution or workaround.
Maybe I should put this another way. If Spark has two jobs, A and B, both
of which consume the entire allocated memory pool, is it expected that
Spark can launch B before the executor processes tied to A have completely
terminated?
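To put it concretely, both jobs are configured along these lines (the memory figure, core count, and host name are illustrative, not our real setup):

    from pyspark import SparkConf, SparkContext

    # Jobs A and B each claim the worker's entire pool; B uses an
    # identical config with a different app name.
    conf = (SparkConf()
            .setAppName("job-a")
            .setMaster("spark://spark-master:7077")
            .set("spark.executor.memory", "8g")   # == the worker's full pool
            .set("spark.cores.max", "8"))
    sc = SparkContext(conf=conf)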
On Thu, Oct 9, 2014 at 6:57 PM, Keith Simmons wrote:
Actually, it looks like even when the job shuts down cleanly, there can be
a few minutes of overlap between the time the next job launches and the
time the first job's processes actually terminate. Here are some relevant
lines from my log:
14/10/09 20:49:20 INFO Worker: Asked to kill executor app-20141009
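
For what it's worth, the stopgap we've been sketching is to poll the
standalone master's /json status page and hold back the next submission
until the finished app drops out of its "activeapps" list (treat the host
and app id below as placeholders). As the log above shows, though, this
only covers the master's bookkeeping; the executor JVMs can outlive it by
minutes:

    import json
    import time
    from urllib.request import urlopen

    MASTER_JSON = "http://spark-master:8080/json"  # placeholder master host

    def wait_for_app_gone(app_id, poll_secs=5):
        # Block until app_id no longer appears among the master's active apps.
        while True:
            with urlopen(MASTER_JSON) as resp:
                state = json.load(resp)
            if all(app["id"] != app_id for app in state.get("activeapps", [])):
                return
            time.sleep(poll_secs)

    wait_for_app_gone("app-XXXXXXXX-0001")  # placeholder app id
    # ...only now submit job B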