Hey,

I'm connecting Zeppelin to a remote Spark standalone cluster (2 worker
nodes), and I noticed that if I run a job from Zeppelin twice without
restarting the interpreter, the second run fails with an OOME. After the
Zeppelin job finishes successfully, I can see that all of the executor
memory is still allocated on the workers; restarting the interpreter frees
it. But if I don't restart, the job fails when I run it again.
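For reference, the Spark interpreter is pointed at the cluster with
properties along these lines (the values below are placeholders, not my
actual settings):

    # Zeppelin Spark interpreter properties (placeholder values)
    master                 spark://<master-host>:7077
    spark.executor.memory  4g
    spark.cores.max        4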

Any idea how to deal with this? Currently I always have to restart the
interpreter between Spark jobs.

Thanks,
Jakub
