Hi,

no, just running it manually. I think I need to unpersist cached RDDs and
destroy broadcast variables at the end of the run, am I correct? It hasn't
crashed since I started doing that, although subsequent runs are always a
little slower.
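To be concrete, this is roughly what I added at the end of the note in the
Spark (Scala) interpreter; cachedRdd and lookupBroadcast are just placeholder
names for my own variables:

    // cached RDD and broadcast variable created earlier in the note
    val cachedRdd = sc.parallelize(1 to 1000000).cache()
    val lookupBroadcast = sc.broadcast(Map("a" -> 1, "b" -> 2))

    // ... job logic using cachedRdd and lookupBroadcast ...

    // release executor memory explicitly once the run is done
    cachedRdd.unpersist()       // drop the cached blocks from the workers
    lookupBroadcast.destroy()   // remove the broadcast data and its metadata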

On Thu, Dec 3, 2015 at 8:08 AM, Felix Cheung <felixcheun...@hotmail.com>
wrote:

> How are you running jobs? Do you schedule a notebook to run from Zeppelin?
>
> ------------------------------
> Date: Mon, 30 Nov 2015 12:42:16 +0100
> Subject: Spark worker memory not freed up after zeppelin run finishes
> From: liska.ja...@gmail.com
> To: users@zeppelin.incubator.apache.org
>
> Hey,
>
> I'm connecting Zeppelin to a remote Spark standalone cluster (2 worker
> nodes) and I noticed that if I run a job from Zeppelin twice without
> restarting the interpreter, it fails with an OOME. After a Zeppelin job
> successfully finishes I can see all executor memory still allocated on the
> workers, and restarting the interpreter frees the memory... But if I don't
> do that, the job fails when I run it again.
>
> Any idea how to deal with this problem? Currently I always have to restart
> the interpreter between Spark jobs.
>
> Thanks Jakub
>
