How are you running jobs? Do you schedule a notebook to run from Zeppelin?
 
Date: Mon, 30 Nov 2015 12:42:16 +0100
Subject: Spark worker memory not freed up after zeppelin run finishes
From: liska.ja...@gmail.com
To: users@zeppelin.incubator.apache.org

Hey,
I'm connecting Zeppelin to a remote Spark standalone cluster (2 worker nodes),
and I noticed that if I run a job from Zeppelin twice without restarting the
Interpreter, it fails with an OOME. After the Zeppelin job finishes successfully
I can see all the executor memory still allocated on the workers, and restarting
the Interpreter frees the memory... but if I don't do that, it fails when running
the task again.
Any idea how to deal with this problem? Currently I always have to restart the
Interpreter between running Spark jobs.
Thanks, Jakub
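
A minimal sketch of one workaround sometimes used for this symptom, assuming the
memory that stays allocated comes from RDDs/DataFrames cached by the previous run
(the thread does not confirm this cause). `sc` below is the SparkContext that
Zeppelin's Spark interpreter keeps alive between paragraph runs; run it in a
Zeppelin Scala paragraph before re-running the job:

// Inspect what is still persisted from the previous run
sc.getPersistentRDDs.foreach { case (id, rdd) =>
  println(s"RDD $id (${rdd.name}) still persisted at ${rdd.getStorageLevel}")
}

// Unpersist everything, freeing executor storage memory without restarting
// the interpreter (blocking = true waits until the blocks are actually removed)
sc.getPersistentRDDs.values.foreach(_.unpersist(blocking = true))

If the executors themselves (not just cached blocks) need to be released between
runs, enabling Spark dynamic allocation in the interpreter settings is another
option, but that requires the external shuffle service on the workers.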
