Github user hero0926 commented on the issue:
https://github.com/apache/zeppelin/pull/2011
We're hitting the same problem as karuppayya: when we try to unpersist in
pyspark+zeppelin, the memory isn't released, so we call sc.stop as a second
resort and the zeppelin interpreter dies... I'm concerned about OOM error c
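    A minimal PySpark sketch of the pattern described here (the app name and
    data are illustrative; inside Zeppelin the SparkContext is normally created
    for you):

    ```python
    from pyspark import SparkContext

    sc = SparkContext(appName="unpersist-demo")  # illustrative; Zeppelin provides sc

    rdd = sc.parallelize(range(10**6)).cache()
    rdd.count()      # materializes the cached blocks

    rdd.unpersist()  # block removal is asynchronous, and heap the JVM has
                     # already claimed is generally not returned to the OS,
                     # so memory can look like it was never released

    sc.stop()        # shuts down the Spark app; inside Zeppelin this leaves
                     # the interpreter with a dead SparkContext
    ```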
Github user karuppayya commented on the issue:
https://github.com/apache/zeppelin/pull/2011
@zjffdu Thanks for your feedback.
The change is not specific to the Spark interpreter.
It is generic, so any other interpreter can also initiate a restart.
I was targeting freeing
Github user karuppayya commented on the issue:
https://github.com/apache/zeppelin/pull/2011
@felixcheung I am not able to repro this scenario now. Restart works
fine (I will update the description).
When Spark goes OOM, the subsequent paragraph runs throw a connection refused
exception. This
Github user zjffdu commented on the issue:
https://github.com/apache/zeppelin/pull/2011
I don't think Zeppelin should do extra things for the Spark interpreter. This
would cause confusion for users. An interpreter should handle the general case;
anything interpreter-specific should be handled by the user.
Github user karuppayya commented on the issue:
https://github.com/apache/zeppelin/pull/2011
@zjffdu Yes, the remote process will still be up, and it will consume as much
memory as is configured for the driver. In a multi-user environment, we might
want to release the resources as soon as the us
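    For context, the driver JVM keeps whatever heap `spark.driver.memory`
    grants it for as long as the process lives. A rough illustration (the
    value is hypothetical, and in client mode, as with Zeppelin, it only
    takes effect if set before the driver JVM starts, e.g. in
    spark-defaults.conf or the interpreter settings):

    ```python
    from pyspark import SparkConf, SparkContext

    # Hypothetical value; setting it in code after the driver JVM has
    # launched does not resize the already-running JVM in client mode.
    conf = SparkConf().set("spark.driver.memory", "4g")
    sc = SparkContext(conf=conf)

    print(sc.getConf().get("spark.driver.memory"))  # -> '4g'
    ```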
Github user zjffdu commented on the issue:
https://github.com/apache/zeppelin/pull/2011
For case 1, why not create a new SparkContext? `sc.stop` only causes the
Spark app to shut down; the remote interpreter process should still be alive.
Overall, I don't think restarting `Spark
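    A rough sketch of that alternative, assuming a plain PySpark session (in
    Zeppelin the interpreter manages `sc`, so this is illustrative only):

    ```python
    from pyspark import SparkConf, SparkContext

    sc = SparkContext(conf=SparkConf().setAppName("old-app"))
    # ... work that ends badly, e.g. the driver gets into a bad state ...

    sc.stop()  # ends the Spark application only; the hosting process lives on

    # Start a fresh Spark app in the same process instead of restarting the
    # whole interpreter (only one SparkContext may be active at a time, so
    # the old one must be stopped first):
    sc = SparkContext(conf=SparkConf().setAppName("fresh-app"))
    ```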