[GitHub] zeppelin issue #2011: ZEPPELIN-2102: Restart interpreter automatically

2017-12-01 Thread hero0926
Github user hero0926 commented on the issue: https://github.com/apache/zeppelin/pull/2011 We're hitting the same problem as karuppayya: when we try to unpersist in pyspark+zeppelin, the memory isn't released - so we fall back to sc.stop, and the Zeppelin interpreter dies... I'm concerned about OOM error…
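A minimal pyspark sketch of the pattern being described, assuming a Zeppelin paragraph where the interpreter injects `sc` (and, on Spark 2.x, `spark`); the DataFrame and its size are illustrative:

    # Cache a DataFrame, then try to release it again.
    df = spark.range(0, 10 * 1000 * 1000).cache()
    df.count()                   # materializes the cached blocks on the executors

    df.unpersist(blocking=True)  # synchronously drops the cached blocks, but the
                                 # JVM heap the cache grew into is not necessarily
                                 # returned to the OS, so memory can appear "not
                                 # released"

    # sc.stop()                  # the fallback described above: frees the Spark
                                 # app's resources, but leaves the Zeppelin
                                 # interpreter without a usable SparkContext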

[GitHub] zeppelin issue #2011: ZEPPELIN-2102: Restart interpreter automatically

2017-02-13 Thread karuppayya
Github user karuppayya commented on the issue: https://github.com/apache/zeppelin/pull/2011 @zjffdu Thanks for your feedback. The change is not specific to the spark interpreter; it is generic, so any other interpreter can also initiate a restart. I was targeting freeing…

[GitHub] zeppelin issue #2011: ZEPPELIN-2102: Restart interpreter automatically

2017-02-13 Thread karuppayya
Github user karuppayya commented on the issue: https://github.com/apache/zeppelin/pull/2011 @felixcheung I am not able to repro this scenario now. Restart works fine (will update the description). When Spark goes OOM, subsequent paragraph runs throw a connection-refused exception. This…

[GitHub] zeppelin issue #2011: ZEPPELIN-2102: Restart interpreter automatically

2017-02-13 Thread zjffdu
Github user zjffdu commented on the issue: https://github.com/apache/zeppelin/pull/2011 I don't think Zeppelin should do extra things for the spark interpreter. This would cause confusion for users. The interpreter should do general things for users; anything interpreter-specific should be handled by the user.

[GitHub] zeppelin issue #2011: ZEPPELIN-2102: Restart interpreter automatically

2017-02-13 Thread karuppayya
Github user karuppayya commented on the issue: https://github.com/apache/zeppelin/pull/2011 @zjffdu Yes, the remote process will still be up, and it will consume as much memory as is configured for the driver. In a multi-user environment, we might want to release the resources as soon as the user…

[GitHub] zeppelin issue #2011: ZEPPELIN-2102: Restart interpreter automatically

2017-02-13 Thread zjffdu
Github user zjffdu commented on the issue: https://github.com/apache/zeppelin/pull/2011 For case 1, why not create a new SparkContext? `sc.stop` only causes the Spark app to shut down, but the remote interpreter process should still be alive. Overall, I don't think restarting `Spark…
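A minimal sketch of this suggestion, assuming plain pyspark semantics: `sc.stop` ends the Spark application while the interpreter process survives, so a fresh context could in principle be built in the same process. Rebinding Zeppelin's own injected `sc` this way is an assumption for illustration, not something the interpreter supports out of the box:

    from pyspark import SparkConf, SparkContext

    sc.stop()                                   # shuts down the Spark app only;
                                                # the remote interpreter JVM stays up

    conf = SparkConf().setAppName("recreated")  # illustrative configuration
    sc = SparkContext(conf=conf)                # new context in the same process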