dungnguyen created ZEPPELIN-3205:
------------------------------------

             Summary: restarting an interpreter setting in a notebook aborts 
running jobs of other notebooks
                 Key: ZEPPELIN-3205
                 URL: https://issues.apache.org/jira/browse/ZEPPELIN-3205
             Project: Zeppelin
          Issue Type: Bug
            Reporter: dungnguyen


I'm aware that there is a resolved issue:

https://issues.apache.org/jira/browse/ZEPPELIN-1770 

But it's pretty simple to reproduce: configure the spark or python 
interpreter in per-note isolated mode, and start a long-running job in two 
notebooks, #1 and #2. If I restart the spark or python interpreter (depending 
on the type of running job) from notebook #1, the job in notebook #2 is 
aborted. It is worse for pyspark: not only is the job aborted, the pyspark 
python process of notebook #2 is also killed, and notebook #2 hangs 
afterward; the only way to recover is to restart notebook #2.
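To make the repro concrete, here is a minimal long-running paragraph sketch (the function name and parameters are hypothetical, not from Zeppelin itself). Run it in both notebook #1 and notebook #2 with the interpreter in per-note isolated mode, then restart the interpreter from notebook #1's settings; the expectation is that notebook #2's paragraph keeps running.

```python
import time

# Hypothetical long-running paragraph for the repro.
# Paste into a %python (or %pyspark) paragraph in both notebooks.
def long_running_job(iterations=600, delay=1.0):
    done = 0
    for _ in range(iterations):
        time.sleep(delay)  # simulate work; should survive a restart in another note
        done += 1
    return done

# Defaults run ~10 minutes, long enough to restart the other notebook's
# interpreter mid-run and observe whether this paragraph is aborted.
print(long_running_job())
```

With per-note isolated mode, each note is supposed to get its own interpreter process, so restarting from notebook #1 should not touch this paragraph in notebook #2.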

I also found a related issue for the python interpreter:

https://issues.apache.org/jira/browse/ZEPPELIN-3171



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
