Please allow me to offer an opinion on that subject:
To me, there are two options: you either run the Spark interpreter
in isolated mode, or you have dedicated Spark interpreter groups per
organizational unit, so you can manage dependencies independently.
Obviously, there's no way around restarting the interpreter.
I think whether this is an issue or not depends a lot on how you use
Zeppelin, and what tools you need to integrate with. Sadly, Excel is still
around as a data-processing tool, and many people I introduce to
Zeppelin are quite proficient with it, hence the desire to export to CSV in
a trivial way.
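To illustrate what "trivial" means here, a minimal export helper could look like the sketch below. This is my own illustration, not Zeppelin's actual export code; the `CsvExport` object, its method names, and the RFC 4180-style quoting are all assumptions for the example.

```scala
object CsvExport {
  // Quote a field if it contains a comma, quote, or newline (RFC 4180 style)
  def escape(field: String): String =
    if (field.exists(c => c == ',' || c == '"' || c == '\n'))
      "\"" + field.replace("\"", "\"\"") + "\""
    else field

  // Render a header plus rows (e.g. collected from a query result)
  // as a single CSV string, one line per row
  def toCsv(header: Seq[String], rows: Seq[Seq[String]]): String =
    (header +: rows).map(_.map(escape).mkString(",")).mkString("\n")
}
```

Something this small is all Excel-oriented users typically need: paste or download the string as a `.csv` and open it directly.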
Which version of Zeppelin are you running?
There was a bug, where killing Zeppelin wouldn't actually kill the
interpreters, because the kill signal wasn't passed through, but that was
fixed in 0.7.1: https://issues.apache.org/jira/browse/ZEPPELIN-2258
On Fri, Apr 7, 2017 at 5:31 PM, Ruslan Dautkha
This actually calls for a dependency definition of notes within a notebook,
so the scheduler can decide which tasks to run simultaneously.
I suggest a simple counter of dependency levels, which by default increases
with every new note and can be decremented to allow notes to run
simultaneously.
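The level-counter idea above could be sketched like this. This is only an illustration of the proposal, not anything Zeppelin implements; the `Note` case class and `schedule` function are hypothetical names.

```scala
object LevelScheduler {
  // A note tagged with its dependency level; by default each new note
  // gets a level one higher than the previous, and the user can
  // decrement it to allow notes to share a level
  case class Note(id: String, level: Int)

  // Group notes by level: levels run one after another in ascending
  // order, while all notes inside one level may run simultaneously
  def schedule(notes: Seq[Note]): Seq[Seq[String]] =
    notes.groupBy(_.level).toSeq.sortBy(_._1).map { case (_, ns) => ns.map(_.id) }
}
```

So two notes given the same level end up in the same batch, which is exactly the "decrement to run simultaneously" behaviour described.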
Having gone through configuring Spark 1.6 for Zeppelin 0.6.2 without being
able to use the installer, and using "provided" Spark and Hadoop, I do
understand the appeal of a test functionality for an interpreter.
The challenge of scoping the test functionality is evident, but I think not
insurmountable.