Hi Florian,

Thanks for the good question.

Currently, restarting the interpreter and re-running the notebook is the only
way.

In fact, the z.load() function is available not only in %dep, but also in
%spark.

So you can call z.load() [1] in a %spark paragraph:

%spark
z.load("...")

This is supposed to work at runtime, i.e. without an interpreter restart.
However, it does NOT work at the moment: the current implementation [2] does
not correctly handle loading a library into the Scala compiler's classpath at
runtime.
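
To give an idea of what is involved (this is only a generic JVM sketch, not
Zeppelin's code, and the helper name is made up), adding a jar to a running
URLClassLoader is the easy part:

import java.io.File
import java.net.{URL, URLClassLoader}

// Illustrative only: push a jar into the system classloader at runtime.
def addJarToClasspath(jarPath: String): Unit = {
  val url: URL = new File(jarPath).toURI.toURL
  val loader = ClassLoader.getSystemClassLoader.asInstanceOf[URLClassLoader]
  // URLClassLoader.addURL is protected, so invoke it via reflection.
  val addUrl =
    classOf[URLClassLoader].getDeclaredMethod("addURL", classOf[URL])
  addUrl.setAccessible(true)
  addUrl.invoke(loader, url)
}

The missing piece is making the Spark interpreter's embedded Scala compiler
(the REPL that compiles your paragraphs) see the newly loaded classes as
well; the sketch above does not do that, and that is what [2] would need to
handle.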

If someone can help fix this problem, that would enable loading libraries
without an interpreter restart.

[1]
https://github.com/apache/incubator-zeppelin/blob/master/spark/src/main/java/org/apache/zeppelin/spark/ZeppelinContext.java#L102
[2]
https://github.com/apache/incubator-zeppelin/blob/master/spark/src/main/java/org/apache/zeppelin/spark/dep/DependencyResolver.java#L262


Thanks,
moon

On Tue, Dec 8, 2015 at 9:15 PM Florian Leitner <
florian.leit...@seleritycorp.com> wrote:

> Hi all,
>
> I am still kind of fresh to Zeppelin, so maybe there is an easier way around
> this issue.
>
> I am developing a (Maven) artifact with code that is then run via Spark. So,
> to be able to use that code in Zeppelin, I load it in the first cell of
> the notebook with the dependency "interpreter":
>
> %dep
> z.reset()
> z.load("group:artifact:version-SNAPSHOT")
>
> Then, I can work with that code nicely in my Notebook. Now, each time I
> change the code and generate an updated snapshot of my artifact, the
> problem is that I have to fully restart the Zeppelin interpreter and then
> re-run the notebook.
>
> Is there some simpler way of "refreshing" dependencies (i.e., reloading
> jars) without having to reset the interpreter and restart the whole
> notebook?
>
> Regards,
> Florian
>
