If you only use these libraries on the driver side, then this is the right
approach. If you use these libraries in a Spark UDF (which means they are
used on the executors as well), then you need to install them on the executors too.
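One common way to get the same environment onto every executor is to package the driver-side conda env and ship it with the job. A minimal sketch, assuming a YARN cluster, `conda-pack` installed, and an env named `my_env` (the env and script names here are illustrative):

```shell
# Package the conda environment into a relocatable archive (assumes conda-pack is installed).
conda pack -n my_env -o my_env.tar.gz

# Ship the archive to every executor and point PySpark at the Python inside it.
# YARN unpacks my_env.tar.gz under the alias "environment" in each container.
spark-submit \
  --archives my_env.tar.gz#environment \
  --conf spark.yarn.appMasterEnv.PYSPARK_PYTHON=./environment/bin/python \
  --conf spark.executorEnv.PYSPARK_PYTHON=./environment/bin/python \
  my_job.py
```

With this setup, a library imported inside a UDF resolves against the shipped environment on each executor, not just the driver's install.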
Manuel Sopena Ballesteros wrote on Wednesday, August 14, 2019 at 11:07 AM:
Dear Zeppelin user community,
I am trying to set up Python and R to submit jobs through a Spark cluster. This is
already done, but now I need to enable users to install their own libraries.
I was thinking of asking the users to set up conda in their home directories and
modify the `zeppelin.pyspark.p