Create 2 Spark interpreter groups, and do it on the interpreter settings page
instead of editing the JSON file manually.
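For example (a minimal sketch; the setting names spark_py2 and spark_py3 are
placeholders I made up, not anything Zeppelin defines), each of the two
interpreter settings would carry its own copy of the relevant property:

    # interpreter setting "spark_py2"
    zeppelin.pyspark.python = python2

    # interpreter setting "spark_py3"
    zeppelin.pyspark.python = python3

Notebook paragraphs can then pick an environment with %spark_py2.pyspark or
%spark_py3.pyspark.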
On Tue, Aug 13, 2019 at 1:13 PM, Manuel Sopena Ballesteros wrote:
Hi,
Do I need to create 2 Spark interpreter groups, or can I just create a new
py3spark interpreter inside the existing spark interpreter group, like the
example below?
…
{
  "group": "spark",
  "name": "pyspark",
  "className": "org.apache.zeppelin.spark.PySparkInterpreter",
  "properties": {
    ...
  }
}
Hello Manuel,
In the history server, at the bottom of the screen you can click "Show
incomplete applications". You can then see your running Zeppelin Spark
context and access the logs of the different jobs you run. Does that fit your
needs?
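(Assuming a default setup: the history server UI normally listens on port
18080, and applications only show up there if spark.eventLog.enabled is true
and spark.eventLog.dir points at a directory the history server reads from.)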
What do you mean you can't see individual jobs submitted through Zeppelin
notebooks?
On Mon, Aug 12, 2019 at 3:44 PM, Manuel Sopena Ballesteros wrote:
2 approaches:
1. Create 2 Spark interpreters, one with python2 and another with python3.
2. Use the generic configuration interpreter (a sketch follows the link below).
https://medium.com/@zjffdu/zeppelin-0-8-0-new-features-ea53e8810235
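For approach 2, a minimal sketch of how the generic configuration interpreter
could be used, assuming the %spark.conf syntax described in the linked post
(the conf paragraph has to run before the Spark interpreter starts in that
session, otherwise the property has no effect):

    %spark.conf
    zeppelin.pyspark.python python3

    %spark.pyspark
    import sys
    print(sys.version)  # should report a python3 version

This lets users flip between python2 and python3 per note, without touching
the shared interpreter setting.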
On Mon, Aug 12, 2019 at 3:41 PM, Manuel Sopena Ballesteros wrote:
Dear Zeppelin community,
I have a Zeppelin installation connected to Spark. I realized that Zeppelin
runs a Spark job when it starts, but I can't see the individual jobs submitted
through Zeppelin notebooks.
Is this the expected behavior by design? Is there a way I can see them in the
Spark history server?
Dear Zeppelin community,
I have a Zeppelin installation and a Spark cluster. I need to provide a way
for users to run either python2 or python3 code using pyspark. At the moment
the only way of doing this is by editing the Spark interpreter and changing
`zeppelin.pyspark.python` from python to python3.