Hi there,
I want to achieve the following use case: start Zeppelin 0.9.0 (in Docker) on my
local dev machine, but let the Spark jobs in the notebook run on a remote
cluster via YARN.
For a few hours already, I have been trying to set up that environment with my
company's Cloudera CDH 6.3.1 development cluster.
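A minimal zeppelin-env.sh sketch for pointing Zeppelin at such a cluster (the
CDH parcel paths below are assumptions, not confirmed by this thread; the
interpreter's "master" property would additionally be set to yarn-cluster in
the Zeppelin UI):

# Use the cluster's Spark build and Hadoop/YARN client configuration.
export SPARK_HOME=/opt/cloudera/parcels/CDH/lib/spark
export HADOOP_CONF_DIR=/etc/hadoop/conf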
Most likely it is due to a network issue: the connection between the Spark
driver (running in a YARN container in yarn-cluster mode) and the Zeppelin
server is bidirectional. It looks like your Spark driver is unable to connect
back to the Zeppelin server.
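One way to make that reverse connection possible when Zeppelin itself runs in
Docker is host networking, so the ports the interpreter process opens are
reachable from the cluster nodes. A rough sketch, assuming the apache/zeppelin
image; the mount paths and container name are made up for illustration:

# Run Zeppelin 0.9.0 with host networking so the YARN-side driver can
# connect back to the interpreter process on this machine.
docker run -d --name zeppelin \
  --network host \
  -e SPARK_HOME=/opt/spark \
  -e HADOOP_CONF_DIR=/opt/hadoop-conf \
  -v /path/to/spark:/opt/spark \
  -v /path/to/hadoop-conf:/opt/hadoop-conf \
  apache/zeppelin:0.9.0

With the default bridge network the driver would have to reach back through
NAT, which is exactly the bidirectional connection that fails here.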
Theo Diefenthal wrote on Fri, Apr 2, 2021 at 7:48 AM:
> Hi there
I don't think anyone runs Zeppelin at that scale (1k users) for now. It would
be interesting to know more about your usage scenario.
Carlos Diogo wrote on Thu, Apr 1, 2021 at 1:56 AM:
> Hi
> My two cents: the only way I know to scale this would be with a
> container-based deployment like OpenShift. You would have is
Could you share how you include pyspark's py4j in the python interpreter?
Rui Lu wrote on Thu, Mar 25, 2021 at 10:49 PM:
> Hi all,
>
> I'm trying to switch from the pyspark interpreter to the python interpreter
> and ran into weird py4j errors like "KeyError: 'x'" or "invalid command"
> when creating a Spark session.
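Those py4j errors ("invalid command", KeyError) usually point to a py4j
protocol mismatch between the Python side and the Spark JVM. A quick check,
assuming SPARK_HOME is set; the commands below are illustrative, not from
the thread:

# Compare the py4j the interpreter imports with the one Spark ships.
python -c "import py4j, py4j.version; print(py4j.version.__version__, py4j.__file__)"
ls "$SPARK_HOME"/python/lib/py4j-*-src.zip

If the two differ, putting Spark's bundled zip first on PYTHONPATH (as shown
later in the thread) is the usual fix.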
Sorry for the late response, I have sent you the invitation.
Danny Cranmer wrote on Tue, Mar 23, 2021 at 6:45 PM:
> Hello,
>
> Can you please invite me (dannycran...@apache.org) to the Zeppelin slack
> channel?
>
> Thanks!
>
--
Best Regards
Jeff Zhang
Hi Jeff,
I added one line into zeppelin-env.sh:

export PYTHONPATH=/usr/lib/spark/python:/usr/lib/spark/python/lib/py4j-src.zip:${PYTHONPATH}
# note: Spark's py4j is newer than Zeppelin's
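To verify the interpreter process actually resolves py4j from that zip, one
quick check (a sketch, using the same paths as above):

# Should print a path inside /usr/lib/spark/python/lib if the export works.
PYTHONPATH=/usr/lib/spark/python:/usr/lib/spark/python/lib/py4j-src.zip \
  python -c "import py4j; print(py4j.__file__)"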
This modification is picked up; however, as far as I can tell, some scripts
for python interpreter initialisation