>>> It appears that at execution time on the YARN hosts, the native CDH
>>> Spark 1.5 jars are loaded before the new Spark 2 jars. I've tried using
>>> spark.yarn.archive to point at the Spark 2 jars in HDFS, as well as
>>> other Spark options, but none of them seems to make a difference.
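For reference, one common way to direct YARN containers at a specific Spark 2 jar set is `spark.yarn.archive` in `spark-defaults.conf`. This is only a sketch; the archive path below is a placeholder, not the poster's actual value:

```
# spark-defaults.conf (Spark 2 client config) -- example path only
spark.yarn.archive    hdfs:///user/spark/spark2-jars.zip

# or equivalently at submit time:
# spark-submit --conf spark.yarn.archive=hdfs:///user/spark/spark2-jars.zip ...
```

Note that the archive must contain the Spark 2 jars at its top level, and the Zeppelin Spark interpreter must be configured to use the Spark 2 client, or the CDH default jars can still win on the classpath.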
Hi folks,
I'm running Zeppelin 0.7.0 in my Docker image. I'm not able to run any
code and Zeppelin doesn't show any error messages. Please see attached
screenshot.
When running Zeppelin directly through zeppelin.sh instead of in a Docker
container, everything seems to work fine. Does anyone have an idea what
could be causing this?
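For comparison, a minimal way to run Zeppelin in Docker and check its logs (assuming the `apache/zeppelin` image from Docker Hub; the tag and port mapping here are illustrative, not the poster's setup):

```
# start Zeppelin 0.7.x and publish the web UI port
docker run -d -p 8080:8080 --name zeppelin apache/zeppelin:0.7.0

# inspect the container logs for interpreter startup errors
docker logs zeppelin
```

If paragraphs hang with no error in the UI, the interpreter process logs inside the container (under the Zeppelin logs directory) are usually the first place to look.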
Thanks. I can reach out to Cloudera, although the same commands seem to
work via spark-shell (see below). So the issue seems unique to Zeppelin.
Spark context available as 'sc' (master = yarn, app id =
application_1472496315722_481416).
Spark session available as 'spark'.
Welcome to
Hello Apache Zeppelin team,
For our open-source project we built a web component to visualize time
series data. As I'd like to build a demo on Zeppelin, I developed a
Zeppelin interpreter to communicate with it.
Right now, I have to rebuild the web app to integrate this component (add a
line in