Hey Ignacio,
this error message may be a bit cryptic indeed, but what it means is that
this paragraph should be run first, right after the interpreter starts.
The easiest way to do that: go to the Interpreters menu at the top and manually
restart the particular interpreter you are using in that notebook.
Thanks moon!
I haven't figured out how to initialize the Spark interpreter after loading
the dependencies. What I get from the documentation is that I just need to do
this:
%dep
z.reset()
z.addRepo("cloudera").url("https://repository.cloudera.com/artifactory/cloudera-repos/")
z.load("org.apache.hbase:h
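For reference, a complete %dep paragraph of this shape might look like the sketch below. The exact artifact coordinates (hbase-client, hbase-common, and the CDH version suffix) are my assumption, not from this thread; match them to the version shipped with your CDH parcels. The %dep paragraph has to run on its own before the Spark interpreter initializes, which is why the restart comes first.

```scala
%dep
z.reset()

// Register the Cloudera repository so the HBase artifacts can resolve.
z.addRepo("cloudera").url("https://repository.cloudera.com/artifactory/cloudera-repos/")

// Hypothetical coordinates -- pick the version matching your CDH distribution.
z.load("org.apache.hbase:hbase-client:1.0.0-cdh5.4.0")
z.load("org.apache.hbase:hbase-common:1.0.0-cdh5.4.0")
```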
Hi,
For dependency library loading, please check
http://zeppelin.incubator.apache.org/docs/interpreter/spark.html#dependencyloading
And there's a nice HBase shell interpreter implementation:
https://github.com/apache/incubator-zeppelin/pull/55
Thanks,
moon
On Tue, Jun 9, 2015 at 6:30 AM Ignacio
hello all,
is there a way to configure the Zeppelin Spark shell to get access to HBase
data?
Adding this line to spark-defaults.conf:
spark.executor.extraClassPath
/opt/cloudera/parcels/CDH/lib/hive/lib/hive-hbase-handler.jar:/opt/cloudera/parcels/CDH/lib/hbase/hbase-server.jar:/opt/cloudera/
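Once the HBase jars are on the classpath (whether via %dep or extraClassPath), reading HBase data from the Spark shell typically goes through TableInputFormat and newAPIHadoopRDD. A minimal sketch, assuming a reachable HBase cluster; the ZooKeeper quorum ("localhost") and table name ("test") are placeholders:

```scala
import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.Result
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.mapreduce.TableInputFormat

// Point the HBase client at the cluster; "localhost" is a placeholder quorum.
val conf = HBaseConfiguration.create()
conf.set("hbase.zookeeper.quorum", "localhost")
conf.set(TableInputFormat.INPUT_TABLE, "test") // hypothetical table name

// Scan the table as an RDD of (row key, Result) pairs.
// sc is the SparkContext that Zeppelin's Spark interpreter provides.
val rdd = sc.newAPIHadoopRDD(conf, classOf[TableInputFormat],
  classOf[ImmutableBytesWritable], classOf[Result])

println(rdd.count())
```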