Hey Ignacio, this error message may be a bit cryptic indeed, but what it means is that this paragraph has to be run first, right after the interpreter starts, i.e. before any %spark paragraph has initialized the Spark interpreter.
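To make the ordering concrete, here is a minimal sketch of the two paragraphs (the artifact version 1.0.0-cdh5.4.0, the table name and the row key are placeholders of mine; swap in whatever matches your CDH parcels):

First paragraph, run before anything else in the notebook:

%dep
z.reset()
z.addRepo("cloudera").url("https://repository.cloudera.com/artifactory/cloudera-repos/")
z.load("org.apache.hbase:hbase-client:1.0.0-cdh5.4.0")
z.load("org.apache.hbase:hbase-common:1.0.0-cdh5.4.0")

Second paragraph, only after the first one has finished:

%spark
import org.apache.hadoop.hbase.{HBaseConfiguration, TableName}
import org.apache.hadoop.hbase.client.{ConnectionFactory, Get}
import org.apache.hadoop.hbase.util.Bytes

// HBaseConfiguration.create() picks up hbase-site.xml if it is on the classpath
val hbaseConf = HBaseConfiguration.create()
val connection = ConnectionFactory.createConnection(hbaseConf)
val table = connection.getTable(TableName.valueOf("test_table")) // placeholder table name
val result = table.get(new Get(Bytes.toBytes("row1")))           // placeholder row key
println(Bytes.toString(result.value()))
table.close()
connection.close()

If the %dep paragraph complains with the same error again, the interpreter has already been initialized by something else in the meantime, so restart it once more and rerun.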
Easiest way to do that: go to the Interpreter menu at the top and manually restart the particular interpreter that you are using in that notebook. Then go back to the notebook and run that paragraph first. Or just restart Zeppelin and run it first; either should work. Please let me know if that helps!

On Fri, Jun 12, 2015 at 2:38 AM, Ignacio Alvarez <ignacioalv...@gmail.com> wrote:

> Thanks moon!
>
> I haven't figured out how to initialize the Spark interpreter after the
> dependencies. What I get from the documentation is that I just need to do
> this:
>
> %dep
> z.reset()
> z.addRepo("cloudera").url("https://repository.cloudera.com/artifactory/cloudera-repos/")
> z.load("org.apache.hbase:hbase:1.0.0-cdh5.4.")
> z.load("org.apache.hbase:hbase-client:1.0.0-cdh5.4.")
> z.load("org.apache.hbase:hbase-common:1.0.0-cdh5.4.")
> z.load("org.apache.hbase:hbase-server:1.0.0-cdh5.4.")
>
> in the first paragraph of the notebook.
> But I get the response: Must be used before SparkInterpreter (%spark)
> initialized
>
> Can you help me out with further instructions?
>
>
> On Tue, Jun 9, 2015 at 8:11 AM, moon soo Lee <m...@apache.org> wrote:
>
>> Hi,
>>
>> For dependency library loading, please check
>> http://zeppelin.incubator.apache.org/docs/interpreter/spark.html#dependencyloading
>>
>> And there's a nice HBase shell interpreter implementation:
>> https://github.com/apache/incubator-zeppelin/pull/55
>>
>> Thanks,
>> moon
>>
>>
>> On Tue, Jun 9, 2015 at 6:30 AM Ignacio Alvarez <ignacioalv...@gmail.com>
>> wrote:
>>
>>> Hello all,
>>>
>>> Is there a way to configure the Zeppelin Spark shell to get access to
>>> HBase data?
>>>
>>> Adding this line to spark-defaults.conf:
>>>
>>> spark.executor.extraClassPath
>>> /opt/cloudera/parcels/CDH/lib/hive/lib/hive-hbase-handler.jar:/opt/cloudera/parcels/CDH/lib/hbase/hbase-server.jar:/opt/cloudera/parcels/CDH/lib/hbase/hbase-protocol.jar:/opt/cloudera/parcels/CDH/lib/hbase/hbase-hadoop2-compat.jar:/opt/cloudera/parcels/CDH/lib/hbase/hbase-client.jar:/opt/cloudera/parcels/CDH/lib/hbase/hbase-common.jar:/opt/cloudera/parcels/CDH/lib/hbase/lib/htrace-core.jar
>>>
>>> and then adding the following driver classpath while submitting the Spark
>>> job works in the spark shell:
>>>
>>> --driver-class-path
>>> /opt/cloudera/parcels/CDH/lib/hbase/hbase-server.jar:/opt/cloudera/parcels/CDH/lib/hbase/hbase-protocol.jar:/opt/cloudera/parcels/CDH/lib/hbase/hbase-hadoop2-compat.jar:/opt/cloudera/parcels/CDH/lib/hbase/hbase-client.jar:/opt/cloudera/parcels/CDH/lib/hbase/hbase-common.jar:/opt/cloudera/parcels/CDH/lib/hbase/lib/htrace-core.jar
>>>
>>> Is there a way to add this configuration to the Zeppelin notebook?
>>>
>>>
>>> Thanks,
>>>
>>> Ignacio
>>
>
>
> --
> Ignacio Alvarez, PhD
>
> Research Scientist, Intel Corporation, Hillsboro, OR

--
Kind regards,
Alexander.