spark.jars and spark.jars.packages are the standard way to add third
party libraries, and they work for all the natively supported modes
(standalone/yarn/mesos, etc.). The approach you used only works for the old
spark interpreter, and is not a standard way to add jars for the spark
engine (e.g. it won
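Roughly, it is just a matter of setting these as Spark interpreter
properties on the interpreter settings page (or in spark-defaults.conf).
The jar path and Maven coordinates below are placeholders, not the exact
artifact from this thread:

    # local jar(s) to put on the driver and executor classpaths
    spark.jars            /path/to/geomesa-accumulo-spark-runtime.jar
    # or resolve the dependency from a Maven repository instead
    spark.jars.packages   org.locationtech.geomesa:geomesa-accumulo-spark-runtime_2.11:2.3.0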
I add the jar by editing the Spark interpreter on the interpreters page and
adding the path to the jar at the bottom. I am not familiar with the
spark.jars method. Is there a guide for that somewhere? Could that cause
the difference between spark.useNew being set to true versus false?
On Thu, May
>>> adding a Geomesa-Accumulo-Spark jar to the Spark interpreter.
How do you add the jar to the spark interpreter? It is encouraged to add jars
via spark.jars
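If your Zeppelin version supports the generic configuration interpreter
(0.8+, I believe), you can also set it from a paragraph that runs before the
Spark interpreter starts. A sketch, with a placeholder path:

    %spark.conf
    spark.jars /path/to/geomesa-accumulo-spark-runtime.jar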
Krentz wrote on Fri, May 24, 2019 at 4:53 AM:
> Hello - I am looking for insight into an issue I have been having with our
> Zeppelin cluster for a while. We
We use Geomesa on Accumulo with Spark and Zeppelin on a Kerberized cluster
(HDP 3). We've had a number of issues, but that one doesn't look familiar.
From memory, we had to:
Build geomesa-spark with the Accumulo version matching our cluster, with
libthrift matching Accumulo, and one other version change
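The rebuild was along these lines; the property names and versions here are
from memory and may not match the current geomesa pom exactly, so check it
for the real ones:

    # rebuild the GeoMesa Accumulo/Spark artifacts against the cluster's
    # Accumulo and Thrift versions (versions below are illustrative)
    mvn clean install -DskipTests \
        -Daccumulo.version=1.9.3 \
        -Dthrift.version=0.9.3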
Hello - I am looking for insight into an issue I have been having with our
Zeppelin cluster for a while. We are adding a Geomesa-Accumulo-Spark jar to
the Spark interpreter. The notebook paragraphs run fine until we try to
access the data, at which point we get an "Unread Block Data" error from
the