Hi all,
I am getting the following error when loading the
org.apache.phoenix:phoenix-spark:4.4.0-HBase-1.1 dependency from the Spark
interpreter. I am using Zeppelin version 0.6.2-SNAPSHOT with Spark 1.6.1 and
HDP 2.7.1.
The package that I am importing is:
import org.ap
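(For reference: a dependency like this is typically loaded either on the
interpreter setting page or via a %dep paragraph that runs before the Spark
context starts. A minimal sketch using the coordinates from the question:)
%dep
z.load("org.apache.phoenix:phoenix-spark:4.4.0-HBase-1.1")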
Jeff - Thanks!! I figured out the issue; I didn't need to copy the
hive-site.xml to SPARK_HOME/conf.
All I needed to do was set the SPARK_HOME environment variable in
"zeppelin-env.sh". That made local mode work as well.
export SPARK_HOME=/opt/cloudera/parcels/CDH-5.5.1-1.cdh5.5.1.p0.11/lib
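(Note: zeppelin-env.sh changes only take effect after a restart, e.g.:)
bin/zeppelin-daemon.sh restart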
AFAIK, Kerberos should not be related here. ZEPPELIN-1175 just removes
ZEPPELIN_HOME/conf from the classpath of the interpreter process. I guess you
put hive-site.xml under ZEPPELIN_HOME/conf; can you try putting it
under SPARK_HOME/conf?
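(A minimal sketch of that step; the hive conf source path is an assumption
and varies by distribution:)
cp /etc/hive/conf/hive-site.xml $SPARK_HOME/conf/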
On Thu, Sep 1, 2016 at 10:01 AM, Pradeep Reddy
wrote:
> I just tr
I just tried enabling Kerberos on 0.6.1 and it's now able to talk to my hive
metastore; I see all my databases and tables. However, the moment I take out
Kerberos and run Zeppelin 0.6.1 in local mode by resetting the "master" &
removing the Spark variables, I see just one default database.
I can live wit
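(A quick way to check which metastore the Spark interpreter actually sees is
to list the databases from a notebook paragraph, e.g.:)
sqlContext.sql("show databases").show()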
No luck, even after copying the hive-site.xml into interpreter/spark/dep. I
also tried downloading the 0.6.1 src vs 0.5.6; for the same steps I followed,
0.5.6 is able to talk to my hive metastore, whereas the other recent
builds are not: they just show one "default" database.
Thanks,
Pradeep
On W
I think it's related to https://issues.apache.org/jira/browse/ZEPPELIN-1175,
which removes some classpath entries when Zeppelin launches the interpreter.
Could you please check whether your hive-site.xml is included in your
interpreter process? It looks like a configuration issue because you can see
the default database.
Using this URL made it work:
jdbc:hive2://myhost.example.com:21050/;auth=noSasl
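(For completeness, the matching JDBC interpreter properties would look
roughly like this; the driver shown is the Hive JDBC driver speaking to
Impala's HiveServer2-compatible port, and the host/port are illustrative:)
default.driver  org.apache.hive.jdbc.HiveDriver
default.url     jdbc:hive2://myhost.example.com:21050/;auth=noSasl
default.user    your_user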
On Wed, Aug 31, 2016 at 11:13 AM, Abhi Basu <9000r...@gmail.com> wrote:
> Except spark-sql is geared more towards developers and our users are
> looking for a SQL engine like hive (except faster). :)
>
> On Wed,
Except spark-sql is geared more towards developers and our users are
looking for a SQL engine like hive (except faster). :)
On Wed, Aug 31, 2016 at 11:11 AM, Pradeep Reddy wrote:
> You could use the JDBC interpreter to set up a new interpreter for Impala.
> That said, Impala is just using your
You could use the JDBC interpreter to set up a new interpreter for Impala.
That said, Impala is just using your hive metastore and enabling
proprietary caching for high performance on your hive database tables,
rather than doing MapReduce translation of hive queries. Running Spark SQL
on spark inter
How do I set up a connection to Impala? Do I need to point to the
impala-jdbc jar in the dependencies?
Thanks,
Abhi
On Wed, Aug 31, 2016 at 10:36 AM, Abhi Basu <9000r...@gmail.com> wrote:
> OK, got it. Added the hadoop jar to dependencies and it started working.
>
> Thanks.
>
> On Wed, Aug 31, 2016 at
OK, got it. Added the hadoop jar to dependencies and it started working.
Thanks.
On Wed, Aug 31, 2016 at 10:24 AM, Abhi Basu <9000r...@gmail.com> wrote:
> So, path to the jars like /usr/lib/hive/* ?
>
> On Wed, Aug 31, 2016 at 9:53 AM, Jeff Zhang wrote:
>
>> You don't need to copy these jars ma
So, path to the jars like /usr/lib/hive/* ?
On Wed, Aug 31, 2016 at 9:53 AM, Jeff Zhang wrote:
> You don't need to copy these jars manually; just specify them on the
> interpreter setting page.
>
> On Wed, Aug 31, 2016 at 9:52 PM, Abhi Basu <9000r...@gmail.com> wrote:
>
>> Where do these jars ha
You don't need to copy these jars manually; just specify them on the
interpreter setting page.
On Wed, Aug 31, 2016 at 9:52 PM, Abhi Basu <9000r...@gmail.com> wrote:
> Where do these jars have to be placed?
>
> I thought copying the hive-site.xml and pointing to hadoop conf folder in
> zeppelin c
Where do these jars have to be placed?
I thought copying the hive-site.xml and pointing to the Hadoop conf folder in
the Zeppelin conf should be enough (like before).
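(The "pointing to the Hadoop conf folder" part is usually done in
conf/zeppelin-env.sh; a sketch, with an illustrative path:)
export HADOOP_CONF_DIR=/etc/hadoop/conf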
Thanks,
Abhi
On Tue, Aug 30, 2016 at 6:59 PM, Jeff Zhang wrote:
> You need to add the following 2 dependencies in the interpreter setting
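(Jeff doesn't name them in this snippet, but for the Hive-over-JDBC setup the
two artifacts are typically along these lines; the versions are illustrative:)
org.apache.hive:hive-jdbc:0.14.0
org.apache.hadoop:hadoop-common:2.6.0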
Hi Jongyoul - I followed the exact same steps for compiling and setting up
the new build from source as for 0.5.6 (the only difference is that I
acquired the source for the latest build using "git clone").
hive-site.xml was copied to the conf directory. But the Spark interpreter is
not talking to the hive metastore. B
Canan,
The idea behind Helium is precisely what you mention. Think of Helium as a
way to extend the Zeppelin UI. Zeppelin had 2 existing extensibility points:
interpreters and notebook storage. Helium adds the 3rd extensibility
point: the Zeppelin UI itself.
See the demo at
https://cwiki.apache.org/c
I noticed Helium in Zeppelin; I'm not sure whether we can add widgets to the
Zeppelin UI once Helium is implemented. Thanks
Hello,
Did you copy your hive-site.xml to the proper location?
On Wed, Aug 31, 2016 at 3:52 PM, Pradeep Reddy
wrote:
> Nothing obvious. I will stick to the 0.5.6 build until the latest builds
> stabilize.
>
> On Wed, Aug 31, 2016 at 1:39 AM, Jeff Zhang wrote:
>
>> Then I guess maybe you are connecti