Using this URL made it work:
jdbc:hive2://myhost.example.com:21050/;auth=noSasl
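For anyone hitting the same SASL mismatch from a plain JDBC client, here is a minimal sketch of building and using that URL outside Zeppelin. The class and helper names are illustrative, the host is a placeholder, and it assumes the hive-jdbc driver is on the classpath:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ImpalaUrl {
    // Impala speaks the HiveServer2 wire protocol, so the hive-jdbc driver is used.
    // ";auth=noSasl" matches an impalad that was started without SASL/Kerberos;
    // omitting it against a noSasl daemon is what causes the connection to hang.
    public static String impalaUrl(String host, int port) {
        return "jdbc:hive2://" + host + ":" + port + "/;auth=noSasl";
    }

    public static void main(String[] args) throws Exception {
        // 21050 is Impala's default HiveServer2-protocol port.
        String url = impalaUrl("myhost.example.com", 21050);
        try (Connection conn = DriverManager.getConnection(url);
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SELECT 1")) {
            while (rs.next()) {
                System.out.println(rs.getInt(1));
            }
        }
    }
}
```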
On Wed, Aug 31, 2016 at 11:13 AM, Abhi Basu <9000r...@gmail.com> wrote:
Except spark-sql is geared more towards developers and our users are
looking for a SQL engine like hive (except faster). :)
On Wed, Aug 31, 2016 at 11:11 AM, Pradeep Reddy wrote:
You could use the JDBC interpreter to set up a new interpreter for Impala.
That said, Impala is just using your hive metastore and enabling
proprietary caching for high performance on your hive database tables
rather than doing map reduce translation of hive queries. Running Spark SQL
on spark inter…
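Concretely, pointing the generic JDBC interpreter at Impala in Zeppelin 0.6.x comes down to a few interpreter properties. The values below are placeholders, and the `noSasl` flag only applies when the Impala daemon runs without SASL/Kerberos; the `org.apache.hive:hive-jdbc` artifact mentioned downthread goes in the interpreter's dependency list:

```
default.driver   org.apache.hive.jdbc.HiveDriver
default.url      jdbc:hive2://impala-host:21050/;auth=noSasl
default.user     your_user
```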
How do I set up a connection to impala? Do I need to point to the impala-jdbc
jar in dependencies?
Thanks,
Abhi
On Wed, Aug 31, 2016 at 10:36 AM, Abhi Basu <9000r...@gmail.com> wrote:
OK, got it. Added the hadoop jar to dependencies and it started working.
Thanks.
On Wed, Aug 31, 2016 at 10:24 AM, Abhi Basu <9000r...@gmail.com> wrote:
So, path to the jars like /usr/lib/hive/* ?
On Wed, Aug 31, 2016 at 9:53 AM, Jeff Zhang wrote:
You don't need to copy these jars manually, just specify them in the
interpreter setting page.
On Wed, Aug 31, 2016 at 9:52 PM, Abhi Basu <9000r...@gmail.com> wrote:
Where do these jars have to be placed?
I thought copying the hive-site.xml and pointing to hadoop conf folder in
zeppelin conf should be enough (like before).
Thanks,
Abhi
On Tue, Aug 30, 2016 at 6:59 PM, Jeff Zhang wrote:
You need to add the following 2 dependencies in the interpreter setting page.
https://zeppelin.apache.org/docs/0.6.1/interpreter/hive.html#dependencies
org.apache.hive:hive-jdbc:0.14.0
org.apache.hadoop:hadoop-common:2.6.0
On Wed, Aug 31, 2016 at 2:39 AM, Abhi Basu <9000r...@gmail.com> wrote:
Folks:
Seems like a config issue.
1. Copied hive-site.xml into /ZEPP_HOME/conf folder
2. Added following to config file:
export JAVA_HOME=/./...
export HADOOP_CONF_DIR=/etc/hadoop/conf
I am using Zeppelin again after a while, and it looks like the Hive
interpreter is now part of the JDBC interpreter.