Except Spark SQL is geared more towards developers, and our users are
looking for a SQL engine like Hive (only faster). :)



On Wed, Aug 31, 2016 at 11:11 AM, Pradeep Reddy <pradeepreddy.a...@gmail.com> wrote:

> You could use the JDBC interpreter to set up a new interpreter for Impala.
> That said, Impala just uses your Hive metastore and its own caching for high
> performance on your Hive tables, rather than doing a MapReduce translation
> of Hive queries. Running Spark SQL through the Spark interpreter can be
> considered a reasonable alternative to running Impala queries over JDBC.
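>
> For example, once the Spark interpreter can see your hive-site.xml, a
> paragraph like the one below queries the same metastore tables through
> Spark SQL (the table name is just a placeholder):
>
> %sql
> select * from my_hive_table limit 10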
>
> http://www.cloudera.com/documentation/archive/impala/2-x/2-1-x/topics/impala_jdbc.html
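>
> As a rough sketch, the Impala settings in the JDBC interpreter could look
> something like the following (the driver class, host, and port are
> placeholders and depend on which Impala JDBC driver jar you add as a
> dependency):
>
> impala.driver com.cloudera.impala.jdbc41.Driver
> impala.url jdbc:impala://impala-host:21050
> impala.user impalaUser
> impala.password impalaPassword
>
> Queries against that prefix would then typically be run with %jdbc(impala).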
>
> Thanks,
> Pradeep
>
>
> On Wed, Aug 31, 2016 at 10:45 AM, Abhi Basu <9000r...@gmail.com> wrote:
>
>> How do I set up a connection to Impala? Do I need to point to the
>> impala-jdbc jar in the dependencies?
>>
>> Thanks,
>>
>> Abhi
>>
>> On Wed, Aug 31, 2016 at 10:36 AM, Abhi Basu <9000r...@gmail.com> wrote:
>>
>>> OK, got it. Added the hadoop jar to dependencies and it started working.
>>>
>>> Thanks.
>>>
>>> On Wed, Aug 31, 2016 at 10:24 AM, Abhi Basu <9000r...@gmail.com> wrote:
>>>
>>>> So, path to the jars like /usr/lib/hive/* ?
>>>>
>>>> On Wed, Aug 31, 2016 at 9:53 AM, Jeff Zhang <zjf...@gmail.com> wrote:
>>>>
>>>>> You don't need to copy these jars manually; just specify them on the
>>>>> interpreter setting page.
>>>>>
>>>>> On Wed, Aug 31, 2016 at 9:52 PM, Abhi Basu <9000r...@gmail.com> wrote:
>>>>>
>>>>>> Where do these jars have to be placed?
>>>>>>
>>>>>> I thought copying hive-site.xml and pointing to the Hadoop conf folder
>>>>>> in the Zeppelin conf should be enough (like before).
>>>>>>
>>>>>> Thanks,
>>>>>>
>>>>>> Abhi
>>>>>>
>>>>>> On Tue, Aug 30, 2016 at 6:59 PM, Jeff Zhang <zjf...@gmail.com> wrote:
>>>>>>
>>>>>>> You need to add the following two dependencies on the interpreter
>>>>>>> setting page.
>>>>>>>
>>>>>>> https://zeppelin.apache.org/docs/0.6.1/interpreter/hive.html#dependencies
>>>>>>>
>>>>>>> org.apache.hive:hive-jdbc:0.14.0
>>>>>>> org.apache.hadoop:hadoop-common:2.6.0
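>>>>>>>
>>>>>>> With those two artifacts added, a quick sanity check is a paragraph
>>>>>>> such as the one below (any table visible to your metastore will do):
>>>>>>>
>>>>>>> %hive
>>>>>>> show tables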
>>>>>>>
>>>>>>>
>>>>>>> On Wed, Aug 31, 2016 at 2:39 AM, Abhi Basu <9000r...@gmail.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> Folks:
>>>>>>>>
>>>>>>>> Seems like a config issue.
>>>>>>>>
>>>>>>>> 1. Copied hive-site.xml into /ZEPP_HOME/conf folder
>>>>>>>> 2. Added following to config file:
>>>>>>>>
>>>>>>>> export JAVA_HOME=/...../...
>>>>>>>> export HADOOP_CONF_DIR=/etc/hadoop/conf
>>>>>>>>
>>>>>>>>
>>>>>>>> I am using Zeppelin again after a while, and it looks like the Hive
>>>>>>>> interpreter is now part of the JDBC interpreter.
>>>>>>>> The interpreter properties seem to be set correctly:
>>>>>>>> Property       Value
>>>>>>>> hive.driver    org.apache.hive.jdbc.HiveDriver
>>>>>>>> hive.url       jdbc:hive2://localhost:10000
>>>>>>>> hive.user      hiveUser
>>>>>>>> hive.password  hivePassword
>>>>>>>>
>>>>>>>> When I run %hive from Zeppelin, I get a "hive jdbc driver not found"
>>>>>>>> error. How do I fix this? Also, how do I configure Impala within the
>>>>>>>> JDBC section of the interpreters?
>>>>>>>>
>>>>>>>> Thanks,
>>>>>>>>
>>>>>>>> Abhi
>>>>>>>>
>>>>>>>> --
>>>>>>>> Abhi Basu
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> Best Regards
>>>>>>>
>>>>>>> Jeff Zhang
>>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Abhi Basu
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Best Regards
>>>>>
>>>>> Jeff Zhang
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> Abhi Basu
>>>>
>>>
>>>
>>>
>>> --
>>> Abhi Basu
>>>
>>
>>
>>
>> --
>> Abhi Basu
>>
>
>


-- 
Abhi Basu
