These are the components:


java -version
java version "1.8.0_77"
Java(TM) SE Runtime Environment (build 1.8.0_77-b03)
Java HotSpot(TM) 64-Bit Server VM (build 25.77-b03, mixed mode)



hadoop version
Hadoop 2.6.0
Subversion https://git-wip-us.apache.org/repos/asf/hadoop.git -r e3496499ecb8d220fba99dc5ed4c99c8f9e33bb1
Compiled by jenkins on 2014-11-13T21:10Z
Compiled with protoc 2.5.0


hive --version
Hive 2.0.0

      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 1.6.1
      /_/

Using Scala version 2.10.5 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_77)

Metastore

BANNER                                                                       CON_ID
---------------------------------------------------------------------------- ------
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production

To me all is working OK.
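For reference, a build along these lines produces a stack like the one above (a sketch only: the -Phive and -Phive-thriftserver profile names come from the Spark build docs, while the Hadoop profile, version, and distribution name here are assumptions to adjust for your own environment):

```shell
# Sketch: assemble the Maven profiles for a Spark 1.6.1 source build with
# Hive and the JDBC/Thrift server enabled. The Hadoop profile and version
# match the Hadoop 2.6.0 output above and are assumptions for your setup.
BUILD_FLAGS="-Phadoop-2.6 -Dhadoop.version=2.6.0 -Phive -Phive-thriftserver -DskipTests"

# Print the full command rather than running it here (a real build takes a while).
echo "./make-distribution.sh --name spark-1.6.1-bin-hadoop2.6 --tgz $BUILD_FLAGS"
```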

HTH



Dr Mich Talebzadeh



LinkedIn: https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw



http://talebzadehmich.wordpress.com



On 13 April 2016 at 18:10, Michael Segel <msegel_had...@hotmail.com> wrote:

> Mich
>
> Are you building your own releases from the source?
> Which version of Scala?
>
> Again, the builds seem to be ok and working, but I don’t want to hit some
> ‘gotcha’ if I could avoid it.
>
>
> On Apr 13, 2016, at 7:15 AM, Mich Talebzadeh <mich.talebza...@gmail.com>
> wrote:
>
> Hi,
>
> I am not sure this helps.
>
> We use Spark 1.6 and Hive 2. I also use JDBC (beeline for Hive) plus
> Oracle and Sybase. They all work fine.
>
>
> HTH
>
> Dr Mich Talebzadeh
>
>
> LinkedIn: https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>
>
> http://talebzadehmich.wordpress.com
>
>
>
> On 12 April 2016 at 23:42, Michael Segel <msegel_had...@hotmail.com>
> wrote:
>
>> Hi,
>> This is probably a silly question on my part…
>>
>> I’m looking at the latest (spark 1.6.1 release) and would like to do a
>> build w Hive and JDBC support.
>>
>> From the documentation, I see two things that make me scratch my head.
>>
>> 1) Scala 2.11
>> "Spark does not yet support its JDBC component for Scala 2.11.”
>>
>> So if we want to use JDBC, don’t use Scala 2.11.x (in this case it’s
>> 2.11.8).
>>
>> 2) Hive Support
>> "To enable Hive integration for Spark SQL along with its JDBC server and
>> CLI, add the -Phive and -Phive-thriftserver profiles to your existing
>> build options. By default Spark will build with Hive 0.13.1 bindings.”
>>
>> So if we’re looking at a later release of Hive… lets say 1.1.x … still
>> use the -Phive and -Phive-thriftserver profiles. Is there anything else we should
>> consider?
>>
>> Just asking because I’ve noticed that this part of the documentation
>> hasn’t changed much over the past releases.
>>
>> Thanks in Advance,
>>
>> -Mike
>>
>>
>
>
