On Wed, Apr 8, 2015 at 8:29 AM, Mohammed Guller wrote:
>>
>> +1
>>
>> Interestingly, I ran into exactly the same issue yesterday. I
>> couldn't find any documentation about which project to include as a
>> dependency in build.sbt to use…
Sent: Wednesday, April 8, 2015 6:16 PM
To: Todd Nist
Cc: Mohammed Guller; Michael Armbrust; James Aley; user
Subject: Re: Advice using Spark SQL and Thrift JDBC Server
Hey Guys,
Someone submitted a patch for this just now. It's a very simple fix and we can
merge it soon. However, it's just…
From: Michael Armbrust [mailto:mich...@databricks.com]
Sent: Wednesday, April 8, 2015 11:54 AM
To: Mohammed Guller
Cc: Todd Nist; James Aley; user; Patrick Wendell
Subject: Re: Advice using Spark SQL and Thrift JDBC Server

Sorry guys. I didn't realize that
https://issues.apache.org/jira/browse/SPARK-4925 was not fixed yet.
Sent: Wednesday, April 8, 2015 5:49 AM
To: James Aley
Cc: Michael Armbrust; user
Subject: Re: Advice using Spark SQL and Thrift JDBC Server

To use HiveThriftServer2.startWithContext, I thought one would use the
following artifact in the build:

"org.apache.spark" %% "spark-hive-thriftserver" % "1.3.0"

But I am unable to resolve the artifact. I do not see it in Maven Central
or any other repo. Do I need to build Spark and p…
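For reference, once the artifact is actually published (which is what SPARK-4925 tracks), the build.sbt entry would presumably look something like the sketch below; the companion dependencies and `provided` scoping are assumptions for a typical 1.3.x application build, not something confirmed in this thread:

```scala
// build.sbt -- sketch only; assumes the spark-hive-thriftserver artifact
// is actually published for this Spark version (the subject of SPARK-4925)
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "1.3.0" % "provided",
  "org.apache.spark" %% "spark-hive" % "1.3.0",
  "org.apache.spark" %% "spark-hive-thriftserver" % "1.3.0"
)
```

Note that `%%` appends the Scala binary version to the artifact name, so sbt looks for e.g. `spark-hive-thriftserver_2.10`; if that suffixed artifact was never uploaded to Maven Central, resolution fails exactly as described above.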
Excellent, thanks for your help, I appreciate your advice!
On 7 Apr 2015 20:43, "Michael Armbrust" wrote:
> That should totally work. The other option would be to run a persistent
> metastore that multiple contexts can talk to and periodically run a job
> that creates missing tables. The trade-off here would be more complexity,
> but less downtime due to the server restarting.
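As a rough sketch of that shared-metastore option: each context can be pointed at a common metastore service instead of the default local Derby one. The host/port below are placeholders, and `setConf` is used here merely as a stand-in for a proper hive-site.xml:

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext

// Sketch: point this context at a shared, persistent Hive metastore
// (hypothetical host/port) rather than the default local Derby metastore.
val sc = new SparkContext(new SparkConf().setAppName("shared-metastore-demo"))
val hiveContext = new HiveContext(sc)
hiveContext.setConf("hive.metastore.uris", "thrift://metastore-host:9083")

// Tables created through this context are now visible to any other
// context (or Thrift JDBC server) configured against the same metastore.
```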
On Tue, Apr 7, 2015 at 12:34 PM, James Aley wrote:
Hi Michael,
Thanks so much for the reply - that really cleared a lot of things up for
me!
Let me just check that I've interpreted one of your suggestions for (4)
correctly... Would it make sense for me to write a small wrapper app that
pulls in hive-thriftserver as a dependency, iterates my Parqu…
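A minimal sketch of that wrapper-app idea, assuming Spark 1.3 APIs; the file paths and table names are hypothetical, purely for illustration:

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext
import org.apache.spark.sql.hive.thriftserver.HiveThriftServer2

object ThriftWrapper {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("thrift-wrapper"))
    val hiveContext = new HiveContext(sc)

    // Register each Parquet dataset as a temporary table
    // (hypothetical paths and names).
    hiveContext.parquetFile("/data/events.parquet").registerTempTable("events")
    hiveContext.parquetFile("/data/users.parquet").registerTempTable("users")

    // Expose everything registered in this context over JDBC/ODBC.
    HiveThriftServer2.startWithContext(hiveContext)
  }
}
```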
>
> 1) What exactly is the relationship between the thrift server and Hive?
> I'm guessing Spark is just making use of the Hive metastore to access table
> definitions, and maybe some other things, is that the case?
>
Underneath the covers, the Spark SQL thrift server is executing queries
using a…
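For completeness: since the server speaks the HiveServer2 Thrift protocol, a standard Hive JDBC client should be able to connect to it. A sketch, assuming the default port 10000 and a hypothetical `events` table (the Hive JDBC driver, `org.apache.hive:hive-jdbc`, must be on the classpath):

```scala
import java.sql.DriverManager

// Sketch: connect to the Spark SQL Thrift JDBC server with the Hive
// JDBC driver; 10000 is the default listen port.
Class.forName("org.apache.hive.jdbc.HiveDriver")
val conn = DriverManager.getConnection("jdbc:hive2://localhost:10000/default", "", "")
val stmt = conn.createStatement()
val rs = stmt.executeQuery("SELECT count(*) FROM events") // hypothetical table
while (rs.next()) println(rs.getLong(1))
conn.close()
```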