You could use the programmatic API
<http://spark.apache.org/docs/latest/sql-programming-guide.html#hive-tables>
to run the Hive queries directly.
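
For reference, here is a minimal sketch along the lines of that guide (this assumes a Spark 1.0.x build with Hive support and reuses the "src" / kv1.txt example from the docs; adapt the table and paths to your setup):

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.hive.HiveContext

    object HiveQueryExample {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("HiveQueryExample"))
        // HiveContext gives you HiveQL on top of an existing SparkContext
        val hiveContext = new HiveContext(sc)

        // hql() runs a HiveQL statement and returns a SchemaRDD
        hiveContext.hql("CREATE TABLE IF NOT EXISTS src (key INT, value STRING)")
        hiveContext.hql(
          "LOAD DATA LOCAL INPATH 'examples/src/main/resources/kv1.txt' INTO TABLE src")

        // Results come back as rows you can collect or transform like any RDD
        hiveContext.hql("FROM src SELECT key, value").collect().foreach(println)

        sc.stop()
      }
    }

That would let your middle tier call into Spark directly instead of going through a JDBC driver, if that fits your deployment.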


On Wed, Aug 20, 2014 at 9:47 AM, Tam, Ken K <ken....@verizon.com> wrote:

> What is the best way to run Hive queries in 1.0.2? In my case, Hive
> queries will be invoked from a middle tier webapp. I am thinking to use the
> Hive JDBC driver.
>
>
>
> Thanks,
>
> Ken
>
>
>
> *From:* Michael Armbrust [mailto:mich...@databricks.com]
> *Sent:* Wednesday, August 20, 2014 9:38 AM
> *To:* Tam, Ken K
> *Cc:* user@spark.apache.org
> *Subject:* Re: Is Spark SQL Thrift Server part of the 1.0.2 release
>
>
>
> No.  It'll be part of 1.1.
>
>
>
> On Wed, Aug 20, 2014 at 9:35 AM, Tam, Ken K <ken....@verizon.com> wrote:
>
> Is Spark SQL Thrift Server part of the 1.0.2 release? If not, which
> release is the target?
>
>
>
> Thanks,
>
> Ken
>
>
>