That's a lot more complicated than you might think.

We've done some basic work to get HiveContext to compile against Hive
1.1.0. Here's the code:
https://github.com/cloudera/spark/commit/00e2c7e35d4ac236bcfbcd3d2805b483060255ec
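
For reference, HiveContext is the SQL entry point that links against Hive's
client libraries (metastore, HiveQL support), which is why it has to be
compiled against a specific Hive version. A rough usage sketch, assuming a
Spark 1.x API and an existing SparkContext called sc:

    import org.apache.spark.sql.hive.HiveContext

    // HiveContext wraps a SparkContext and talks to the Hive metastore,
    // so it needs to link against a compatible Hive client version.
    val hiveContext = new HiveContext(sc)

    // Run a HiveQL query through the embedded Hive support.
    hiveContext.sql("SHOW TABLES").collect().foreach(println)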

We didn't send that upstream because it only solves half of the
problem: the hive-thriftserver is disabled in our CDH build because it
uses a lot of Hive APIs that were removed in 1.1.0, so even getting it
to compile is really complicated.

If there's interest in getting the HiveContext part fixed up, I can
send a PR for that code. But at this time I don't really have plans to
look at the thrift server.


On Mon, Apr 27, 2015 at 11:58 AM, Punyashloka Biswal
<punya.bis...@gmail.com> wrote:
> Dear Spark devs,
>
> Is there a plan for staying up-to-date with current (and future) versions
> of Hive? Spark currently supports version 0.13 (June 2014), but the latest
> version of Hive is 1.1.0 (March 2015). I don't see any Jira tickets about
> updating beyond 0.13, so I was wondering whether this was intentional or
> just that nobody had started work on this yet.
>
> I'd be happy to work on a PR for the upgrade if one of the core developers
> can tell me what pitfalls to watch out for.
>
> Punya



-- 
Marcelo
