Re: When queried through HiveContext, does Hive execute these queries using its execution engine (default is MapReduce), or does Spark just read the data and perform those queries itself?

2016-06-08 Thread lalit sharma
To add to what Vikash said above, a bit more on the internals:
1. Two components work together to achieve the Hive + Spark integration:
   a. HiveContext, which extends SQLContext and adds Hive-specific logic, e.g. loading the jars needed to talk to the underlying metastore DB and loading configs from hive-site.
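A minimal sketch of that distinction, assuming the Spark 1.x API (the variable names and `some_table` are illustrative): HiveContext consults the Hive metastore for metadata only, while the query itself is planned and executed by Spark's engine, not by Hive's MapReduce engine.

```scala
// Sketch, Spark 1.x API: HiveContext extends SQLContext.
// It reads hive-site.xml from the classpath to locate the metastore,
// but query execution happens entirely inside Spark, not MapReduce.
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext

object HiveContextSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("hive-context-demo"))
    val hc = new HiveContext(sc) // picks up hive-site.xml, connects to metastore

    // Table location and schema come from the Hive metastore;
    // the scan and aggregation below run as ordinary Spark jobs.
    val df = hc.sql("SELECT count(*) FROM some_table") // some_table is hypothetical
    df.show()
  }
}
```

This is why a HiveContext query shows up in the Spark UI as Spark stages rather than as a Hive/MapReduce job on the cluster.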

Re: can not use udf in hivethriftserver2

2016-05-30 Thread lalit sharma
Can you try adding the jar to the SPARK_CLASSPATH env variable?

On Mon, May 30, 2016 at 9:55 PM, 喜之郎 <251922...@qq.com> wrote:
> Hi all, I have a problem when using hiveserver2 and beeline.
> When I use CLI mode, the UDF works well.
> But when I begin to use hiveserver2 and beeline, the UDF can not work
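A sketch of the suggestion above, assuming the jar path and UDF class name are placeholders for your own:

```shell
# Make the UDF jar visible to the Thrift server JVM before it starts.
# /path/to/my-udfs.jar and com.example.MyUpper are illustrative names.
export SPARK_CLASSPATH=/path/to/my-udfs.jar

# Restart the Thrift server so it picks up the new classpath.
$SPARK_HOME/sbin/stop-thriftserver.sh
$SPARK_HOME/sbin/start-thriftserver.sh

# Then, from beeline, register the function in the session:
#   CREATE TEMPORARY FUNCTION my_upper AS 'com.example.MyUpper';
```

Note that SPARK_CLASSPATH was deprecated in later Spark releases in favor of `spark.driver.extraClassPath` / `spark.executor.extraClassPath`, which can be set in spark-defaults.conf for the Thrift server as well.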

Re: Not able pass 3rd party jars to mesos executors

2016-05-11 Thread lalit sharma
A point to note, as per the docs as well: "Note that jars or python files that are passed to spark-submit should be URIs reachable by Mesos slaves, as the Spark driver doesn't automatically upload local jars."
http://spark.apache.org/docs/latest/running-on-mesos.html
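In practice this means passing the 3rd-party jar by a URI every Mesos slave can fetch, rather than a local filesystem path. A sketch, where the HDFS/HTTP locations and application names are placeholders:

```shell
# Wrong on Mesos: a local path only exists on the submitting machine,
# and the driver does not upload it to the slaves.
#   spark-submit --jars /home/me/libs/third-party.jar ...

# Works: host the jar somewhere all slaves can reach, e.g. HDFS or HTTP.
hdfs dfs -put /home/me/libs/third-party.jar /libs/third-party.jar

spark-submit \
  --master mesos://mesos-master:5050 \
  --jars hdfs:///libs/third-party.jar \
  --class com.example.MyApp \
  hdfs:///apps/my-app.jar
```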