Yes, it seems more viable to integrate your application with HS2 via
JDBC or Thrift rather than at the code level.
--Xuefu
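The JDBC route suggested above could look roughly like the sketch below. All specifics are assumptions for illustration: the host/port (localhost:10000), the database name, and the credentials; a real run also needs the hive-jdbc driver on the classpath and a running HiveServer2.

```java
// Sketch of connecting to HiveServer2 over JDBC. Host, port, database,
// and credentials are made-up placeholders, not values from this thread.
public class Hs2JdbcExample {
    // Builds a HiveServer2 JDBC URL of the form jdbc:hive2://host:port/db.
    static String hs2Url(String host, int port, String db) {
        return "jdbc:hive2://" + host + ":" + port + "/" + db;
    }

    public static void main(String[] args) {
        String url = hs2Url("localhost", 10000, "default");
        System.out.println(url);
        // With a live HS2 and the hive-jdbc driver available:
        // try (java.sql.Connection conn =
        //          java.sql.DriverManager.getConnection(url, "user", "");
        //      java.sql.Statement stmt = conn.createStatement();
        //      java.sql.ResultSet rs = stmt.executeQuery("SELECT 1")) {
        //     while (rs.next()) System.out.println(rs.getInt(1));
        // }
    }
}
```

The advantage of this approach is that HS2 owns the Spark session and ships hive-exec.jar to the cluster itself, so the application never has to manage those jars.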
On Tue, Mar 22, 2016 at 12:01 AM, Stana wrote:
Hi, Xuefu
You are right.
Maybe I should launch spark-submit via HS2 or the Hive CLI?
Thanks a lot,
Stana
2016-03-22 1:16 GMT+08:00 Xuefu Zhang :
Stana,
I'm not sure I fully understand the problem. spark-submit is launched on
the same host as your application, which should be able to access
hive-exec.jar. The YARN cluster needs the jar as well, but HS2 or the Hive
CLI will take care of that. Since you are using neither of them, it's your
Does anyone have a suggestion for setting the hive-exec-2.0.0.jar path as
a property in the application?
Something like
'hiveConf.set("hive.remote.driver.jar","hdfs://storm0:9000/tmp/spark-assembly-1.4.1-hadoop2.6.0.jar")'.
2016-03-11 10:53 GMT+08:00 Stana :
Thanks for the reply.
I have set the property spark.home in my application. Otherwise the
application threw a 'SPARK_HOME not found' exception.
I found this in the Hive source code, in SparkClientImpl.java:
private Thread startDriver(final RpcServer rpcServer, final String
clientId, final String secret) throws IOException {
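For context, startDriver essentially builds a spark-submit command line from the resolved Spark location and launches it as a child process. A minimal sketch of that idea follows; the driver class, jar path, and Spark directory are made-up placeholders, not Hive's actual invocation.

```java
import java.util.Arrays;
import java.util.List;

// Sketch: assemble and (optionally) launch a spark-submit command,
// roughly what SparkClientImpl.startDriver does. All paths and the
// class name are illustrative placeholders.
public class SparkSubmitLauncher {
    static List<String> buildCommand(String sparkHome, String appJar) {
        return Arrays.asList(
            sparkHome + "/bin/spark-submit",
            "--class", "org.example.Main",  // hypothetical driver class
            appJar);
    }

    public static void main(String[] args) {
        List<String> cmd = buildCommand("/opt/spark-1.4.1", "/tmp/app.jar");
        System.out.println(String.join(" ", cmd));
        // To actually launch (requires a real Spark installation):
        // new ProcessBuilder(cmd).inheritIO().start();
    }
}
```

This is why the method needs a usable Spark location before it can do anything: without SPARK_HOME or spark.home there is no bin/spark-submit to invoke.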
You can probably avoid the problem by setting the environment variable
SPARK_HOME or the JVM property spark.home to point to your Spark
installation.
--Xuefu
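A quick sketch of one way to resolve the Spark location along the lines suggested above, preferring the JVM property and falling back to the environment variable. The path used in main is a made-up example, not a real installation.

```java
// Sketch: resolve the Spark installation directory, checking the JVM
// property spark.home first and the SPARK_HOME env variable second.
public class SparkHomeLookup {
    static String sparkHome() {
        String fromProperty = System.getProperty("spark.home");
        if (fromProperty != null) {
            return fromProperty;
        }
        return System.getenv("SPARK_HOME");
    }

    public static void main(String[] args) {
        // Example value only; point this at your actual installation.
        System.setProperty("spark.home", "/opt/spark-1.4.1");
        System.out.println(sparkHome());  // prints /opt/spark-1.4.1
    }
}
```

Setting the property programmatically before the Spark client is created has the same effect as passing -Dspark.home=... on the JVM command line.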
On Thu, Mar 10, 2016 at 3:11 AM, Stana wrote:
> I am trying out Hive on Spark with Hive 2.0.0 and Spark 1.4.1, and
> executing org.apache.hadoop.