Re: Error in Hive on Spark

2016-03-23  Xuefu Zhang
Yes, it seems more viable that you integrate your application with HS2 via JDBC or Thrift rather than at the code level.

--Xuefu

On Tue, Mar 22, 2016 at 12:01 AM, Stana wrote:
> Hi, Xuefu
>
> You are right.
> Maybe I should launch spark-submit by HS2 or Hive CLI ?
>
> Thanks a lot,
> Stana
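As a point of reference, a minimal JDBC connection from an application to HiveServer2 might look like the sketch below; the host, port, credentials, and query are placeholders, not values from this thread.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class Hs2JdbcExample {
        public static void main(String[] args) throws Exception {
            // Older Hive JDBC drivers may need an explicit
            // Class.forName("org.apache.hive.jdbc.HiveDriver") before connecting.
            // Host, port, database, user, and query are placeholders.
            String url = "jdbc:hive2://hs2-host:10000/default";
            try (Connection conn = DriverManager.getConnection(url, "hive", "");
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM some_table")) {
                while (rs.next()) {
                    System.out.println(rs.getLong(1));
                }
            }
        }
    }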

Re: Error in Hive on Spark

2016-03-22  Stana
Hi, Xuefu

You are right.
Maybe I should launch spark-submit by HS2 or Hive CLI?

Thanks a lot,
Stana

2016-03-22 1:16 GMT+08:00 Xuefu Zhang:
> Stana,
>
> I'm not sure if I fully understand the problem. spark-submit is launched on
> the same host as your application, which should be able to access
> hive-exec.jar.

Re: Error in Hive on Spark

2016-03-21  Xuefu Zhang
Stana,

I'm not sure if I fully understand the problem. spark-submit is launched on the same host as your application, which should be able to access hive-exec.jar. The YARN cluster needs the jar as well, but HS2 or Hive CLI would take care of that. Since you are not using either of them, it's your application's responsibility to make the jar available.

Re: Error in Hive on Spark

2016-03-20  Stana
Does anyone have suggestions on setting the hive-exec-2.0.0.jar path as a property in the application? Something like:

hiveConf.set("hive.remote.driver.jar", "hdfs://storm0:9000/tmp/spark-assembly-1.4.1-hadoop2.6.0.jar")

2016-03-11 10:53 GMT+08:00 Stana:
> Thanks for the reply.
>
> I have set the property spark.home in my application.
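For what it's worth, one hedged way to hand extra jars to Hive on Spark from application code is through properties that do exist, hive.aux.jars.path on the Hive side and spark.jars on the Spark side; whether either resolves this particular error is an assumption, and the HDFS path below is only a placeholder echoing the one above.

    import org.apache.hadoop.hive.conf.HiveConf;

    public class JarPathConfigSketch {
        public static HiveConf buildConf() {
            HiveConf hiveConf = new HiveConf();
            // Assumption: shipping hive-exec through standard properties; the path is a placeholder.
            hiveConf.set("hive.aux.jars.path", "hdfs://storm0:9000/tmp/hive-exec-2.0.0.jar");
            // spark.* properties set on the HiveConf are typically forwarded to the
            // SparkConf that Hive on Spark builds for spark-submit.
            hiveConf.set("spark.jars", "hdfs://storm0:9000/tmp/hive-exec-2.0.0.jar");
            return hiveConf;
        }
    }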

Re: Error in Hive on Spark

2016-03-10  Stana
Thanks for the reply.

I have set the property spark.home in my application; otherwise the application threw a 'SPARK_HOME not found' exception.

I found this in the Hive source code, in SparkClientImpl.java:

private Thread startDriver(final RpcServer rpcServer, final String clientId, final String secret) throws IOException
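The part of that method relevant to this thread is how it locates the Spark installation. Below is a paraphrased sketch of that lookup order, not the verbatim Hive source, consistent with Xuefu's note below about SPARK_HOME and spark.home:

    import java.util.Map;

    public class SparkHomeLookupSketch {
        // Paraphrased from SparkClientImpl.startDriver; not the verbatim Hive source.
        static String findSparkHome(Map<String, String> conf) {
            String sparkHome = conf.get("spark.home");        // 1. Hive/Spark configuration
            if (sparkHome == null) {
                sparkHome = System.getenv("SPARK_HOME");      // 2. environment variable
            }
            if (sparkHome == null) {
                sparkHome = System.getProperty("spark.home"); // 3. JVM system property
            }
            // With no Spark home at all, the client cannot locate bin/spark-submit,
            // which is where the 'SPARK_HOME not found' error above comes from.
            return sparkHome;
        }
    }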

Re: Error in Hive on Spark

2016-03-10  Xuefu Zhang
You can probably avoid the problem by setting the environment variable SPARK_HOME or the JVM property spark.home so that it points to your Spark installation.

--Xuefu

On Thu, Mar 10, 2016 at 3:11 AM, Stana wrote:
> I am trying out Hive on Spark with Hive 2.0.0 and Spark 1.4.1, and
> executing org.apache.hadoop.
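Concretely, either of the following should satisfy that lookup before the Hive session is created; the installation path is a placeholder. In the launching shell, export SPARK_HOME pointing at the Spark installation, or set the JVM property from the application itself:

    public class SparkHomeProperty {
        public static void main(String[] args) {
            // Placeholder path; it must point at a local Spark installation
            // that contains bin/spark-submit.
            System.setProperty("spark.home", "/opt/spark-1.4.1-bin-hadoop2.6");
            // ... then build the HiveConf / session and run queries as usual ...
        }
    }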