[ https://issues.apache.org/jira/browse/HIVE-7980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15112005#comment-15112005 ]
Sourabh Jain commented on HIVE-7980:
------------------------------------

I am facing a similar issue, but when I try to build the Spark 1.3.3 version with the above command, I hit the following error: https://issues.apache.org/jira/browse/SPARK-10944. Please help; am I missing anything?
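For context before the quoted report: the failure below comes from running a shuffle-producing query through Beeline with Spark set as Hive's execution engine. A minimal Beeline session in the spirit of the Getting Started guide might look like the sketch below; the spark.master value is an assumed placeholder (the exact master URL is not shown in the report), while the query itself is the one from the report.

    -- sketch of a minimal Hive-on-Spark repro in Beeline (spark.master value is a placeholder)
    -- switch query execution from MapReduce to the Spark backend
    set hive.execution.engine=spark;
    -- point Hive at the Spark cluster; the report ran on YARN, so yarn-client is assumed here
    set spark.master=yarn-client;
    -- ORDER BY forces a shuffle, so Hive must generate a Spark plan for the query
    select * from test where id = 1 order by id;

The NullPointerException in SparkContext.defaultParallelism shown in the log appears while Hive is still generating that Spark plan (SparkPlanGenerator.generateRDD), i.e. before anything runs on the cluster.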
> Hive on spark issue..
> ---------------------
>
>                 Key: HIVE-7980
>                 URL: https://issues.apache.org/jira/browse/HIVE-7980
>             Project: Hive
>          Issue Type: Bug
>          Components: HiveServer2, Spark
>    Affects Versions: spark-branch
>         Environment: Test environment is:
> . hive 0.14.0 (spark branch version)
> . spark (http://ec2-50-18-79-139.us-west-1.compute.amazonaws.com/data/spark-assembly-1.1.0-SNAPSHOT-hadoop2.3.0.jar)
> . hadoop 2.4.0 (yarn)
>            Reporter: alton.jung
>            Assignee: Chao Sun
>             Fix For: spark-branch
>
> I followed this guide (https://cwiki.apache.org/confluence/display/Hive/Hive+on+Spark%3A+Getting+Started)
> and compiled Hive from the spark branch. In the next step I ran into the error below.
> (I typed the Hive query in Beeline; I used a simple query with "order by" to trigger the parallel work, e.g.
> select * from test where id = 1 order by id;
> )
> [Error list is]
> 2014-09-04 02:58:08,796 ERROR spark.SparkClient (SparkClient.java:execute(158)) - Error generating Spark Plan
> java.lang.NullPointerException
>         at org.apache.spark.SparkContext.defaultParallelism(SparkContext.scala:1262)
>         at org.apache.spark.SparkContext.defaultMinPartitions(SparkContext.scala:1269)
>         at org.apache.spark.SparkContext.hadoopRDD$default$5(SparkContext.scala:537)
>         at org.apache.spark.api.java.JavaSparkContext.hadoopRDD(JavaSparkContext.scala:318)
>         at org.apache.hadoop.hive.ql.exec.spark.SparkPlanGenerator.generateRDD(SparkPlanGenerator.java:160)
>         at org.apache.hadoop.hive.ql.exec.spark.SparkPlanGenerator.generate(SparkPlanGenerator.java:88)
>         at org.apache.hadoop.hive.ql.exec.spark.SparkClient.execute(SparkClient.java:156)
>         at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionImpl.submit(SparkSessionImpl.java:52)
>         at org.apache.hadoop.hive.ql.exec.spark.SparkTask.execute(SparkTask.java:77)
>         at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:161)
>         at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:85)
>         at org.apache.hadoop.hive.ql.exec.TaskRunner.run(TaskRunner.java:72)
> 2014-09-04 02:58:11,108 ERROR ql.Driver (SessionState.java:printError(696)) - FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.spark.SparkTask
> 2014-09-04 02:58:11,182 INFO log.PerfLogger (PerfLogger.java:PerfLogEnd(135)) - </PERFLOG method=Driver.execute start=1409824527954 end=1409824691182 duration=163228 from=org.apache.hadoop.hive.ql.Driver>
> 2014-09-04 02:58:11,223 INFO log.PerfLogger (PerfLogger.java:PerfLogBegin(108)) - <PERFLOG method=releaseLocks from=org.apache.hadoop.hive.ql.Driver>
> 2014-09-04 02:58:11,224 INFO log.PerfLogger (PerfLogger.java:PerfLogEnd(135)) - </PERFLOG method=releaseLocks start=1409824691223 end=1409824691224 duration=1 from=org.apache.hadoop.hive.ql.Driver>
> 2014-09-04 02:58:11,306 ERROR operation.Operation (SQLOperation.java:run(199)) - Error running hive query:
> org.apache.hive.service.cli.HiveSQLException: Error while processing statement: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.spark.SparkTask
>         at org.apache.hive.service.cli.operation.Operation.toSQLException(Operation.java:284)
>         at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:146)
>         at org.apache.hive.service.cli.operation.SQLOperation.access$100(SQLOperation.java:69)
>         at org.apache.hive.service.cli.operation.SQLOperation$1$1.run(SQLOperation.java:196)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:415)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
>         at org.apache.hadoop.hive.shims.HadoopShimsSecure.doAs(HadoopShimsSecure.java:508)
>         at org.apache.hive.service.cli.operation.SQLOperation$1.run(SQLOperation.java:208)
>         at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>         at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:166)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         at java.lang.Thread.run(Thread.java:722)
> 2014-09-04 02:58:11,634 INFO exec.ListSinkOperator (Operator.java:close(580)) - 47 finished. closing...
> 2014-09-04 02:58:11,683 INFO exec.ListSinkOperator (Operator.java:close(598)) - 47 Close done
> 2014-09-04 02:58:12,190 INFO log.PerfLogger (PerfLogger.java:PerfLogBegin(108)) - <PERFLOG method=releaseLocks from=org.apache.hadoop.hive.ql.Driver>
> 2014-09-04 02:58:12,234 INFO log.PerfLogger (PerfLogger.java:PerfLogEnd(135)) - </PERFLOG method=releaseLocks start=1409824692190 end=1409824692191 duration=1 from=org.apache.hadoop.hive.ql.Driver>

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)