Often java.lang.NoSuchMethodError means that you have more than one version
of a library on your classpath. In this case the Driver.getResults failure
points at a Hive version mismatch, and the hashInt failure further down
points at a conflicting Guava version (HashFunction.hashInt only exists in
Guava 12+, while Hadoop pulls in an older Guava).
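
A quick way to confirm is to ask the JVM where it is loading the conflicting
classes from. Here is a minimal sketch to paste into spark-shell — the class
names come straight from the stack traces below; everything else about your
setup is an assumption:

    // Which jar did org.apache.hadoop.hive.ql.Driver come from?
    // (getCodeSource can be null for bootstrap classes, but not for jars.)
    val hiveDriver = Class.forName("org.apache.hadoop.hive.ql.Driver")
    println(hiveDriver.getProtectionDomain.getCodeSource.getLocation)

    // Same check for the Guava class behind the hashInt error.
    val hashFn = Class.forName("com.google.common.hash.HashFunction")
    println(hashFn.getProtectionDomain.getCodeSource.getLocation)

    // List every copy of Driver.class visible on the classpath;
    // more than one URL here means two Hive versions are being loaded.
    import scala.collection.JavaConverters._
    getClass.getClassLoader
      .getResources("org/apache/hadoop/hive/ql/Driver.class")
      .asScala.foreach(println)

If either location is a jar you did not expect (e.g. a Hive 0.13 jar, or a
stray Guava bundled by Hadoop), that is the duplicate to remove.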

On Thu, Oct 2, 2014 at 8:44 PM, Li HM <hmx...@gmail.com> wrote:

> I have rebuilt the package with -Phive and copied
> hive-site.xml to conf/ (I am using Hive 0.12).
>
> When I run ./bin/spark-sql, I get java.lang.NoSuchMethodError for every
> command.
>
> What am I missing here?
>
> Could somebody share what would be the right procedure to make it work?
>
> java.lang.NoSuchMethodError: org.apache.hadoop.hive.ql.Driver.getResults(Ljava/util/ArrayList;)Z
>         at org.apache.spark.sql.hive.HiveContext.runHive(HiveContext.scala:305)
>         at org.apache.spark.sql.hive.HiveContext.runSqlHive(HiveContext.scala:272)
>         at org.apache.spark.sql.hive.execution.NativeCommand.sideEffectResult$lzycompute(NativeCommand.scala:35)
>         at org.apache.spark.sql.hive.execution.NativeCommand.sideEffectResult(NativeCommand.scala:35)
>         at org.apache.spark.sql.hive.execution.NativeCommand.execute(NativeCommand.scala:38)
>         at org.apache.spark.sql.hive.HiveContext$QueryExecution.toRdd$lzycompute(HiveContext.scala:360)
>         at org.apache.spark.sql.hive.HiveContext$QueryExecution.toRdd(HiveContext.scala:360)
>         at org.apache.spark.sql.SchemaRDDLike$class.$init$(SchemaRDDLike.scala:58)
>         at org.apache.spark.sql.SchemaRDD.<init>(SchemaRDD.scala:103)
>         at org.apache.spark.sql.hive.HiveContext.sql(HiveContext.scala:98)
>         at org.apache.spark.sql.hive.thriftserver.SparkSQLDriver.run(SparkSQLDriver.scala:58)
>         at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processCmd(SparkSQLCLIDriver.scala:291)
>         at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:422)
>         at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver$.main(SparkSQLCLIDriver.scala:226)
>         at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.main(SparkSQLCLIDriver.scala)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:601)
>         at org.apache.spark.deploy.SparkSubmit$.launch(SparkSubmit.scala:328)
>         at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:75)
>         at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
>
> spark-sql> use mydb;
> OK
> (the same java.lang.NoSuchMethodError: org.apache.hadoop.hive.ql.Driver.getResults stack trace as above follows)
>
> spark-sql> select count(*) from test;
> java.lang.NoSuchMethodError: com.google.common.hash.HashFunction.hashInt(I)Lcom/google/common/hash/HashCode;
>         at org.apache.spark.util.collection.OpenHashSet.org$apache$spark$util$collection$OpenHashSet$$hashcode(OpenHashSet.scala:261)
>         at org.apache.spark.util.collection.OpenHashSet$mcI$sp.getPos$mcI$sp(OpenHashSet.scala:165)
>         at org.apache.spark.util.collection.OpenHashSet$mcI$sp.contains$mcI$sp(OpenHashSet.scala:102)
>         at org.apache.spark.util.SizeEstimator$$anonfun$visitArray$2.apply$mcVI$sp(SizeEstimator.scala:214)
>         at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
>         at org.apache.spark.util.SizeEstimator$.visitArray(SizeEstimator.scala:210)
>         at org.apache.spark.util.SizeEstimator$.visitSingleObject(SizeEstimator.scala:169)
>         at org.apache.spark.util.SizeEstimator$.org$apache$spark$util$SizeEstimator$$estimate(SizeEstimator.scala:161)
>         at org.apache.spark.util.SizeEstimator$.estimate(SizeEstimator.scala:155)
>         at org.apache.spark.util.collection.SizeTracker$class.takeSample(SizeTracker.scala:78)
>         at org.apache.spark.util.collection.SizeTracker$class.afterUpdate(SizeTracker.scala:70)
>         at org.apache.spark.util.collection.SizeTrackingVector.$plus$eq(SizeTrackingVector.scala:31)
>         at org.apache.spark.storage.MemoryStore.unrollSafely(MemoryStore.scala:236)
>         at org.apache.spark.storage.MemoryStore.putIterator(MemoryStore.scala:126)
>         at org.apache.spark.storage.MemoryStore.putIterator(MemoryStore.scala:104)
>         at org.apache.spark.storage.BlockManager.doPut(BlockManager.scala:743)
>         at org.apache.spark.storage.BlockManager.putIterator(BlockManager.scala:594)
>         at org.apache.spark.storage.BlockManager.putSingle(BlockManager.scala:865)
>         at org.apache.spark.broadcast.TorrentBroadcast.writeBlocks(TorrentBroadcast.scala:79)
>         at org.apache.spark.broadcast.TorrentBroadcast.<init>(TorrentBroadcast.scala:68)
>         at org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:36)
>         at org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:29)
>         at org.apache.spark.broadcast.BroadcastManager.newBroadcast(BroadcastManager.scala:62)
>         at org.apache.spark.SparkContext.broadcast(SparkContext.scala:809)
>         at org.apache.spark.sql.hive.HadoopTableReader.<init>(TableReader.scala:68)
>         at org.apache.spark.sql.hive.execution.HiveTableScan.<init>(HiveTableScan.scala:68)
>         at org.apache.spark.sql.hive.HiveStrategies$HiveTableScans$$anonfun$14.apply(HiveStrategies.scala:188)
>         at org.apache.spark.sql.hive.HiveStrategies$HiveTableScans$$anonfun$14.apply(HiveStrategies.scala:188)
>         at org.apache.spark.sql.SQLContext$SparkPlanner.pruneFilterProject(SQLContext.scala:364)
>         at org.apache.spark.sql.hive.HiveStrategies$HiveTableScans$.apply(HiveStrategies.scala:184)
>         at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:58)
>         at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:58)
>         at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
>         at org.apache.spark.sql.catalyst.planning.QueryPlanner.apply(QueryPlanner.scala:59)
>         at org.apache.spark.sql.catalyst.planning.QueryPlanner.planLater(QueryPlanner.scala:54)
>         at org.apache.spark.sql.execution.SparkStrategies$HashAggregation$.apply(SparkStrategies.scala:146)
>         at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:58)
>         at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:58)
>         at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
>         at org.apache.spark.sql.catalyst.planning.QueryPlanner.apply(QueryPlanner.scala:59)
>         at org.apache.spark.sql.SQLContext$QueryExecution.sparkPlan$lzycompute(SQLContext.scala:402)
>         at org.apache.spark.sql.SQLContext$QueryExecution.sparkPlan(SQLContext.scala:400)
>         at org.apache.spark.sql.SQLContext$QueryExecution.executedPlan$lzycompute(SQLContext.scala:406)
>         at org.apache.spark.sql.SQLContext$QueryExecution.executedPlan(SQLContext.scala:406)
>         at org.apache.spark.sql.hive.HiveContext$QueryExecution.stringResult(HiveContext.scala:406)
>         at org.apache.spark.sql.hive.thriftserver.SparkSQLDriver.run(SparkSQLDriver.scala:59)
>         at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processCmd(SparkSQLCLIDriver.scala:291)
>         at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:422)
>         at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver$.main(SparkSQLCLIDriver.scala:226)
>         at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.main(SparkSQLCLIDriver.scala)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:601)
>         at org.apache.spark.deploy.SparkSubmit$.launch(SparkSubmit.scala:328)
>         at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:75)
>         at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)