You might be hitting SPARK-1994 <https://issues.apache.org/jira/browse/SPARK-1994>, which is fixed in 1.0.1.
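
If upgrading to 1.0.1 isn't an option right away, one possible stopgap is to skip the SQL ORDER BY and compute the top users with the core RDD API, which avoids the Sort operator that blows up below. A rough sketch, not tested against your data (only the `user` field name is taken from your query; everything else is assumed):

import org.apache.spark.SparkContext._  // pair-RDD operations (reduceByKey) in 1.0

// `tweets` is assumed to be the same RDD[Tweet] you registered as the "tweets" table.
val counts = tweets
  .map(t => (t.user, 1L))   // one record per tweet
  .reduceByKey(_ + _)       // (user, num_tweets)

// ORDER BY num_tweets DESC, user ASC: negate the count so the "smallest"
// element under the ordering comes first, then take the first 5.
val top5 = counts.takeOrdered(5)(Ordering.by { case (user, n) => (-n, user) })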


On Mon, Jul 14, 2014 at 11:16 PM, Nick Chammas <nicholas.cham...@gmail.com>
wrote:

> I’m running this query against RDD[Tweet], where Tweet is a simple case
> class with 4 fields.
>
> sqlContext.sql("""
>   SELECT user, COUNT(*) as num_tweets
>   FROM tweets
>   GROUP BY user
>   ORDER BY
>     num_tweets DESC,
>     user ASC
>   ;
> """).take(5)
>
> The first time I run this, it throws the following:
>
> 14/07/15 06:11:51 ERROR TaskSetManager: Task 12.0:0 failed 4 times; aborting job
> org.apache.spark.SparkException: Job aborted due to stage failure: Task 12.0:0 failed 4 times, most recent failure: Exception failure in TID 978 on host ip-10-144-204-254.ec2.internal: java.lang.ClassCastException: java.lang.Long cannot be cast to java.lang.String
>         scala.math.Ordering$String$.compare(Ordering.scala:329)
>         org.apache.spark.sql.catalyst.expressions.RowOrdering.compare(Row.scala:227)
>         org.apache.spark.sql.catalyst.expressions.RowOrdering.compare(Row.scala:210)
>         java.util.TimSort.mergeLo(TimSort.java:687)
>         java.util.TimSort.mergeAt(TimSort.java:483)
>         java.util.TimSort.mergeCollapse(TimSort.java:410)
>         java.util.TimSort.sort(TimSort.java:214)
>         java.util.TimSort.sort(TimSort.java:173)
>         java.util.Arrays.sort(Arrays.java:659)
>         scala.collection.SeqLike$class.sorted(SeqLike.scala:615)
>         scala.collection.mutable.ArrayOps$ofRef.sorted(ArrayOps.scala:108)
>         org.apache.spark.sql.execution.Sort$$anonfun$execute$3$$anonfun$apply$4.apply(basicOperators.scala:154)
>         org.apache.spark.sql.execution.Sort$$anonfun$execute$3$$anonfun$apply$4.apply(basicOperators.scala:154)
>         org.apache.spark.rdd.RDD$$anonfun$12.apply(RDD.scala:559)
>         org.apache.spark.rdd.RDD$$anonfun$12.apply(RDD.scala:559)
>         org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
>         org.apache.spark.sql.SchemaRDD.compute(SchemaRDD.scala:110)
>         org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
>         org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
>         org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:111)
>         org.apache.spark.scheduler.Task.run(Task.scala:51)
>         org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:187)
>         java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         java.lang.Thread.run(Thread.java:744)
> Driver stacktrace:
>     at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1033)
>     at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1017)
>     at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1015)
>     at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
>     at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
>     at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1015)
>     at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:633)
>     at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:633)
>     at scala.Option.foreach(Option.scala:236)
>     at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:633)
>     at org.apache.spark.scheduler.DAGSchedulerEventProcessActor$$anonfun$receive$2.applyOrElse(DAGScheduler.scala:1207)
>     at akka.actor.ActorCell.receiveMessage(ActorCell.scala:498)
>     at akka.actor.ActorCell.invoke(ActorCell.scala:456)
>     at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237)
>     at akka.dispatch.Mailbox.run(Mailbox.scala:219)
>     at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
>     at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>     at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>     at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>     at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
>
> If I immediately re-run the query, it works fine. I’ve been able to
> reproduce this a few times. If I run other, simpler SELECT queries first
> and then this one, it also gets around the problem. Strange…
>
> I’m on 1.0.0 on EC2.
>
> Nick
