No, it's a Scala application. Unfortunately, after I ran into problems
with Mesos coarse-grained mode and this issue, I downgraded to Spark 0.9.1
and purged the logs. But as far as I can remember, when I tried to run my
app in Spark standalone mode, the same ClassNotFoundException was
reported.
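
In case it helps to reproduce it, the failing pattern boils down to
something like this (a minimal sketch; the jar and output paths are
placeholders, not my actual setup):

    import org.apache.spark.{SparkConf, SparkContext}

    object SaveAsTextFileRepro {
      def main(args: Array[String]) {
        // saveAsTextFile compiles to an anonymous-function class
        // (RDD$$anonfun$saveAsTextFile$1) that the executors must be able
        // to load, so the application jar has to reach them, e.g. via setJars.
        val conf = new SparkConf()
          .setAppName("SaveAsTextFileRepro")
          .setJars(Seq("target/scala-2.10/myapp.jar")) // placeholder path
        val sc = new SparkContext(conf)
        sc.parallelize(1 to 100).saveAsTextFile("/tmp/save-test-out") // placeholder path
        sc.stop()
      }
    }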

M.


2014-06-04 18:23 GMT+02:00 Mark Hamstra <m...@clearstorydata.com>:

> Actually, what the stack trace is showing is the result of an exception
> being thrown by the DAGScheduler's event processing actor.  What happens is
> that the Supervisor tries to shut down Spark when an exception is thrown by
> that actor.  As part of the shutdown procedure, the DAGScheduler tries to
> cancel any jobs running on the cluster, but the scheduler backend for Mesos
> doesn't yet implement killTask, so the shutdown procedure fails with an
> UnsupportedOperationException.
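>
> For reference, the default in question looks roughly like this
> (paraphrased from the SchedulerBackend.scala:32 frame in the trace, so
> the exact source may differ slightly):
>
>     private[spark] trait SchedulerBackend {
>       def start(): Unit
>       def stop(): Unit
>       def reviveOffers(): Unit
>       def defaultParallelism(): Int
>
>       // Backends that cannot kill individual tasks inherit this default,
>       // which is what throws during the shutdown-time cancelTasks call.
>       def killTask(taskId: Long, executorId: String, interruptThread: Boolean): Unit =
>         throw new UnsupportedOperationException
>     }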
>
> In other words, the stack trace is all about failure to cleanly shut down
> in response to some prior failure.  What that prior, root-cause failure
> actually was is not clear to me from the stack trace or bug report, but at
> least the failure to shut down should be fixed in Spark 1.0.1 after PR 686
> <https://github.com/apache/spark/pull/686> is merged.
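>
> Implementing it for the Mesos backend would presumably look something
> like this (a hypothetical sketch against the Mesos driver API, assuming
> the backend holds its MesosSchedulerDriver in a field named driver; not
> necessarily what the PR actually does):
>
>     import org.apache.mesos.Protos.TaskID
>
>     // Forward the kill request to the Mesos driver rather than inheriting
>     // the UnsupportedOperationException default from SchedulerBackend.
>     override def killTask(taskId: Long, executorId: String, interruptThread: Boolean) {
>       driver.killTask(TaskID.newBuilder().setValue(taskId.toString).build())
>     }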
>
> Was this an application created with the Python API?  There have been some
> similar bug reports associated with Python applications, but I'm not sure
> at this point that the problem actually resides in PySpark.
>
>
> On Wed, Jun 4, 2014 at 8:38 AM, Daniel Darabos <
> daniel.dara...@lynxanalytics.com> wrote:
>
>>
>> On Tue, Jun 3, 2014 at 8:46 PM, Marek Wiewiorka <
>> marek.wiewio...@gmail.com> wrote:
>>
>>> Hi All,
>>> I've been experiencing a very strange error after upgrading from Spark
>>> 0.9 to 1.0 - it seems that the saveAsTextFile function is throwing a
>>> java.lang.UnsupportedOperationException that I have never seen before.
>>>
>>
>> In the stack trace you quoted, saveAsTextFile is not called. Is it really
>> throwing an exception? Do you have the stack trace from the executor
>> process? I think the exception originates from there, and the scheduler is
>> just reporting it here.
>>
>>
>>> Any hints appreciated.
>>>
>>> scheduler.TaskSetManager: Loss was due to
>>> java.lang.ClassNotFoundException:
>>> org.apache.spark.rdd.RDD$$anonfun$saveAsTextFile$1 [duplicate 45]
>>> 14/06/03 16:46:23 ERROR actor.OneForOneStrategy:
>>> java.lang.UnsupportedOperationException
>>>         at
>>> org.apache.spark.scheduler.SchedulerBackend$class.killTask(SchedulerBackend.scala:32)
>>>         at
>>> org.apache.spark.scheduler.cluster.mesos.MesosSchedulerBackend.killTask(MesosSchedulerBackend.scala:41)
>>>         at
>>> org.apache.spark.scheduler.TaskSchedulerImpl$$anonfun$cancelTasks$3$$anonfun$apply$1.apply$mcVJ$sp(TaskSchedulerImpl.scala:185)
>>>         at
>>> org.apache.spark.scheduler.TaskSchedulerImpl$$anonfun$cancelTasks$3$$anonfun$apply$1.apply(TaskSchedulerImpl.scala:183)
>>>         at
>>> org.apache.spark.scheduler.TaskSchedulerImpl$$anonfun$cancelTasks$3$$anonfun$apply$1.apply(TaskSchedulerImpl.scala:183)
>>>         at scala.collection.mutable.HashSet.foreach(HashSet.scala:79)
>>>         at
>>> org.apache.spark.scheduler.TaskSchedulerImpl$$anonfun$cancelTasks$3.apply(TaskSchedulerImpl.scala:183)
>>>         at
>>> org.apache.spark.scheduler.TaskSchedulerImpl$$anonfun$cancelTasks$3.apply(TaskSchedulerImpl.scala:176)
>>>         at scala.Option.foreach(Option.scala:236)
>>>         at
>>> org.apache.spark.scheduler.TaskSchedulerImpl.cancelTasks(TaskSchedulerImpl.scala:176)
>>>         at
>>> org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages$1.apply$mcVI$sp(DAGScheduler.scala:1058)
>>>         at
>>> org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages$1.apply(DAGScheduler.scala:1045)
>>>         at
>>> org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages$1.apply(DAGScheduler.scala:1045)
>>>         at scala.collection.mutable.HashSet.foreach(HashSet.scala:79)
>>>         at
>>> org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1045)
>>>         at
>>> org.apache.spark.scheduler.DAGScheduler.handleJobCancellation(DAGScheduler.scala:998)
>>>         at
>>> org.apache.spark.scheduler.DAGScheduler$$anonfun$doCancelAllJobs$1.apply$mcVI$sp(DAGScheduler.scala:499)
>>>         at
>>> org.apache.spark.scheduler.DAGScheduler$$anonfun$doCancelAllJobs$1.apply(DAGScheduler.scala:499)
>>>         at
>>> org.apache.spark.scheduler.DAGScheduler$$anonfun$doCancelAllJobs$1.apply(DAGScheduler.scala:499)
>>>         at scala.collection.mutable.HashSet.foreach(HashSet.scala:79)
>>>         at
>>> org.apache.spark.scheduler.DAGScheduler.doCancelAllJobs(DAGScheduler.scala:499)
>>>         at
>>> org.apache.spark.scheduler.DAGSchedulerActorSupervisor$$anonfun$2.applyOrElse(DAGScheduler.scala:1151)
>>>         at
>>> org.apache.spark.scheduler.DAGSchedulerActorSupervisor$$anonfun$2.applyOrElse(DAGScheduler.scala:1147)
>>>         at
>>> akka.actor.SupervisorStrategy.handleFailure(FaultHandling.scala:295)
>>>         at
>>> akka.actor.dungeon.FaultHandling$class.handleFailure(FaultHandling.scala:253)
>>>         at akka.actor.ActorCell.handleFailure(ActorCell.scala:338)
>>>         at akka.actor.ActorCell.invokeAll$1(ActorCell.scala:423)
>>>         at akka.actor.ActorCell.systemInvoke(ActorCell.scala:447)
>>>         at
>>> akka.dispatch.Mailbox.processAllSystemMessages(Mailbox.scala:262)
>>>         at akka.dispatch.Mailbox.run(Mailbox.scala:218)
>>>         at
>>> akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
>>>         at
>>> scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>>>         at
>>> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>>>         at
>>> scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>>>         at
>>> scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
>>>
>>> Thanks,
>>> Marek
>>>
>>
>>
>
