Thanks. But after setting "spark.shuffle.blockTransferService" to "nio", the application fails with an Akka client disassociation error. The log is below.
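For reference, this is roughly how the property is being applied (a minimal sketch only; the app name is a placeholder, and the same setting can equally go in spark-defaults.conf or be passed with --conf on spark-submit):

    // Minimal sketch of the workaround from Aaron's reply below.
    // "MyApp" is a placeholder, not a name from this thread.
    import org.apache.spark.{SparkConf, SparkContext}

    val conf = new SparkConf()
      .setAppName("MyApp")
      // Fall back from the Netty transfer service to NIO (pre-1.2.1 workaround)
      .set("spark.shuffle.blockTransferService", "nio")

    val sc = new SparkContext(conf)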
15/01/27 13:38:11 ERROR TaskSchedulerImpl: Lost executor 3 on wynchcs218.wyn.cnw.co.nz: remote Akka client disassociated
15/01/27 13:38:11 INFO TaskSetManager: Re-queueing tasks for 3 from TaskSet 0.0
15/01/27 13:38:11 WARN TaskSetManager: Lost task 0.3 in stage 0.0 (TID 7, wynchcs218.wyn.cnw.co.nz): ExecutorLostFailure (executor lost)
15/01/27 13:38:11 ERROR TaskSetManager: Task 0 in stage 0.0 failed 4 times; aborting job
15/01/27 13:38:11 WARN TaskSetManager: Lost task 1.3 in stage 0.0 (TID 6, wynchcs218.wyn.cnw.co.nz): ExecutorLostFailure (executor lost)
15/01/27 13:38:11 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
15/01/27 13:38:11 INFO TaskSchedulerImpl: Cancelling stage 0
15/01/27 13:38:11 INFO DAGScheduler: Failed to run count at RowMatrix.scala:71
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 7, wynchcs218.wyn.cnw.co.nz): ExecutorLostFailure (executor lost)
Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1185)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1174)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1173)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1173)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:688)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:688)
    at scala.Option.foreach(Option.scala:236)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:688)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessActor$$anonfun$receive$2.applyOrElse(DAGScheduler.scala:1391)
    at akka.actor.ActorCell.receiveMessage(ActorCell.scala:498)
    at akka.actor.ActorCell.invoke(ActorCell.scala:456)
    at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237)
    at akka.dispatch.Mailbox.run(Mailbox.scala:219)
    at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
    at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
    at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
    at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
    at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
15/01/27 13:38:11 INFO DAGScheduler: Executor lost: 3 (epoch 3)
15/01/27 13:38:11 INFO BlockManagerMasterActor: Trying to remove executor 3 from BlockManagerMaster.
15/01/27 13:38:11 INFO BlockManagerMaster: Removed 3 successfully in removeExecutor

On Mon, Jan 26, 2015 at 6:34 PM, Aaron Davidson <ilike...@gmail.com> wrote:

> This was a regression caused by the Netty Block Transfer Service. The fix
> for this just barely missed the 1.2 release, and you can see the
> associated JIRA here: https://issues.apache.org/jira/browse/SPARK-4837
>
> Current master has the fix, and the Spark 1.2.1 release will have it
> included. If you don't want to rebuild from master or wait, then you can
> turn it off by setting "spark.shuffle.blockTransferService" to "nio".
> On Sun, Jan 25, 2015 at 6:28 PM, Shailesh Birari <sbirar...@gmail.com>
> wrote:
>
>> Can anyone please let me know?
>> I don't want to open all ports on the network, so I am interested in
>> the property by which this new port can be configured.
>>
>> Shailesh
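On the original port question, a minimal sketch of the fixed-port properties available in Spark 1.x, so only known ports need opening on the network (the port numbers are placeholder assumptions; note that in 1.2.0 the Netty transfer service did not honor spark.blockManager.port, which is exactly what SPARK-4837 fixes):

    // Minimal sketch, assuming Spark 1.x property names; the port numbers
    // are placeholders and must be reachable between all cluster nodes.
    import org.apache.spark.SparkConf

    val conf = new SparkConf()
      .set("spark.driver.port",       "7078") // driver's Akka endpoint
      .set("spark.blockManager.port", "7079") // block/shuffle transfers
      .set("spark.ui.port",           "4040") // driver web UI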