Hi, I'm running Spark 0.9.1 in standalone mode. I submitted a job and the driver ran successfully to the end; see the log messages below:
2014-05-12 10:34:14,358 - [INFO] (Logging.scala:50) - Finished TID 254 in 19 ms on spark-host007 (progress: 62/63)
2014-05-12 10:34:14,359 - [INFO] (Logging.scala:50) - Finished TID 255 in 18 ms on spark-host002 (progress: 63/63)
2014-05-12 10:34:14,359 - [INFO] (Logging.scala:50) - Completed ResultTask(7, 63)
2014-05-12 10:34:14,359 - [INFO] (Logging.scala:50) - Removed TaskSet 7.0, whose tasks have all completed, from pool
2014-05-12 10:34:14,360 - [INFO] (Logging.scala:50) - Stage 7 (take at ComputeTask.java:110) finished in 0.165 s
2014-05-12 10:34:14,360 - [INFO] (Logging.scala:50) - Job finished: take at ComputeTask.java:110, took 0.189718 s
2014-05-12 10:34:14,408 - [INFO] (ContextHandler.java:795) - stopped o.e.j.s.h.ContextHandler{/,null}
2014-05-12 10:34:14,409 - [INFO] (ContextHandler.java:795) - stopped o.e.j.s.h.ContextHandler{/static,null}
2014-05-12 10:34:14,409 - [INFO] (ContextHandler.java:795) - stopped o.e.j.s.h.ContextHandler{/metrics/json,null}
2014-05-12 10:34:14,409 - [INFO] (ContextHandler.java:795) - stopped o.e.j.s.h.ContextHandler{/executors,null}
2014-05-12 10:34:14,410 - [INFO] (ContextHandler.java:795) - stopped o.e.j.s.h.ContextHandler{/environment,null}
2014-05-12 10:34:14,410 - [INFO] (ContextHandler.java:795) - stopped o.e.j.s.h.ContextHandler{/stages,null}
2014-05-12 10:34:14,410 - [INFO] (ContextHandler.java:795) - stopped o.e.j.s.h.ContextHandler{/stages/pool,null}
2014-05-12 10:34:14,410 - [INFO] (ContextHandler.java:795) - stopped o.e.j.s.h.ContextHandler{/stages/stage,null}
2014-05-12 10:34:14,411 - [INFO] (ContextHandler.java:795) - stopped o.e.j.s.h.ContextHandler{/storage,null}
2014-05-12 10:34:14,411 - [INFO] (ContextHandler.java:795) - stopped o.e.j.s.h.ContextHandler{/storage/rdd,null}
2014-05-12 10:34:14,466 - [INFO] (Logging.scala:50) - Shutting down all executors
2014-05-12 10:34:14,468 - [INFO] (Logging.scala:50) - Asking each executor to shut down
2014-05-12 10:34:15,527 - [INFO] (Logging.scala:50) - MapOutputTrackerActor stopped!
2014-05-12 10:34:15,580 - [INFO] (Logging.scala:50) - Selector thread was interrupted!
2014-05-12 10:34:15,581 - [INFO] (Logging.scala:50) - ConnectionManager stopped
2014-05-12 10:34:15,582 - [INFO] (Logging.scala:50) - MemoryStore cleared
2014-05-12 10:34:15,583 - [INFO] (Logging.scala:50) - BlockManager stopped
2014-05-12 10:34:15,584 - [INFO] (Logging.scala:50) - Stopping BlockManagerMaster
2014-05-12 10:34:15,584 - [INFO] (Logging.scala:50) - BlockManagerMaster stopped
2014-05-12 10:34:15,586 - [INFO] (Logging.scala:50) - Successfully stopped SparkContext
2014-05-12 10:34:15,586 - [INFO] (ComputeTask.java:174) - Compute Task success!
2014-05-12 10:34:15,590 - [INFO] (Slf4jLogger.scala:74) - Shutting down remote daemon.
2014-05-12 10:34:15,592 - [INFO] (Slf4jLogger.scala:74) - Remote daemon shut down; proceeding with flushing remote transports.
2014-05-12 10:34:15,631 - [INFO] (Slf4jLogger.scala:74) - Remoting shut down
2014-05-12 10:34:15,632 - [INFO] (Slf4jLogger.scala:74) - Remoting shut down.
2014-05-12 10:34:15,911 - [INFO] (ComputeTask.java:209) - process success!

But in the Web UI, the application shows as FAILED. Has anyone run into this before? What is the reason behind this inconsistent state? The row in the master's Web UI looks like this:

ID                       Name          Cores  Memory per Node  Submitted Time       User  State   Duration
app-20140512103331-0020  Compute-Task  13     5.0 GB           2014/05/12 10:33:31  root  FAILED  19 s

Thanks,
Cheney
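P.S. For reference, the driver follows this general pattern: a final take() action (the "take at ComputeTask.java:110" stage in the log) followed by sc.stop(). The sketch below is simplified; the master URL, input data, and resource settings are placeholders that just mirror the Web UI row, not the real ComputeTask code.

import java.util.Arrays;
import java.util.List;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class ComputeTask {
    public static void main(String[] args) {
        // Master URL and resource settings are placeholders; they mirror the
        // "Compute-Task / 13 cores / 5.0 GB" row shown in the Web UI above.
        SparkConf conf = new SparkConf()
                .setMaster("spark://spark-master:7077")
                .setAppName("Compute-Task")
                .set("spark.cores.max", "13")
                .set("spark.executor.memory", "5g");
        JavaSparkContext sc = new JavaSparkContext(conf);

        // Placeholder input; the real job builds its RDDs from its own data.
        JavaRDD<Integer> data = sc.parallelize(Arrays.asList(1, 2, 3, 4, 5), 64);

        // Last action of the job -- this corresponds to the
        // "take at ComputeTask.java:110" stage that the log reports as finished.
        List<Integer> head = data.take(3);
        System.out.println("Compute Task success! head = " + head);

        // Stop the context cleanly; the log confirms "Successfully stopped
        // SparkContext", yet the master still lists the application as FAILED.
        sc.stop();
    }
}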