BTW, I am using a 6-node cluster of m4.2xlarge machines on Amazon. I have
tried this with both yarn-cluster mode and Spark's standalone cluster mode.

On Tue, May 24, 2016 at 12:10 PM Mathieu Longtin <math...@closetwork.org>
wrote:

> I have been seeing the same behavior in standalone with a master.
>
>
> On Tue, May 24, 2016 at 3:08 PM Pradeep Nayak <pradeep1...@gmail.com>
> wrote:
>
>>
>>
>> I have posted the same question on Stack Overflow:
>> http://stackoverflow.com/questions/37421852/spark-submit-continues-to-hang-after-job-completion
>>
>> I am trying to test Spark 1.6 with HDFS on AWS, using the word count
>> Python example available in the examples folder. I submit the job with
>> spark-submit; the job completes successfully and prints the results on
>> the console as well. The web UI also says it has completed. However,
>> spark-submit never terminates. I have verified that the context is
>> stopped in the word count example code as well.
>>
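>> For reference, the body of the example is roughly the following (a
>> minimal sketch from memory, not the exact file shipped with Spark; the
>> input path and app name are placeholders):
>>
>> from pyspark import SparkContext
>>
>> sc = SparkContext(appName="PythonWordCount")  # hypothetical app name
>> lines = sc.textFile("hdfs:///tmp/input.txt")  # placeholder input path
>> counts = (lines.flatMap(lambda line: line.split(" "))
>>                .map(lambda word: (word, 1))
>>                .reduceByKey(lambda a, b: a + b))
>> for word, count in counts.collect():
>>     print("%s: %i" % (word, count))
>> sc.stop()  # the context is stopped explicitly, as mentioned above
>>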
>> What could be wrong?
>>
>> This is what I see on the console.
>>
>>
>> 2016-05-24 14:58:04,749 INFO  [Thread-3] handler.ContextHandler (ContextHandler.java:doStop(843)) - stopped o.s.j.s.ServletContextHandler{/stages/stage,null}
>> 2016-05-24 14:58:04,749 INFO  [Thread-3] handler.ContextHandler (ContextHandler.java:doStop(843)) - stopped o.s.j.s.ServletContextHandler{/stages/json,null}
>> 2016-05-24 14:58:04,749 INFO  [Thread-3] handler.ContextHandler (ContextHandler.java:doStop(843)) - stopped o.s.j.s.ServletContextHandler{/stages,null}
>> 2016-05-24 14:58:04,749 INFO  [Thread-3] handler.ContextHandler (ContextHandler.java:doStop(843)) - stopped o.s.j.s.ServletContextHandler{/jobs/job/json,null}
>> 2016-05-24 14:58:04,750 INFO  [Thread-3] handler.ContextHandler (ContextHandler.java:doStop(843)) - stopped o.s.j.s.ServletContextHandler{/jobs/job,null}
>> 2016-05-24 14:58:04,750 INFO  [Thread-3] handler.ContextHandler (ContextHandler.java:doStop(843)) - stopped o.s.j.s.ServletContextHandler{/jobs/json,null}
>> 2016-05-24 14:58:04,750 INFO  [Thread-3] handler.ContextHandler (ContextHandler.java:doStop(843)) - stopped o.s.j.s.ServletContextHandler{/jobs,null}
>> 2016-05-24 14:58:04,802 INFO  [Thread-3] ui.SparkUI (Logging.scala:logInfo(58)) - Stopped Spark web UI at http://172.30.2.239:4040
>> 2016-05-24 14:58:04,805 INFO  [Thread-3] cluster.SparkDeploySchedulerBackend (Logging.scala:logInfo(58)) - Shutting down all executors
>> 2016-05-24 14:58:04,805 INFO  [dispatcher-event-loop-2] cluster.SparkDeploySchedulerBackend (Logging.scala:logInfo(58)) - Asking each executor to shut down
>> 2016-05-24 14:58:04,814 INFO  [dispatcher-event-loop-5] spark.MapOutputTrackerMasterEndpoint (Logging.scala:logInfo(58)) - MapOutputTrackerMasterEndpoint stopped!
>> 2016-05-24 14:58:04,818 INFO  [Thread-3] storage.MemoryStore (Logging.scala:logInfo(58)) - MemoryStore cleared
>> 2016-05-24 14:58:04,818 INFO  [Thread-3] storage.BlockManager (Logging.scala:logInfo(58)) - BlockManager stopped
>> 2016-05-24 14:58:04,820 INFO  [Thread-3] storage.BlockManagerMaster (Logging.scala:logInfo(58)) - BlockManagerMaster stopped
>> 2016-05-24 14:58:04,821 INFO  [dispatcher-event-loop-3] scheduler.OutputCommitCoordinator$OutputCommitCoordinatorEndpoint (Logging.scala:logInfo(58)) - OutputCommitCoordinator stopped!
>> 2016-05-24 14:58:04,824 INFO  [Thread-3] spark.SparkContext (Logging.scala:logInfo(58)) - Successfully stopped SparkContext
>> 2016-05-24 14:58:04,827 INFO  [sparkDriverActorSystem-akka.actor.default-dispatcher-2] remote.RemoteActorRefProvider$RemotingTerminator (Slf4jLogger.scala:apply$mcV$sp(74)) - Shutting down remote daemon.
>> 2016-05-24 14:58:04,828 INFO  [sparkDriverActorSystem-akka.actor.default-dispatcher-2] remote.RemoteActorRefProvider$RemotingTerminator (Slf4jLogger.scala:apply$mcV$sp(74)) - Remote daemon shut down; proceeding with flushing remote transports.
>> 2016-05-24 14:58:04,843 INFO  [sparkDriverActorSystem-akka.actor.default-dispatcher-2] remote.RemoteActorRefProvider$RemotingTerminator (Slf4jLogger.scala:apply$mcV$sp(74)) - Remoting shut down.
>>
>>
>> I have to press Ctrl-C to terminate the spark-submit process. This is a
>> really odd problem, and I have no idea how to fix it. Please let me know
>> if there are any logs I should be looking at, or whether I should be
>> doing things differently here.
>>
>>
>> --
> Mathieu Longtin
> 1-514-803-8977
>
