Hello,

I am facing the error "Job hasn't been submitted after 61s. Aborting it." when I run multiple Hive queries concurrently.

Details: I am running Hive on Spark with Spark dynamic allocation and the YARN external shuffle service enabled.
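For reference, the relevant settings look roughly like the following (a sketch from memory; the property names are the standard Spark/YARN ones, and the executor limits shown here are just placeholder values):

    spark-defaults.conf:
        spark.dynamicAllocation.enabled=true
        spark.shuffle.service.enabled=true
        spark.dynamicAllocation.minExecutors=1
        spark.dynamicAllocation.maxExecutors=20

    yarn-site.xml (NodeManager aux-service for the external shuffle service):
        yarn.nodemanager.aux-services=mapreduce_shuffle,spark_shuffle
        yarn.nodemanager.aux-services.spark_shuffle.class=org.apache.spark.network.yarn.YarnShuffleService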
When one query is using all of the resources in the cluster and a new query is launched, the new query fails with this error in the Hive log:

2017-02-16 06:12:59,166 INFO [main]: status.SparkJobMonitor (RemoteSparkJobMonitor.java:startMonitor(67)) - Job hasn't been submitted after 61s. Aborting it.
2017-02-16 06:12:59,166 ERROR [main]: status.SparkJobMonitor (SessionState.java:printError(960)) - Status: SENT
2017-02-16 06:12:59,167 INFO [main]: log.PerfLogger (PerfLogger.java:PerfLogEnd(148)) - </PERFLOG method=SparkRunJob start=1487254318158 end=1487254379167 duration=61009 from=org.apache.hadoop.hive.ql.exec.spark.status.SparkJobMonitor>
2017-02-16 06:12:59,183 ERROR [main]: ql.Driver (SessionState.java:printError(960)) - FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.spark.SparkTask
2017-02-16 06:12:59,184 INFO [main]: log.PerfLogger (PerfLogger.java:PerfLogEnd(148)) - </PERFLOG method=Driver.execute start=1487254317999 end=1487254379184 duration=61185 from=org.apache.hadoop.hive.ql.Driver>
2017-02-16 06:12:59,184 INFO [main]: log.PerfLogger (PerfLogger.java:PerfLogBegin(121)) - <PERFLOG method=releaseLocks from=org.apache.hadoop.hive.ql.Driver>
2017-02-16 06:12:59,184 INFO [main]: log.PerfLogger (PerfLogger.java:PerfLogEnd(148)) - </PERFLOG method=releaseLocks start=1487254379184 end=1487254379184 duration=0 from=org.apache.hadoop.hive.ql.Driver>
2017-02-16 06:12:59,201 INFO [main]: log.PerfLogger (PerfLogger.java:PerfLogBegin(121)) - <PERFLOG method=releaseLocks from=org.apache.hadoop.hive.ql.Driver>
2017-02-16 06:12:59,202 INFO [main]: log.PerfLogger (PerfLogger.java:PerfLogEnd(148)) - </PERFLOG method=releaseLocks start=1487254379201 end=1487254379202 duration=1 from=org.apache.hadoop.hive.ql.Driver>

Is there any parameter I can configure so that the query waits until it gets the required resources rather than failing?

Thanks,
Naresh
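P.S. To make the question concrete, what I am hoping exists is a setting along these lines (just a guess on my part; hive.spark.job.monitor.timeout is the closest property I could find, and its apparent 60s default would explain the 61s in the log, but I don't know whether raising it is the right fix or whether there is a way to make the query wait for resources instead of timing out):

    set hive.spark.job.monitor.timeout=180s;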