Thanks Marcelo. I changed the code using CountDownLatch, and it works as expected.
...
final CountDownLatch countDownLatch = new CountDownLatch(1);
SparkAppListener sparkAppListener = new SparkAppListener(countDownLatch);
SparkAppHandle appHandle = sparkLauncher.startApplication(sparkAppListener);
Thread sparkAppListenerThread = new Thread(sparkAppListener);
sparkAppListenerThread.start();
long timeout = 120;
countDownLatch.await(timeout, TimeUnit.SECONDS);
...

private static class SparkAppListener implements SparkAppHandle.Listener, Runnable {
    private static final Log log = LogFactory.getLog(SparkAppListener.class);
    private final CountDownLatch countDownLatch;

    public SparkAppListener(CountDownLatch countDownLatch) {
        this.countDownLatch = countDownLatch;
    }

    @Override
    public void stateChanged(SparkAppHandle handle) {
        String sparkAppId = handle.getAppId();
        State appState = handle.getState();
        if (sparkAppId != null) {
            log.info("Spark job with app id: " + sparkAppId + ",\t State changed to: "
                    + appState + " - " + SPARK_STATE_MSG.get(appState));
        } else {
            log.info("Spark job's state changed to: " + appState
                    + " - " + SPARK_STATE_MSG.get(appState));
        }
        if (appState != null && appState.isFinal()) {
            countDownLatch.countDown();
        }
    }

    @Override
    public void infoChanged(SparkAppHandle handle) {}

    @Override
    public void run() {}
}

On Mon, Nov 7, 2016 at 9:46 AM Marcelo Vanzin <van...@cloudera.com> wrote:

> On Sat, Nov 5, 2016 at 2:54 AM, Elkhan Dadashov <elkhan8...@gmail.com> wrote:
> > while (appHandle.getState() == null || !appHandle.getState().isFinal()) {
> >     if (appHandle.getState() != null) {
> >         log.info("while: Spark job state is : " + appHandle.getState());
> >         if (appHandle.getAppId() != null) {
> >             log.info("\t App id: " + appHandle.getAppId() + "\tState: "
> >                     + appHandle.getState());
> >         }
> >     }
> > }
>
> This is a ridiculously expensive busy loop, even more so if you
> comment out the log lines. Use listeners, or at least sleep a little
> bit every once in a while. You're probably starving other processes /
> threads of cpu.
>
> --
> Marcelo
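For anyone reading this without a Spark setup: the wait pattern in the code above boils down to plain java.util.concurrent. Below is a minimal, self-contained sketch of it — the class name LatchDemo, the runAndWait helper, and the 200 ms worker sleep are illustrative stand-ins (the worker thread plays the role of the SparkAppHandle.Listener callback that counts the latch down on a final state):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class LatchDemo {

    // Waits up to timeoutSeconds for a simulated "job" to finish.
    // Returns true if the latch opened in time, false on timeout.
    static boolean runAndWait(long timeoutSeconds) throws InterruptedException {
        final CountDownLatch latch = new CountDownLatch(1);

        // Stand-in for the listener callback thread: it opens the latch
        // the way stateChanged(...) does when the state is final.
        Thread worker = new Thread(() -> {
            try {
                Thread.sleep(200); // simulate the job running briefly
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            latch.countDown();
        });
        worker.start();

        // await(...) blocks without spinning and returns false
        // if the timeout elapses before countDown() is called.
        return latch.await(timeoutSeconds, TimeUnit.SECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runAndWait(120) ? "job finished" : "timed out");
    }
}
```

Note that await returns a boolean; the snippet in the thread above ignores it, so a timed-out wait is indistinguishable from a finished job unless you check the return value.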
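For completeness, the "sleep a little bit" fallback Marcelo suggests can also be sketched without Spark. Here a plain AtomicBoolean stands in for the appHandle.getState().isFinal() check, and PollDemo / pollUntilDone are hypothetical names, not Spark API:

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class PollDemo {

    // Polls the done flag, sleeping between checks instead of spinning.
    // Returns how many sleep cycles were needed.
    static int pollUntilDone(AtomicBoolean done, long sleepMillis) throws InterruptedException {
        int polls = 0;
        while (!done.get()) {
            polls++;
            Thread.sleep(sleepMillis); // yield the CPU between state checks
        }
        return polls;
    }

    public static void main(String[] args) throws InterruptedException {
        AtomicBoolean done = new AtomicBoolean(false);

        // Simulated job that finishes after ~300 ms.
        Thread job = new Thread(() -> {
            try {
                Thread.sleep(300);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            done.set(true);
        });
        job.start();

        pollUntilDone(done, 50);
        System.out.println("finished after polling");
    }
}
```

Even with the sleep, this burns a wakeup per interval and adds up to sleepMillis of latency after the job finishes, which is why the listener-plus-latch approach above is the better fit for SparkLauncher.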