Hi, I have a situation: say I have a YARN cluster with 2GB of RAM, and I
submit 2 Spark jobs with "--driver-memory 1GB --num-executors 2
--executor-memory 1GB". I see the 2 Spark AMs running, but they are unable
to allocate their executor containers and start the actual job, so they
just hang indefinitely (I suspect the two 1GB AMs alone consume the whole
cluster, leaving nothing for executors). Is it possible to set some sort of
timeout for acquiring executors, and kill the application otherwise?
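For concreteness, each submission looks roughly like this (a sketch; the
class and jar names are placeholders, and "1g" is the same request as
"1GB"):

  # Each job asks for a 1GB AM plus 2 x 1GB executors (~3GB+ with
  # memory overhead), so the two 1GB AMs alone already fill the 2GB
  # cluster and the executor containers can never be granted.
  spark-submit \
    --master yarn-cluster \
    --driver-memory 1g \
    --num-executors 2 \
    --executor-memory 1g \
    --class com.example.MyApp \
    my-app.jar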
Thanks,
Peter Rudenko