Hi Corey,

As of this PR (https://github.com/apache/spark/pull/5297/files), this behavior
can be controlled with spark.yarn.submit.waitAppCompletion.
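For example, assuming you launch with spark-submit, setting the property to
false should let the client JVM exit as soon as the application has been
submitted (the class name and jar below are just placeholders for your app):

    # Placeholder app class and jar; substitute your own.
    spark-submit \
      --master yarn-cluster \
      --conf spark.yarn.submit.waitAppCompletion=false \
      --class com.example.YourApp \
      your-app.jar

You could also put it in conf/spark-defaults.conf if you want it to apply to
every submission from that client.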

-Sandy

On Thu, May 28, 2015 at 11:48 AM, Corey Nolet <cjno...@gmail.com> wrote:

> I am submitting jobs to my YARN cluster in yarn-cluster mode, and I'm
> noticing that the JVM that fires up to allocate the resources, etc. does not
> go away after the application master and executors have been allocated.
> Instead, it just sits there printing one-second status updates to the
> console. If I kill it, my job still runs (as expected).
>
> Is there an intended way to stop this from happening and have the
> local JVM exit once it has finished allocating the resources and deploying
> the application master?
>
