Hi
These are my notes on this topic.
- *YARN Cluster Mode:* the Spark driver runs inside an application master process which is managed by YARN on the cluster, and the client can go away after initiating the application. This is invoked with --master yarn and --deploy-mode cluster.
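For example, a cluster-mode submission might look like the following (the class name, jar path, and resource sizes are placeholders of my own, not from the original thread):

    spark-submit \
      --master yarn \
      --deploy-mode cluster \
      --class com.example.MyApp \
      --executor-memory 2G \
      --num-executors 4 \
      /path/to/my-app.jar

Once the YARN application master has started, the spark-submit client process can exit and the job keeps running on the cluster.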
If the cluster runs out of memory, it seems that the executor will be restarted by the cluster manager.
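If I remember the Spark-on-YARN docs correctly, the number of executor relaunches is bounded by spark.yarn.max.executor.failures; once that many executors have failed, the whole application is failed. A sketch (the value 8 is arbitrary, for illustration only):

    spark-submit \
      --master yarn \
      --deploy-mode cluster \
      --conf spark.yarn.max.executor.failures=8 \
      ...

By default the limit is derived from the executor count (roughly twice the number of executors).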
Jared, (韦煜)
Software developer
Interested in open source software, big data, Linux
From: Ascot Moss
Sent: Thursday, July 28, 2016 9:48:13 AM
To: user@spark
Subject:
Hi Ascot
When you run in cluster mode, the cluster manager causes your driver to execute on one of the workers in your cluster.
The advantage of this is that you can log on to a machine in your cluster, submit your application, and then log out; the application will continue to run.
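For instance, after logging out you can check on the job later from any node using the standard YARN CLI (the application id below is a made-up placeholder):

    yarn application -status application_1469700000000_0001
    yarn logs -applicationId application_1469700000000_0001

The first command reports the current state of the application; the second fetches its aggregated logs once the application has finished (assuming log aggregation is enabled on the cluster).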
Here