I was wondering if there is a configuration parameter analogous to
"spark.yarn.executor.nodeLabelExpression" that restricts which nodes
the application master runs on.
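For context, this is how we currently restrict executor placement. The label name "on-demand" and the jar name below are placeholder examples; the label would have to match a node label actually defined in the YARN cluster:

```shell
# Hypothetical sketch: pin executors to nodes carrying the YARN
# node label "on-demand" (label and jar names are placeholders).
spark-submit \
  --master yarn \
  --conf spark.yarn.executor.nodeLabelExpression="on-demand" \
  my-app.jar
```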

One of our clusters runs on AWS, with a portion of the nodes being spot
instances. We would like to prevent the application master from running on
spot nodes. For whatever reason, the application master is not able to
recover when the node it was running on suddenly disappears, which is what
happens with spot nodes.

Any guidance on this topic is appreciated.

Alex Rovner
Director, Data Engineering
o: 646.759.0052

<http://www.magnetic.com/>
