YARN provides the concept of node labels. You should explore the
"spark.yarn.executor.nodeLabelExpression" property.
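A minimal sketch of how this could look, assuming the YARN admin has already created a node label and assigned it to the target machines (the label name "sparkpool", the class name, and the jar are placeholders, not anything from your cluster):

```shell
# One-time setup by the YARN admin (names are illustrative):
#   yarn rmadmin -addToClusterNodeLabels "sparkpool"
#   yarn rmadmin -replaceLabelsOnNode "node01=sparkpool node02=sparkpool"
# The queue used must also be granted access to the label in the
# capacity scheduler configuration.

spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --conf spark.yarn.executor.nodeLabelExpression=sparkpool \
  --conf spark.yarn.am.nodeLabelExpression=sparkpool \
  --class com.example.MyApp \
  myapp.jar
```

With this, YARN should only schedule the application's containers on nodes carrying the "sparkpool" label, regardless of where the data lives.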


Cheers,
Asif Abbasi

On Tue, 7 Feb 2017 at 10:21, Alvaro Brandon <alvarobran...@gmail.com> wrote:

> Hello all:
>
> I have the following scenario.
> - I have a cluster of 50 machines with Hadoop and Spark installed on them.
> - I want to launch one Spark application through spark-submit. However, I
> want this application to run on only a subset of these machines (e.g. 10
> machines), disregarding data locality.
>
> Is this possible? Is there any option in the standalone scheduler, YARN,
> or Mesos that allows such a thing?
>
>
>