By default the driver will start on the node where you ran
sbin/start-master.sh, which is also where you launch your app via spark-submit.

The slaves each have to have an entry in the slaves file.
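
For example, a minimal conf/slaves file (hostnames here are placeholders)
just lists one worker host per line:

    worker1.example.com
    worker2.example.com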

What is the issue here?




Dr Mich Talebzadeh

LinkedIn: https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw

http://talebzadehmich.wordpress.com


On 6 June 2016 at 18:59, Bryan Cutler <cutl...@gmail.com> wrote:

> I'm not an expert on YARN so anyone please correct me if I'm wrong, but I
> believe the Resource Manager will schedule the Application Master to run on
> any node that has a Node Manager, depending on available resources. So you
> would normally query the RM via the REST API to determine where it landed.
> You can restrict which nodes it gets scheduled on using the property
> spark.yarn.am.nodeLabelExpression.
> See here for details:
> http://spark.apache.org/docs/latest/running-on-yarn.html
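>
> As a rough sketch (the host, port, application id, and the "gpu" label are
> all placeholders), you could find the AM's node via the RM REST API and pin
> the AM with a node label:
>
>   curl http://resourcemanager.example.com:8088/ws/v1/cluster/apps/application_1465230000000_0001
>   # the amHostHttpAddress field in the response names the AM's node
>
>   spark-submit --master yarn --deploy-mode cluster \
>     --conf spark.yarn.am.nodeLabelExpression="gpu" ...
>
> Note that per the Spark docs, spark.yarn.am.nodeLabelExpression applies to
> the client-mode AM; in cluster mode the analogous setting is
> spark.yarn.driver.nodeLabelExpression, and the labels themselves have to be
> defined in YARN (2.6 or later) first.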
>
> On Mon, Jun 6, 2016 at 9:04 AM, Saiph Kappa <saiph.ka...@gmail.com> wrote:
>
>> How can I specify the node where the application master should run in the
>> yarn conf? I haven't found any useful information regarding that.
>>
>> Thanks.
>>
>> On Mon, Jun 6, 2016 at 4:52 PM, Bryan Cutler <cutl...@gmail.com> wrote:
>>
>>> In that mode, the driver runs inside the application master, on whichever
>>> node YARN schedules it according to your yarn conf.
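>>>
>>> For reference, a bare-bones yarn-cluster submission (the class and jar
>>> names are placeholders) looks like:
>>>
>>>   spark-submit --master yarn-cluster --class com.example.MyApp myapp.jar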
>>> On Jun 5, 2016 4:54 PM, "Saiph Kappa" <saiph.ka...@gmail.com> wrote:
>>>
>>>> Hi,
>>>>
>>>> In yarn-cluster mode, is there any way to specify on which node I want
>>>> the driver to run?
>>>>
>>>> Thanks.
>>>>
>>>
>>
>
