Yes, of course, the capacity scheduler also needs to be configured.
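
For reference, a minimal sketch of that setup (the label name "spark", the hostname, and the queue name "default" below are placeholders, not values from this thread; see the YARN node-labels documentation for the exact syntax on your Hadoop version):

```shell
# Define a cluster-level node label and attach it to a node
# ("spark" and node1.example.com are placeholder values).
yarn rmadmin -addToClusterNodeLabels "spark"
yarn rmadmin -replaceLabelsOnNode "node1.example.com=spark"

# The capacity scheduler must then allow a queue to access the label.
# In capacity-scheduler.xml, something along these lines:
#   yarn.scheduler.capacity.root.default.accessible-node-labels = spark
#   yarn.scheduler.capacity.root.default.accessible-node-labels.spark.capacity = 100
# followed by: yarn rmadmin -refreshQueues
```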

On Wed, Dec 16, 2015 at 10:41 AM, 张志强(旺轩) <zzq98...@alibaba-inc.com> wrote:

> One more question: do I have to configure labels for my capacity
> scheduler? Is this mandatory?
>
>
>
> *From:* AllenZ [mailto:zzq98...@alibaba-inc.com]
> *Sent:* December 16, 2015 9:21
> *To:* 'Ted Yu'
> *Cc:* 'Saisai Shao'; 'dev'
> *Subject:* Re: spark with label nodes in yarn
>
>
>
> Oops...
>
>
>
> I do use Spark 1.5.0 and Apache Hadoop 2.6.0 (Spark 1.4.1 + Apache Hadoop
> 2.6.0 was a typo), sorry.
>
>
>
> Thanks,
>
> Allen
>
>
>
> *From:* Ted Yu [mailto:yuzhih...@gmail.com]
> *Sent:* December 15, 2015 22:59
> *To:* 张志强(旺轩)
> *Cc:* Saisai Shao; dev
> *Subject:* Re: spark with label nodes in yarn
>
>
>
> Please upgrade to Spark 1.5.x
>
>
>
> 1.4.1 didn't support the node label feature.
>
>
>
> Cheers
>
>
>
> On Tue, Dec 15, 2015 at 2:20 AM, 张志强(旺轩) <zzq98...@alibaba-inc.com> wrote:
>
> Hi SaiSai,
>
>
>
> OK, it makes sense to me; what I need is just to schedule the executors,
> and I will leave at least one NodeManager without any labels.
>
>
>
> It’s weird to me that the YARN page shows my application as running, but
> it is actually still waiting for its executors.
>
>
>
> See the attached.
>
>
>
> Thanks,
>
> Allen
>
>
>
> *From:* Saisai Shao [mailto:sai.sai.s...@gmail.com]
> *Sent:* December 15, 2015 18:07
> *To:* 张志强(旺轩)
> *Cc:* Ted Yu; dev
>
> *Subject:* Re: spark with label nodes in yarn
>
>
>
> SPARK-6470 only supports node label expression for executors.
>
> SPARK-7173 supports node label expression for AM (will be in 1.6).
>
>
>
> If you want to schedule your whole application through a label expression,
> you have to configure both the AM and executor label expressions. If you
> only want to schedule executors through a label expression, the executor
> configuration is enough, but you have to make sure your cluster has some
> nodes with no label.
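>
> A submission along these lines would set both (a sketch; the "spark" label
> name and the example jar path are placeholders):
>
> ```shell
> # Request that both the AM and the executors run on nodes labeled
> # "spark" (spark.yarn.am.nodeLabelExpression requires Spark 1.6+).
> spark-submit \
>   --master yarn-cluster \
>   --conf spark.yarn.am.nodeLabelExpression=spark \
>   --conf spark.yarn.executor.nodeLabelExpression=spark \
>   --class org.apache.spark.examples.SparkPi \
>   lib/spark-examples.jar 100
> ```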
>
>
>
> You can refer to this document (
> http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.2/bk_yarn_resource_mgt/content/configuring_node_labels.html
> ).
>
>
>
> Thanks
>
> Saisai
>
>
>
>
>
> On Tue, Dec 15, 2015 at 5:55 PM, 张志强(旺轩) <zzq98...@alibaba-inc.com> wrote:
>
> Hi Ted,
>
>
>
> Thanks for your quick response, but I think the link you gave me is a
> more advanced feature.
>
> Yes, I noticed SPARK-6470 (https://issues.apache.org/jira/browse/SPARK-6470).
>
>
> And I just tried this feature with Spark 1.5.0; what happened was that I
> was blocked from getting YARN containers after setting the
> spark.yarn.executor.nodeLabelExpression property. My question: will
> https://issues.apache.org/jira/browse/SPARK-7173 fix this?
>
>
>
> Thanks
>
> Allen
>
>
>
>
>
> *From:* Ted Yu [mailto:yuzhih...@gmail.com]
> *Sent:* December 15, 2015 17:39
> *To:* 张志强(旺轩)
> *Cc:* dev@spark.apache.org
> *Subject:* Re: spark with label nodes in yarn
>
>
>
> Please take a look at:
>
> https://issues.apache.org/jira/browse/SPARK-7173
>
>
>
> Cheers
>
>
> On Dec 15, 2015, at 1:23 AM, 张志强(旺轩) <zzq98...@alibaba-inc.com> wrote:
>
> Hi all,
>
>
>
> Has anyone tried label-based scheduling via Spark on YARN? I’ve tried
> it, and it didn’t work: Spark 1.4.1 + Apache Hadoop 2.6.0.
>
>
>
> Any feedback is welcome.
>
>
>
> Thanks
>
> Allen
>
>
>
>
>
