Thanks a lot Saisai and Zhan. I see DefaultResourceCalculator is currently
being used for the capacity scheduler. We will change it to
DominantResourceCalculator.
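For reference, the change should be something like the following in
capacity-scheduler.xml (the file path in the comment is the usual default
and may differ on your distribution), followed by a ResourceManager
restart:

    <!-- in /etc/hadoop/conf/capacity-scheduler.xml -->
    <property>
      <name>yarn.scheduler.capacity.resource-calculator</name>
      <!-- default is DefaultResourceCalculator, which accounts for memory only -->
      <value>org.apache.hadoop.yarn.util.resource.DominantResourceCalculator</value>
    </property>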
Thanks,
Sivakumar Bhavanari.
On Mon, Dec 21, 2015 at 5:56 PM, Zhan Zhang wrote:
BTW: It is not only a YARN web UI issue. In the capacity scheduler, vcores
are ignored. If you want YARN to honor vcore requests, you have to use
DominantResourceCalculator as Saisai suggested.
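A quick way to check which calculator is in effect is to grep the capacity
scheduler config (the path below is assumed; adjust for your setup). If the
property is absent, the scheduler falls back to DefaultResourceCalculator:

    grep -B 1 -A 2 'resource-calculator' /etc/hadoop/conf/capacity-scheduler.xml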
Thanks.
Zhan Zhang
On Dec 21, 2015, at 5:30 PM, Saisai Shao <sai.sai.s...@gmail.com> wrote:
I guess you're using DefaultResourceCalculator for the capacity scheduler,
can you please check your capacity scheduler configuration?
By default, this resource calculator only honors memory as a resource, so
vcores will always show as 1 no matter what value you set (but Spark
internally still runs with the number of executor cores you requested).
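To illustrate (the jar name and memory settings here are made up), with a
submit command like:

    spark-submit \
      --master yarn \
      --deploy-mode cluster \
      --num-executors 10 \
      --executor-cores 4 \
      --executor-memory 4g \
      your-app.jar

the YARN UI under DefaultResourceCalculator will report 11 vcores in use
(one per container: 10 executors plus the driver), even though each
executor actually runs 4 concurrent task threads.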
Hi Saisai,
The total vcores used, as shown in the YARN applications web UI (on port
8088), varies only with the number of executors plus one core for the
driver, no matter what I pass to --executor-cores. If I request 10
executors, I see only 11 vcores being used in the YARN application web UI.
Thanks,
Sivakumar Bhavanari.
On Mon, Dec 21, 2015 at 5:21 PM, Saisai Shao wrote:
Hi Siva,
How did you know that --executor-cores is ignored and where did you see
that only 1 Vcore is allocated?
Thanks
Saisai
On Tue, Dec 22, 2015 at 9:08 AM, Siva wrote:
> Hi Everyone,
>
> Observing a strange problem while submitting a Spark streaming job in
> yarn-cluster mode through spark-submit.