From spark-submit --help:

 YARN-only:
  --executor-cores NUM        Number of cores per executor (Default: 1).
  --queue QUEUE_NAME          The YARN queue to submit to (Default: "default").
  --num-executors NUM         Number of executors to launch (Default: 2).
  --archives ARCHIVES         Comma separated list of archives to be
                              extracted into the working directory of each
                              executor.
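A minimal invocation exercising these YARN-only flags might look like the sketch below (the application file and resource sizes are illustrative, not taken from the thread):

```shell
# Launch in yarn-client mode with explicit executor settings.
# With --num-executors 4, YARN allocates 5 containers:
# 4 executors plus 1 for the application master.
spark-submit \
  --master yarn \
  --deploy-mode client \
  --num-executors 4 \
  --executor-cores 2 \
  --queue default \
  your_app.py   # hypothetical application file
```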

On Thu, Sep 25, 2014 at 2:20 PM, Tamas Jambor <jambo...@gmail.com> wrote:
> Thank you.
>
> Where is the number of containers set?
>
> On Thu, Sep 25, 2014 at 7:17 PM, Marcelo Vanzin <van...@cloudera.com> wrote:
>> On Thu, Sep 25, 2014 at 8:55 AM, jamborta <jambo...@gmail.com> wrote:
>>> I am running spark with the default settings in yarn client mode. For some
>>> reason yarn always allocates three containers to the application (wondering
>>> where it is set?), and only uses two of them.
>>
>> The default number of executors in Yarn mode is 2; so you have 2
>> executors + the application master, so 3 containers.
>>
>>> Also the cpus on the cluster never go over 50%, I turned off the fair
>>> scheduler and set high spark.cores.max. Is there some additional settings I
>>> am missing?
>>
>> You probably need to request more cores (--executor-cores). I don't
>> remember if that is respected in Yarn, but it should be.
>>
>> --
>> Marcelo



-- 
Marcelo

---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org