Thanks Sean, let me try setting spark.deploy.spreadOut to false.
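
For reference, here is roughly what I plan to try. On the master I would set
spark.deploy.spreadOut=false, for example via
SPARK_MASTER_OPTS="-Dspark.deploy.spreadOut=false" in conf/spark-env.sh (my
assumption is that the standalone master picks up spark.deploy.* properties
passed as -D system properties). In the application itself I would cap the
cores with spark.cores.max. A rough Scala sketch, where the app name and
master URL are placeholders:

    import org.apache.spark.{SparkConf, SparkContext}

    // Cap this application at 3 cores so the remaining cores stay free
    // for a second application running at the same time.
    val conf = new SparkConf()
      .setAppName("first-app")                 // placeholder app name
      .setMaster("spark://master-host:7077")   // placeholder standalone master URL
      .set("spark.cores.max", "3")
    val sc = new SparkContext(conf)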

On Sun, Jun 8, 2014 at 12:44 PM, Sean Owen <so...@cloudera.com> wrote:

> Have a look at:
>
> https://spark.apache.org/docs/1.0.0/job-scheduling.html
> https://spark.apache.org/docs/1.0.0/spark-standalone.html
>
> The default is to grab resources on all nodes. In your case you could set
> spark.cores.max to 2 or less to enable running two apps on a cluster of
> 4-core machines simultaneously.
>
> See also spark.deploy.defaultCores
>
> But you may really be after spark.deploy.spreadOut. If you set it to false,
> the master will instead try to take all resources from as few nodes as possible.
>  On Jun 8, 2014 1:55 AM, "Subacini B" <subac...@gmail.com> wrote:
>
>> Hi All,
>>
>> My cluster has 5 workers, each with 4 cores (so 20 cores total). It is in
>> standalone mode (not using Mesos or YARN). I want two programs to run at the
>> same time, so I have configured "spark.cores.max=3", but when I run the
>> program it allocates three cores by taking one core from each of three
>> workers, so 3 workers end up running the program.
>>
>> How do I configure it so that it takes 3 cores from 1 worker, leaving the
>> other workers free for the second program?
>>
>> Thanks in advance
>> Subacini
>>
>
