Hi Ted

Can you specify the cores as follows, for example 12 cores?

  val conf = new SparkConf().
               setAppName("ImportStat").
               setMaster("local[12]").
               set("spark.driver.allowMultipleContexts", "true").
               set("spark.hadoop.validateOutputSpecs", "false")
  val sc = new SparkContext(conf)
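
In case the job has to run against the standalone master (spark://<hostname>:7077)
rather than in local mode, here is a minimal sketch; the master URL below is only
a placeholder, and the point is that spark.cores.max has to go on the SparkConf
before the SparkContext is created:

  import org.apache.spark.{SparkConf, SparkContext}

  // Sketch only: "spark://master-host:7077" stands in for the real master URL.
  // spark.cores.max caps the total cores the application may take on the
  // cluster; it must be set before the SparkContext is constructed, because
  // changes made through sc.getConf() afterwards are not picked up.
  val conf = new SparkConf().
               setAppName("ImportStat").
               setMaster("spark://master-host:7077").
               set("spark.cores.max", "12")
  val sc = new SparkContext(conf)

On the command line, spark-submit's --total-executor-cores should give the same
cap in standalone mode, as it maps to spark.cores.max.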


Dr Mich Talebzadeh



LinkedIn:
https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw



http://talebzadehmich.wordpress.com



On 30 March 2016 at 14:59, Ted Yu <yuzhih...@gmail.com> wrote:

> -c CORES, --cores CORES Total CPU cores to allow Spark applications to
> use on the machine (default: all available); only on worker
>
> bq. sc.getConf().set()
>
> I think you should use this pattern (shown in
> https://spark.apache.org/docs/latest/spark-standalone.html):
>
> val conf = new SparkConf()
>              .setMaster(...)
>              .setAppName(...)
>              .set("spark.cores.max", "1")
> val sc = new SparkContext(conf)
>
>
> On Wed, Mar 30, 2016 at 5:46 AM, vetal king <greenve...@gmail.com> wrote:
>
>> Hi all,
>>
>> While submitting a Spark job I am specifying the options --executor-cores 1
>> and --driver-cores 1. However, when the job was submitted, it used all
>> available cores. So I tried to limit the cores within my main function with
>>         sc.getConf().set("spark.cores.max", "1"); however, it still used all
>> available cores.
>>
>> I am using Spark in standalone mode (spark://<hostname>:7077)
>>
>> Any idea what I am missing?
>> Thanks in Advance,
>>
>> Shridhar
>>
>>
>
