[...] more than one worker per machine
>
> Sent from Samsung Mobile
>
> Original message
> From: Sandy Ryza
> Date: 2015/06/10 21:31 (GMT+00:00)
> To: Evo Eftimov
> Cc: maxdml, user@spark.apache.org
> Subject: Re: Determining number of executors within RDD
>
> Original message
> From: maxdml
> Date: 2015/06/10 19:56 (GMT+00:00)
> To: user@spark.apache.org
> Subject: Re: Determining number of executors within RDD
Actually this is somewhat confusing, for two reasons:
- First, the option 'spark.executor.instances', which seems to be dealt with only in the case of YARN in the source code of SparkSubmit.scala, is also present in the conf/spark-env.sh file under the standalone section, which would indicate that it also applies to the standalone mode. [...]
Note that this property is only available for YARN.
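By the way, a runtime workaround that sidesteps the property entirely is to ask the SparkContext how many block managers it has registered. A minimal sketch, assuming Spark 1.x; getExecutorMemoryStatus returns one entry per block manager, and subtracting one for the driver's own entry is my assumption, not something from this thread:

import org.apache.spark.{SparkConf, SparkContext}

val sc = new SparkContext(new SparkConf().setAppName("executor-count"))
// One map entry per registered block manager; the driver also has one,
// so subtracting 1 approximates the number of executors (assumption).
val executors = sc.getExecutorMemoryStatus.size - 1
println(s"Executors currently registered: $executors")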
---
Hi Akshat,
I assume what you want is to control the number of partitions in your RDD, which is easily achievable by passing the numSlices or minSplits argument at RDD creation time. For example:

val someRDD = sc.parallelize(someCollection, numSlices)
// or
val someRDD = sc.textFile(pathToFile, minSplits)
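To check the result, you can read the partition count back from the RDD itself; partitions is a standard RDD member, and the 8 below is just an example value:

val rdd = sc.parallelize(1 to 1000, 8)
// partitions.length reports how many partitions the RDD actually has.
println(rdd.partitions.length) // 8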
You could try issuing a get on the SparkConf object. I don't have the exact name of the matching key, but from reading the code in SparkSubmit.scala it should be something like:
conf.get("spark.executor.instances")
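Since that key is only guaranteed to be set when running on YARN, a safer variant is getOption, which returns None instead of throwing NoSuchElementException when the key is missing (same guessed key name as above):

// sc is the running SparkContext; getConf returns a copy of its SparkConf.
val numExecutors = sc.getConf.getOption("spark.executor.instances")
println(numExecutors.getOrElse("spark.executor.instances is not set"))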