The Spark version I am using is Spark 2.1.
On Thu, Mar 30, 2017 at 9:58 AM, shyla deshpande wrote:
> Thanks
>
Thanks

> import org.apache.spark.{SparkConf, SparkContext}
>
> /**
> * Created by sneha.shukla on 17/06/16.
> */
>
> object TestCode {
>
>   def main(args: Array[String]): Unit = {
>
>     val sparkConf = new SparkConf().setAppName("HBaseRead").setMaster("local")
Hi,
Any pointers? I'm not sure if this thread is reaching the right audience?
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/numBins-property-not-honoured-in-BinaryClassificationMetrics-class-when-spark-default-parallelism-is1-tp27204p27269.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
import org.apache.spark.mllib.evaluation.BinaryClassificationMetrics
import org.apache.spark.{SparkConf, SparkContext}

object TestCode {
  def main(args: Array[String]): Unit = {
    val sparkConf = new SparkConf().setAppName("HBaseRead").setMaster("local")
    sparkConf.set("spark.default.parallelism", "1")
import org.apache.spark.mllib.evaluation.BinaryClassificationMetrics
import org.apache.spark.{SparkConf, SparkContext}

/**
 * Created by sneha.shukla on 17/06/16.
 */
object TestCode {
  def main(args: Array[String]): Unit = {
    val sparkConf = new SparkConf().setAppName("HBaseRead").setMaster("local")
    sparkConf.set("spark.default.parallelism", "1")
")
.intConf
.createWithDefault(200)
> On 20 May 2016, at 13:17, 喜之郎 <251922...@qq.com> wrote:
>
> Hi all.
> I set spark.default.parallelism to 20 in spark-default.conf and sent this
> file to all nodes.
> But I found the reduce number is still the default value, 200.
> Does anyone else encounter this problem? Can anyone give some advice?
You need to use `spark.sql.shuffle.partitions`.
// maropu
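To make that concrete, here is a minimal sketch (a made-up job, assuming Spark 2.x, not code from the thread): spark.default.parallelism only governs RDD shuffles, while DataFrame/SQL shuffles use spark.sql.shuffle.partitions.

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.col

object ShufflePartitionsSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("ShufflePartitionsSketch")
      .master("local[4]")
      .config("spark.default.parallelism", "20")    // RDD shuffles only
      .config("spark.sql.shuffle.partitions", "20") // DataFrame/SQL shuffles
      .getOrCreate()

    val counts = spark.range(1000).toDF("id")
      .groupBy((col("id") % 10).as("bucket"))
      .count()

    // Without spark.sql.shuffle.partitions this would be the default, 200.
    println(counts.rdd.getNumPartitions)
    spark.stop()
  }
}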
On Fri, May 20, 2016 at 8:17 PM, 喜之郎 <251922...@qq.com> wrote:
> Hi all.
> I set spark.default.parallelism to 20 in spark-default.conf and sent this
> file to all nodes.
> But I found the reduce number is still the default value, 200.
Hi all.
I set spark.default.parallelism to 20 in spark-default.conf and sent this
file to all nodes.
But I found the reduce number is still the default value, 200.
Does anyone else encounter this problem? Can anyone give some advice?
[Stage 9
Hi,
I have four single-core machines as slaves in my cluster. I set
spark.default.parallelism to 4 and ran the SparkTC example. It took
around 26 sec.
Now, I increased spark.default.parallelism to 8, but performance
deteriorated: the same application takes 32 sec now.
I have
Hi Grzegorz,
From my understanding, for the cogroup operation (which is used by
intersection), if spark.default.parallelism is not set by the user, it won't
bother to use the default value; it will use the partition number (the max one
among all the RDDs in the cogroup operation) to build up a partitioner.
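A small sketch (names and partition counts are made up) illustrating that behaviour:

import org.apache.spark.{SparkConf, SparkContext}

object CogroupPartitionsSketch {
  def main(args: Array[String]): Unit = {
    // Note: spark.default.parallelism is deliberately NOT set here.
    val sc = new SparkContext(
      new SparkConf().setAppName("CogroupPartitionsSketch").setMaster("local[4]"))

    val a = sc.parallelize(1 to 100, 8).map(x => (x, x))
    val b = sc.parallelize(1 to 100, 3).map(x => (x, x))

    // With no partitioner on either input and no explicit
    // spark.default.parallelism, cogroup takes the max partition count
    // of its inputs: max(8, 3) = 8.
    println(a.cogroup(b).getNumPartitions) // 8

    sc.stop()
  }
}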
Hi,
consider the following code:
import org.apache.spark.{SparkContext, SparkConf}

object ParallelismBug extends App {
  var sConf = new SparkConf()
    .setMaster("spark://hostName:7077") // .setMaster("local[4]")
    .set("spark.default.parallelism", "