Date: August 9, 2016 12:19 AM
Subject: Re: SparkR error when repartition is called
To: Sun Rui <sunrise_...@163.com>
Cc: User <user@spark.apache.org>
Sun,

I am using Spark in YARN client mode on a 2-node cluster with hadoop-2.7.2. My
R version is 3.3.1.

I have the following in my spark-defaults.conf:

spark.executor.extraJavaOptions=-XX:+PrintGCDetails -XX:+HeapDumpOnOutOfMemoryError
spark.r.command=c:/R/R-3.3.1/bin/x64/Rscript
spark.ui.killEnab
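(For reference, a minimal sketch of passing the same settings programmatically
when starting the SparkR session; launching via sparkR.session() rather than
through spark-defaults.conf is an assumption here, not something shown in the
thread:)

library(SparkR)

# Hedged sketch: the settings quoted above, supplied as sparkConfig.
# master = "yarn" with the default client deploy mode matches the
# YARN client mode described in the mail.
sparkR.session(
  master = "yarn",
  sparkConfig = list(
    spark.executor.extraJavaOptions =
      "-XX:+PrintGCDetails -XX:+HeapDumpOnOutOfMemoryError",
    spark.r.command = "c:/R/R-3.3.1/bin/x64/Rscript"
  )
)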
I can’t reproduce your issue with len=1 in local mode.
Could you give more environment information?
> On Aug 9, 2016, at 11:35, Shane Lee wrote:
>
> Hi All,
>
> I am trying out SparkR 2.0 and have run into an issue with repartition.
>
> Here is the R code (essentially a port of the pi-calculation example):
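
The script itself was truncated in the archive. As a hedged reconstruction
only, the following is a minimal SparkR 2.0 pi-calculation sketch that
exercises repartition(); the variable len, the partition count, and the use
of dapplyCollect() are assumptions, not Shane's original code:

library(SparkR)
sparkR.session(master = "local[*]")

# Assumed driver-side setup: "len" mirrors the parameter Sun Rui mentions.
len <- 1000
df <- createDataFrame(data.frame(id = 1:len))

# The call reported to fail in the thread.
df2 <- repartition(df, numPartitions = 8L)

# Monte Carlo estimate of pi: per partition, count random points that fall
# inside the unit quarter circle, then combine the counts on the driver.
hits <- dapplyCollect(df2, function(part) {
  x <- runif(nrow(part))
  y <- runif(nrow(part))
  data.frame(inside = sum(x * x + y * y <= 1))
})
cat("pi is roughly", 4 * sum(hits$inside) / len, "\n")

sparkR.session.stop()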