Re: SparkR error when repartition is called

2016-08-09 Thread Felix Cheung
August 9, 2016 12:19 AM
Subject: Re: SparkR error when repartition is called
To: Sun Rui <sunrise_...@163.com>
Cc: User <user@spark.apache.org>

Sun,

I am using spark in yarn client mode in a 2-node cluster with hadoop-2.7.2. My R version is 3.3.1. I have the following ...

Re: SparkR error when repartition is called

2016-08-09 Thread Shane Lee
Sun,

I am using spark in yarn client mode in a 2-node cluster with hadoop-2.7.2. My R version is 3.3.1. I have the following in my spark-defaults.conf:

spark.executor.extraJavaOptions=-XX:+PrintGCDetails -XX:+HeapDumpOnOutOfMemoryError
spark.r.command=c:/R/R-3.3.1/bin/x64/Rscript
spark.ui.killEnab...
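[For context, the same settings can also be supplied programmatically when the SparkR session is started, instead of via spark-defaults.conf. The sketch below only mirrors the configuration keys quoted above; the master value and the list syntax are illustrative assumptions, not part of the original message.]

library(SparkR)

# Start a SparkR 2.0 session in YARN client mode, passing the same
# configuration keys that the spark-defaults.conf above sets.
# The master value is an assumption; the truncated spark.ui.* key is omitted.
sparkR.session(
  master = "yarn",
  sparkConfig = list(
    spark.executor.extraJavaOptions =
      "-XX:+PrintGCDetails -XX:+HeapDumpOnOutOfMemoryError",
    spark.r.command = "c:/R/R-3.3.1/bin/x64/Rscript"
  )
)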

Re: SparkR error when repartition is called

2016-08-08 Thread Sun Rui
I can’t reproduce your issue with len=1 in local mode. Could you give more environment information?

> On Aug 9, 2016, at 11:35, Shane Lee wrote:
>
> Hi All,
>
> I am trying out SparkR 2.0 and have run into an issue with repartition.
>
> Here is the R code (essentially a port of the pi-calc...
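[The archive truncates the quoted R code. As a rough illustration only, a pi-estimation script that exercises repartition in SparkR 2.0 could look like the sketch below; the sample size, partition count, and dapply-based sampling are assumptions and not Shane's original code.]

library(SparkR)
sparkR.session()

# The original script has a `len` parameter (Sun Rui tested with len = 1);
# 10000 here is an illustrative value.
len <- 10000

# Distribute the indices as a SparkDataFrame and force a repartition,
# which is the call that triggers the reported error.
df <- repartition(createDataFrame(data.frame(i = 1:len)), numPartitions = 4L)

# For each partition, draw random points and flag those inside the unit circle.
hits <- dapply(df,
               function(part) {
                 n <- nrow(part)
                 x <- runif(n); y <- runif(n)
                 data.frame(inside = as.integer(x * x + y * y <= 1))
               },
               structType(structField("inside", "integer")))

# 4 * (fraction of points inside the quarter circle) approximates pi.
approx_pi <- 4 * sum(collect(hits)$inside) / len
print(approx_pi)

sparkR.session.stop()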