Could you show your Spark version?

And the value of `spark.default.parallelism` you are setting?
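
If it helps, both can be checked from the spark-shell, for example (a quick sketch; the exact output may differ a bit between releases):

// print the running Spark version
println(sc.version)

// the configured value, if any, and the effective default parallelism
println(sc.getConf.get("spark.default.parallelism", "<not set>"))
println(sc.defaultParallelism)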

Best Regards,

Yi Tian
tianyi.asiai...@gmail.com




On Oct 20, 2014, at 12:38, Kevin Jung <itsjb.j...@samsung.com> wrote:

> Hi,
> I usually use a file on HDFS to make a PairRDD and analyze it using
> combineByKey, reduceByKey, etc.
> But sometimes it hangs when I set the spark.default.parallelism
> configuration, even though the file is small.
> If I remove this configuration, everything works fine.
> Can anyone tell me why this occurs?
> 
> Regards,
> Kevin
> 
> 
> 
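
For anyone following along, here is a minimal, self-contained sketch of the kind of job described above with the parallelism setting applied. The HDFS path, key extraction, and parallelism value are placeholders, not the actual code from this thread:

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.SparkContext._   // PairRDD implicits on older 1.x releases

// placeholder repro of the workflow above; path and values are made up
val conf = new SparkConf()
  .setAppName("parallelism-repro")
  .set("spark.default.parallelism", "8")   // the setting in question
val sc = new SparkContext(conf)

// read a small file from HDFS and build a PairRDD of (key, 1)
val pairs = sc.textFile("hdfs:///path/to/small/file")
  .map(line => (line.split("\t")(0), 1))

// reduceByKey falls back to spark.default.parallelism when no explicit
// number of partitions is passed
val counts = pairs.reduceByKey(_ + _)
counts.collect().foreach(println)

sc.stop()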


---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org
