Too many GCs.

The task runs much faster with more memory (heap space), but the CPU load is
still too high, and the network load is only around 20+ MB/s (not high enough).

So what is the correct way to solve this GC problem? Are there other ways
besides using more memory?
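For what it's worth, GC pressure in Spark can often be reduced without simply growing the heap, e.g. by caching data in serialized form (fewer, larger objects for the GC to scan) and by tuning the executor JVM's collector. A minimal sketch of the relevant settings for the Spark 1.x config style (the specific flag values below are assumptions and need tuning for the actual workload):

```
# spark-defaults.conf -- illustrative values, not recommendations
# Use Kryo so cached/shuffled data serializes compactly
spark.serializer                org.apache.spark.serializer.KryoSerializer
# Shrink the cache's share of the heap to leave more room for shuffle/task objects
spark.storage.memoryFraction    0.3
# Low-pause collector plus GC logging to see where the time goes
spark.executor.extraJavaOptions -XX:+UseConcMarkSweepGC -verbose:gc -XX:+PrintGCDetails
```

Persisting RDDs with `StorageLevel.MEMORY_ONLY_SER` instead of the default `MEMORY_ONLY` has a similar effect on the caching side, trading some CPU for far less GC work.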



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/Spark-1-0-1-SparkSQL-reduce-stage-of-shuffle-is-slow-tp10765p10922.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
