I am having GC issues with the default value of spark.sql.shuffle.partitions
(200). When I increase it to 2000, the shuffle join works fine.

I want to use different values of spark.sql.shuffle.partitions, depending on
data volume, for different queries fired from the same SparkSQL context.
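For example, something along these lines is what I have in mind. This is only
a sketch of what I'm trying to do on a Spark 1.x SQLContext; the table names
are hypothetical, and I'm assuming the setting is picked up fresh by each
query at planning time:

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.SQLContext

    val sc = new SparkContext(new SparkConf().setAppName("per-query-partitions"))
    val sqlContext = new SQLContext(sc)

    // Small data volume: the default 200 shuffle partitions is fine.
    sqlContext.setConf("spark.sql.shuffle.partitions", "200")
    val small = sqlContext.sql(
      "SELECT a.id, b.value FROM small_a a JOIN small_b b ON a.id = b.id")
    small.collect()

    // Large data volume: raise the partition count on the same context
    // before firing the next query. (Table names are made up.)
    sqlContext.setConf("spark.sql.shuffle.partitions", "2000")
    val large = sqlContext.sql(
      "SELECT a.id, b.value FROM big_a a JOIN big_b b ON a.id = b.id")
    large.collect()

    // The same thing expressed as SQL:
    // sqlContext.sql("SET spark.sql.shuffle.partitions=2000")

Is switching the conf between queries like this safe, or is there a better way?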

Thanks
Tridib


