Hi,

I saw that Spark has options to adapt the join and shuffle configuration.
For example: "spark.sql.adaptive.shuffle.targetPostShuffleInputSize"

I wanted to know if you have any experience with such a configuration, and how
it changed performance.

Another question is whether, during Spark SQL query execution, there is an
option to dynamically change the shuffle partition config?
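For context, a minimal PySpark sketch of both points: enabling adaptive execution with the target post-shuffle input size (a Spark 2.x property; in Spark 3.x it was superseded by spark.sql.adaptive.advisoryPartitionSizeInBytes), and changing spark.sql.shuffle.partitions at runtime between queries, which is allowed because it is a runtime-mutable SQL conf. The table names here are hypothetical placeholders; this needs a running Spark session, so it is a configuration sketch rather than a standalone script.

```python
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("adaptive-shuffle-sketch")
    # Enable adaptive execution so post-shuffle partitions are coalesced.
    .config("spark.sql.adaptive.enabled", "true")
    # Spark 2.x name; roughly 128 MB per post-shuffle partition.
    .config("spark.sql.adaptive.shuffle.targetPostShuffleInputSize", "134217728")
    .getOrCreate()
)

# spark.sql.shuffle.partitions can be changed between actions; the next
# shuffle picks up the new value (it is not fixed for the whole session).
spark.conf.set("spark.sql.shuffle.partitions", "400")
big = spark.table("events").groupBy("user_id").count()      # hypothetical table

spark.conf.set("spark.sql.shuffle.partitions", "50")
small = spark.table("lookup_dim").groupBy("type").count()   # hypothetical table
```

Note this only changes the partition count per query, not per stage within a single query; stage-level adjustment is what the adaptive execution feature itself provides.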

Thanks,
Tzahi
