Hi.
I have a job that takes ~50 min on Spark 0.9.3 and ~1.8 hrs on Spark
1.3.1, on the same cluster.

The only code difference between the two code bases is the fix for the
Seq -> Iterable changes that happened in the Spark 1.x series.
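
To make that concrete, here is a minimal sketch of the kind of fix I
mean, assuming a groupByKey call site and that sc is a SparkContext (as
in spark-shell); the names and data are made up:

    import org.apache.spark.SparkContext._

    val pairs = sc.parallelize(Seq(("a", 1), ("a", 2), ("b", 3)))
    // Spark 0.9.x: groupByKey returned RDD[(String, Seq[Int])].
    // Spark 1.x:   it returns RDD[(String, Iterable[Int])], so Seq-only
    // methods (indexing, length, etc.) no longer compile.
    val grouped = pairs.groupByKey()
    val maxes = grouped.mapValues(vs => vs.max)  // max works on Iterable
    maxes.collect().foreach(println)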

Are there any other changes in the defaults from Spark 0.9.3 -> 1.3.1
that would cause such a large degradation in performance? Changes in
partitioning algorithms, scheduling, etc.?
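
In case it helps to narrow things down, here is how I could pin a
suspected default back to its old value for an A/B run;
spark.shuffle.manager is just a guess at a relevant knob (I believe the
sort-based shuffle became the default during the 1.x series):

    import org.apache.spark.{SparkConf, SparkContext}

    val conf = new SparkConf()
      .setAppName("regression-bisect")       // hypothetical app name
      .set("spark.shuffle.manager", "hash")  // revert to the old hash shuffle
    val sc = new SparkContext(conf)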

shay
