Hi all
I have noticed that in PySpark the "join" operator is implemented with union
and groupByKey rather than with cogroup (as in Scala). This will probably
generate an extra shuffle stage. For example:
rdd1 = sc.parallelize(...).partitionBy(2)
rdd2 = sc.parallelize(...).partitionBy(2)
rdd3 = rdd1.join(rdd2).collect()
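
For concreteness, here is my reconstruction of that pattern (a simplified
sketch of the union + groupByKey approach, not the actual PySpark source;
the names join_via_union and cross are mine):

def join_via_union(rdd1, rdd2):
    # Tag each value with its side, then merge both inputs into one RDD.
    tagged = rdd1.mapValues(lambda v: (1, v)) \
                 .union(rdd2.mapValues(lambda v: (2, v)))

    def cross(values):
        # Inner-join semantics: pair every left value with every right value.
        left = [v for (tag, v) in values if tag == 1]
        right = [v for (tag, v) in values if tag == 2]
        return [(l, r) for l in left for r in right]

    # groupByKey has to repartition the unioned RDD by key, even when rdd1
    # and rdd2 were already co-partitioned -- this is the extra shuffle.
    return tagged.groupByKey().flatMapValues(cross)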
The rdd1/rdd2/rdd3 example generates two shuffles when written in Scala, but
three with Python. What was the original design motivation for implementing
join this way in PySpark? And are there any ideas for improving join
performance in PySpark?
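
One idea I have considered is building join on cogroup directly (a sketch;
this only helps if PySpark's cogroup is not itself implemented with the
same union + groupByKey pattern, which I have not verified):

def join_via_cogroup(rdd1, rdd2):
    def cross(pair):
        left, right = pair
        return [(l, r) for l in left for r in right]
    # A cogroup-based join shuffles each input at most once, and not at
    # all if both sides are already hash-partitioned the same way, which
    # would match the Scala behavior.
    return rdd1.cogroup(rdd2).flatMapValues(cross)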
Andrew