reduceByKey(min)\
.map(lambda (l,n):l).cache()
L.collect()
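For readers who want to reproduce the behaviour, here is a minimal self-contained sketch of the kind of loop described in this thread; the SparkContext setup, the RDD names A and L, and the dummy data are assumptions for illustration, not the original code:

# Minimal sketch (hypothetical names and data): cartesian in a loop,
# squeezed back each iteration with reduceByKey(min).
from pyspark import SparkContext

sc = SparkContext(appName="cartesian-loop-sketch")

A = sc.parallelize(range(1000))   # fixed RDD
L = sc.parallelize(range(100))    # RDD that is rebuilt in every iteration

for i in range(10):
    L = (L.cartesian(A)                      # blow up to |L| * |A| pairs
          .map(lambda ln: (ln[0], ln[1]))    # (label, number) pairs
          .reduceByKey(min)                  # squeeze back to one entry per label
          .map(lambda kv: kv[0])             # keep only the labels, as in the snippet above
          .cache())
    L.collect()                              # per-iteration runtime is measured here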
Does somebody have an explanation for that?
I run Spark 1.5.0 with seven workers and PySpark.
Thanks
Hi All,
I have a problem with the cartesian product. I build the cartesian product of RDDs in a loop and update one of the variables in each iteration. At the end of the iteration the variable is squeezed back to its original size. Therefore, I expect the same running time for each iteration, because the result of the cartesian
pro