Hello,

Has anyone got any ideas? I am not quite sure whether my problem is an exact fit
for Spark, since in this section of my program I am not really doing a reduce
job, just a group by followed by a partition.

Would calling pipe on the partitioned JavaRDD do the trick? Are there any
examples that use pipe?
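
To make the question concrete, here is roughly what I have in mind as a
self-contained sketch. The key type, partition count, and the "cat" command are
just placeholders for my real data and external program, and my understanding
of what pipe does per partition may well be off:

import java.util.Arrays;
import java.util.List;

import org.apache.spark.HashPartitioner;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

import scala.Tuple2;

public class GroupPartitionPipeSketch {
  public static void main(String[] args) {
    SparkConf conf = new SparkConf().setMaster("local").setAppName("group-partition-pipe");
    JavaSparkContext sc = new JavaSparkContext(conf);

    // Toy stand-in for my real records, keyed by the field I group on.
    JavaPairRDD<String, String> records = sc.parallelizePairs(Arrays.asList(
        new Tuple2<String, String>("a", "1"),
        new Tuple2<String, String>("a", "2"),
        new Tuple2<String, String>("b", "3")));

    // Group without reducing; passing a Partitioner here does the
    // "group by + partition" in a single shuffle.
    JavaPairRDD<String, ?> grouped = records.groupByKey(new HashPartitioner(4));

    // pipe() launches the command once per partition, writes each element's
    // toString() to the process's stdin (one line per element), and returns
    // the process's stdout lines as a new RDD of Strings. "cat" is just a
    // stand-in for the external program I actually want to call.
    JavaRDD<String> piped = grouped.pipe("cat");

    List<String> out = piped.collect();
    System.out.println(out);
    sc.stop();
  }
}

If that is roughly how pipe is meant to be used, then each partition would be
fed to one instance of my external program, which is exactly what I need.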

Thanks
Dimitri

