I build it with sbt package and run it with sbt run, and I use
SparkConf.set for the deployment options and external jars. It seems that
spark-submit can't load the extra jars, which leads to a NoClassDefFoundError.
Should I pack all the jars into one assembly jar and give that a try?
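
For reference, here's roughly how I pass the jars through SparkConf right
now (the app name and jar paths below are placeholders, not my real ones):

    import org.apache.spark.{SparkConf, SparkContext}

    // Sketch only: setJars ships the listed jars to the executors,
    // which should avoid the NoClassDefFoundError without a fat jar.
    val conf = new SparkConf()
      .setAppName("GraphXTest")  // placeholder name
      .setJars(Seq("/path/to/dep1.jar", "/path/to/dep2.jar"))  // placeholder paths
    val sc = new SparkContext(conf)

The spark-submit equivalent would be its --jars option (a comma-separated
list); if that still fails, a single assembly jar from sbt-assembly would
sidestep the classpath question entirely.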

I run it on a cluster of 8 machines; the test data consists of 1,000,000
vertices, and the edges are sparse. I use Graph.apply to build the graph.
Before building it, I tested the vertices: RDD[(VertexId, VD)] and edges:
RDD[Edge[ED]] with count and first, and the output looks fine.
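
In outline, the construction looks like this (the attribute types and
sample data are simplified stand-ins for my real ones):

    import org.apache.spark.graphx._
    import org.apache.spark.rdd.RDD

    // Simplified stand-in for my actual vertex/edge attributes.
    val vertices: RDD[(VertexId, String)] =
      sc.parallelize(Seq((1L, "a"), (2L, "b"), (3L, "c")))
    val edges: RDD[Edge[Int]] =
      sc.parallelize(Seq(Edge(1L, 2L, 1), Edge(2L, 3L, 1)))

    // Sanity checks before the build, as described above.
    println(vertices.count()); println(vertices.first())
    println(edges.count()); println(edges.first())

    // The third argument is the default attribute for vertices that appear
    // in edges but are missing from the vertex RDD; if it (or any vertex
    // attribute) is null, Pregel can hit a NullPointerException later.
    val graph: Graph[String, Int] = Graph(vertices, edges, "default")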

I'm using Ubuntu 12.04 and Spark 1.0.1 with the serializable bug fixed;
Java was installed from openjdk-7-jdk.

BTW, is there a chance that Bagel would work fine?

Thanks!
