Hi,
I have a simple accumulator that needs to be passed to a foo() function
inside a map job:
val myCounter = sc.accumulator(0)
val myRDD = sc.textFile(inputpath) // : spark.RDD[String]
myRDD.flatMap(line => foo(line))

def foo(line: String) = {
  myCounter += 1 // line throwing NullPointerException
  // compute something on the input
}
If I pass myCounter as a parameter to foo(), I get a NotSerializableException
for SparkContext. If I keep myCounter global (in a Scala singleton object),
I get a NullPointerException.
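
For reference, here is roughly what the parameter-passing variant looks like
(fooWithAcc and the split call are just placeholders for my real code), in
case it helps pin down where the SparkContext gets pulled into the closure:

import org.apache.spark.Accumulator

def fooWithAcc(line: String, counter: Accumulator[Int]): Seq[String] = {
  counter += 1                 // increment the accumulator per input line
  // compute something on the input
  line.split(" ").toSeq
}

val myCounter = sc.accumulator(0)
myRDD.flatMap(line => fooWithAcc(line, myCounter))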
What is the proper way to distribute an accumulator to all the nodes when
running on a cluster?
Thanks,