at org.apache.spark.rdd.RDD$$anonfun$15.apply(RDD.scala:647)
at org.apache.spark.rdd.RDD$$anonfun$15.apply(RDD.scala:647)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
at org.
Putting your code in a file, I find the following on line 17:
stepAcc = new StepAccumulator();
However, I don't think that is where the NPE was thrown.
Another thing I don't understand is that there were two addAccumulator()
calls at the top of the stack trace, while in your code I don't see two such calls.
The code was written for Spark 1.4, but I am compiling it and running it with Spark 1.3.
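For reference, here is a minimal sketch of what an AccumulableParam implementation
looks like in the Spark 1.3 / 1.4 Java API. The StepAccumulatorSketch name and the
map types are assumptions chosen to match the imports quoted below, not your actual code:

import java.io.Serializable;

import it.unimi.dsi.fastutil.objects.Object2ObjectOpenHashMap;
import org.apache.spark.AccumulableParam;

// Minimal sketch, not the actual StepAccumulator: the key/value types
// are assumptions based on the fastutil import in this thread.
public class StepAccumulatorSketch implements
        AccumulableParam<Object2ObjectOpenHashMap<String, Long>, String>,
        Serializable {

    // Runs on executors: folds one element into the partition-local value.
    // A null map argument here would surface as exactly this kind of NPE.
    public Object2ObjectOpenHashMap<String, Long> addAccumulator(
            Object2ObjectOpenHashMap<String, Long> map, String key) {
        Long old = map.get(key);
        map.put(key, old == null ? 1L : old + 1L);
        return map;
    }

    // Runs on the driver: merges two partition-local values.
    public Object2ObjectOpenHashMap<String, Long> addInPlace(
            Object2ObjectOpenHashMap<String, Long> m1,
            Object2ObjectOpenHashMap<String, Long> m2) {
        m1.putAll(m2); // simplistic merge; real code would combine counts
        return m1;
    }

    // Initial value for each partition-local copy of the accumulator.
    public Object2ObjectOpenHashMap<String, Long> zero(
            Object2ObjectOpenHashMap<String, Long> initial) {
        return new Object2ObjectOpenHashMap<String, Long>();
    }
}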
import it.unimi.dsi.fastutil.objects.Object2ObjectOpenHashMap;
import org.apache.spark.AccumulableParam;
import scala.Tuple4;
import thomsonreuters.trailblazer.operation.DriverCalc;
import thomsonreuters.trailblazer.operati
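Given those imports, registering the accumulable on the driver would look roughly
like the sketch below. The sc handle and the DriverSetupSketch class are assumptions
for illustration, and StepAccumulatorSketch is the placeholder param from the sketch above:

import it.unimi.dsi.fastutil.objects.Object2ObjectOpenHashMap;
import org.apache.spark.Accumulable;
import org.apache.spark.api.java.JavaSparkContext;

public class DriverSetupSketch {
    // Hypothetical driver-side registration; the Accumulable created this
    // way is what should be referenced inside closures.
    static Accumulable<Object2ObjectOpenHashMap<String, Long>, String>
            register(JavaSparkContext sc) {
        return sc.accumulable(
                new Object2ObjectOpenHashMap<String, Long>(),
                new StepAccumulatorSketch());
    }
}

One guess, not a diagnosis: if stepAcc is a plain field assigned with
new StepAccumulator() and only initialized on the driver, it can come up
null on the executors, and the first addAccumulator() call there would
throw an NPE like the one in the trace.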
Can you show related code in DriverAccumulator.java?
Which Spark release do you use?
Cheers
On Mon, Aug 3, 2015 at 3:13 PM, Anubhav Agarwal wrote:
> Hi,
> I am trying to modify my code to use HDFS and multiple nodes. The code
> works fine when I run it locally on a single machine with a single worker.
Hi,
I am trying to modify my code to use HDFS and multiple nodes. The code
works fine when I run it locally on a single machine with a single worker.
I have been trying to modify it, and I get the following error. Any hint
would be helpful.
java.lang.NullPointerException
at thomsonreuters.