&& finalStage.parents.isEmpty &&
partitions.length == 1
I'm wondering whether "running locally" is disabled by default.
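For reference, the surrounding check in DAGScheduler (Spark 1.1.x) looks roughly
like the sketch below; the "spark.localExecution.enabled" key and its default of
false are my reading of the 1.1 code, so treat them as an assumption:

    // DAGScheduler (Spark 1.1.x, paraphrased): a job runs locally on the
    // driver only when all four conditions hold. localExecutionEnabled is
    // read from the spark.localExecution.enabled config key (assumed
    // default: false), so local execution appears to be opt-in on 1.1.
    val shouldRunLocally =
      localExecutionEnabled && allowLocal &&
      finalStage.parents.isEmpty && partitions.length == 1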
Yan
From: Du Li [mailto:l...@yahoo-inc.com.INVALID]
Sent: Tuesday, September 16, 2014 12:26 PM
To: Matei Zaharia
Cc: u...@spark.apache.org; dev@spark.apache.org
Hi,
The test case is separated out as follows. The call to rdd2.first() breaks when
the Spark version is changed to 1.1.0, reporting the exception that NullWritable
is not serializable. However, the same test passed with Spark 1.0.2. The pom.xml
file is attached. The test data README.md was copied from spark
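The attachment isn't reproduced here, but a minimal sketch of a test along these
lines (my reconstruction, assuming the README.md data is round-tripped through a
sequence file with NullWritable keys; the path and rdd names are illustrative)
would be:

    // Sketch (assumed shape of the attached test, not the original code):
    // write README.md out as a sequence file keyed by NullWritable, read it
    // back, and call first(), which forces the records to be serialized.
    import org.apache.hadoop.io.{NullWritable, Text}

    val rdd = sc.textFile("README.md")
    rdd.map(line => (NullWritable.get(), new Text(line)))
       .saveAsSequenceFile("./test_data")
    val rdd2 = sc.sequenceFile("./test_data", classOf[NullWritable], classOf[Text])
    rdd2.first() // fails on 1.1.0: NullWritable not serializable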
From: Matei Zaharia
To: Du Li, "u...@spark.apache.org", "dev@spark.apache.org"
Subject: Re: NullWritable not serializable
Hi Du,
I don't think NullWritable has ever been serializable, so you must be doing
something differently from your previous program. In this case though, just use
a map() to turn your Writables to serializable types (e.g. null and String).
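To make that concrete, here is one way to apply the suggestion (a sketch only;
the rdd name and path follow the reproduction above, not code from this thread):

    // Map the Writables to plain serializable types before first()/collect().
    import org.apache.hadoop.io.{NullWritable, Text}

    val rdd2 = sc.sequenceFile("./test_data", classOf[NullWritable], classOf[Text])
      .map { case (_, text) => (null, text.toString) } // NullWritable -> null, Text -> String
    rdd2.first() // safe: the records are now plain Scala/Java types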
Matei
On September 12, 2014 at 8:48:36 PM, Du Li (l...
Hi,
I was trying the following in spark-shell (built from Apache master with Hadoop
2.4.0). Both rdd2.collect and rdd3.collect threw
java.io.NotSerializableException: org.apache.hadoop.io.NullWritable.
I got the same problem in similar code in my app, which uses the newly released