val conf = new SparkConf().set("spark.serializer",
  "org.apache.spark.serializer.KryoSerializer")
val sc = new SparkContext(conf)
This worked for me.
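If your own classes show up in the error, here is a minimal sketch of also registering them with Kryo (MyRecord is a hypothetical class, not from the original thread):

import org.apache.spark.{SparkConf, SparkContext}

// Hypothetical class carried inside RDD records
case class MyRecord(id: Long, payload: String)

val conf = new SparkConf()
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  // Registering classes lets Kryo write a small id instead of the full class name
  .registerKryoClasses(Array(classOf[MyRecord]))
val sc = new SparkContext(conf)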
Regards,
Anish
While submitting the job, you can use --jars, --driver-class-path, etc.
to add the jar. Apart from that, if you are running the job as a
standalone application, you can use sc.addJar to add the jar (which will
ship the jar to all the executors).
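For example (the jar path and app name below are hypothetical), the submit route looks like spark-submit --jars /path/to/extra-lib.jar ..., while the programmatic route is roughly:

import org.apache.spark.{SparkConf, SparkContext}

// Hypothetical standalone application that ships an extra jar to executors
val conf = new SparkConf().setAppName("addjar-example")
val sc = new SparkContext(conf)

// Makes the jar available to tasks running on every executor
sc.addJar("/path/to/extra-lib.jar")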
Regards,
Anish
However, if you are doing a drastic coalesce, e.g. to numPartitions = 1,
this may result in your computation taking place on fewer nodes than you
like (e.g. one node in the case of numPartitions = 1). To
avoid this, you can pass shuffle = true. This will add a shuffle step,
but means the current upstream partitions will be executed in parallel
(per whatever the current partitioning is).
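A minimal sketch of the difference, assuming bigRdd is an existing RDD:

// Without a shuffle: all upstream work collapses onto a single task
val single = bigRdd.coalesce(1)

// With a shuffle: upstream partitions still run in parallel, then a
// shuffle step merges their output into one partition
val singleShuffled = bigRdd.coalesce(1, shuffle = true)

// repartition(n) is equivalent to coalesce(n, shuffle = true)
val rebalanced = bigRdd.repartition(8)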
Regards,
anish
On 8/14/15, Alexander Pivovarov wrote:
> Hi
messages.
Please suggest.
TIA
--
Anish Sneh
"Experience is the best teacher."
+91-99718-55883
http://in.linkedin.com/in/anishsneh