> (class org.apache.spark.sql.execution.ConvertToSafe,
> ConvertToSafe
> +- Scan ParquetRelation[_1#0] InputPaths:
> hdfs://CRUX2-SETUP:9000/data/testdir/data1.parquet
> )
> - field (class: org.apache.spark.sql.execution.ConvertToSafe$$anonfun$2,
>   name: $outer, type: class org.
eaner.scala:301)
... 78 more
Please help!
--
View this message in context:
http://apache-spark-developers-list.1001551.n3.nabble.com/Task-not-Serializable-Exception-tp20417.html
Sent from the Apache Spark Developers List mailing list archive at Nabble.com.
---
I am getting this Spark "Task not serializable" exception when running spark-submit in
standalone mode. I am using Spark Streaming, which gets its stream from
Kafka queues, but it is not able to process the mapping actions on the RDDs
from the stream. The code where the serialization exception
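The `$outer` field in the stack trace above is the usual clue: the lambda passed to the mapping operation captures its enclosing (non-serializable) object, so Spark tries to serialize the whole outer instance and fails. Below is a minimal, Spark-free sketch of that pattern and the standard fix (copy what the closure needs into a local `val` so the lambda no longer captures `this`). The `KafkaHandle` and `Job` names are hypothetical, purely for illustration; they are not from the original post.

```scala
import java.io.{ByteArrayOutputStream, NotSerializableException, ObjectOutputStream}

// Hypothetical driver-side resource that is NOT java.io.Serializable
// (e.g. a connection or client handle).
class KafkaHandle

class Job {
  val handle = new KafkaHandle // non-serializable field on the enclosing object

  // BROKEN pattern: the lambda references `this.handle`, so serializing the
  // closure drags in the whole Job instance via its $outer reference --
  // analogous to the ConvertToSafe$$anonfun$2 / $outer field in the trace.
  def brokenMapper: Int => Int = x => { val h = handle; x + 1 }

  // FIX: copy the needed data into a local val first; the lambda then
  // captures only that serializable value, not `this`.
  def fixedMapper: Int => Int = {
    val increment = 1 // serializable local copy
    x => x + increment
  }
}

// Helper mimicking Spark's closure check: can this object be Java-serialized?
def isSerializable(obj: AnyRef): Boolean =
  try {
    new ObjectOutputStream(new ByteArrayOutputStream()).writeObject(obj)
    true
  } catch {
    case _: NotSerializableException => false
  }
```

Running `isSerializable` on the two mappers shows the broken one fail and the fixed one pass; in a real job the same change (or marking the enclosing class `Serializable`, or using `foreachRDD` with per-partition setup) makes the stage ship to executors.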