I see, thanks
All of that support code uses Hadoop-related classes, like
OutputFormat, to do the writing in Parquet format. There's a Hadoop
code dependency in play here even if the bytes aren't going to HDFS.
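For example, a minimal build.sbt sketch that pins hadoop-client
explicitly (the artifact coordinates are real, but the version numbers
here are assumptions; match them to your own Spark and Hadoop builds):

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-sql" % "1.0.0",
  "org.apache.hadoop" % "hadoop-client" % "2.2.0"
)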
On Tue, Jun 3, 2014 at 10:10 PM, k.tham wrote:
> I've read through that thread, and it seems that, for him, he needed
> to add a particular hadoop-client dependency.
I've read through that thread, and it seems that, for him, he needed to
add a particular hadoop-client dependency.
However, I don't think I should be required to do that, as I'm not
reading from HDFS.
I'm just running a straight-up minimal example, in local mode, out of
the box.
Here's an example:
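(A minimal sketch of such a reproduction, assuming Spark 1.0's
SQLContext; the Person case class and the output path are illustrative.)

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

// Top-level case class: Spark SQL infers the schema via reflection.
case class Person(name: String, age: Int)

object ParquetRepro {
  def main(args: Array[String]) {
    // Plain local mode, no HDFS involved anywhere.
    val sc = new SparkContext(
      new SparkConf().setMaster("local").setAppName("parquet-repro"))
    val sqlContext = new SQLContext(sc)
    import sqlContext.createSchemaRDD  // implicit RDD -> SchemaRDD

    val people = sc.parallelize(Seq(Person("Alice", 30), Person("Bob", 25)))
    // This call throws java.lang.IncompatibleClassChangeError when the
    // Hadoop classes on the classpath don't match what Parquet expects.
    people.saveAsParquetFile("people.parquet")

    sc.stop()
  }
}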
Oh, I missed that thread. Thanks
This thread seems to be about the same issue:
https://www.mail-archive.com/user@spark.apache.org/msg04403.html
On Tue, Jun 3, 2014 at 12:25 PM, k.tham wrote:
> I'm trying to save an RDD as a Parquet file through the
> saveAsParquetFile()
> API,
>
> with code that looks something like:
>
> val