Cc: user <user@spark.apache.org>
Subject: Re: Spark inserting into parquet files with different schema
What is the error you are getting? It would also be awesome if you could try
with Spark 1.5 when the first preview comes out (hopefully early next week).
On Mon, Aug 10, 2015 at 2:36 PM, Michael Armbrust wrote:
> To: Simeon Simeonov
> Cc: user
> Subject: Re: Spark inserting into parquet files with different schema
>
> Older versions of Spark (i.e. when it was still called SchemaRDD instead
> of DataFrame) did not support merging different parquet schemas.
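The merging Michael describes was later exposed in Spark as Parquet schema merging (the `mergeSchema` read option and the `spark.sql.parquet.mergeSchema` setting); it amounts to taking the union of the fields seen across files. A stdlib-only sketch of those semantics, with hypothetical data and no Spark dependency:

```python
# Stdlib-only sketch of Parquet schema-merging semantics: the merged
# schema is the union of every file's fields, and rows are padded with
# None for fields their file lacked. Illustrative only, not Spark code.

def merge_schemas(schemas):
    """Union field names across files, preserving first-seen order."""
    merged = []
    for schema in schemas:
        for field in schema:
            if field not in merged:
                merged.append(field)
    return merged

def conform(rows, schema):
    """Pad each row to the merged schema, using None for missing fields."""
    return [{field: row.get(field) for field in schema} for row in rows]

# Two "files" written with different schemas; the second adds a column.
file_a = [{"id": 1, "name": "alpha"}]
file_b = [{"id": 2, "name": "beta", "tag": "new"}]

schema = merge_schemas([list(file_a[0]), list(file_b[0])])
rows = conform(file_a + file_b, schema)
# schema is ["id", "name", "tag"]; file_a's row gets tag=None.
```

Without merging, a reader that trusts a single file's footer schema never sees fields that exist only in other files, which is consistent with the behavior reported below.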
Sim
From: Michael Armbrust <mich...@databricks.com>
Date: Monday, August 10, 2015 at 2:36 PM
To: Simeon Simeonov <s...@swoop.com>
Cc: user <user@spark.apache.org>
Subject: Re: Spark inserting into parquet files with different schema
Older versions of Spark (i.e. when it was still called SchemaRDD instead of DataFrame) did not support merging different parquet schemas.
Adam, did you find a solution for this?
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Spark-inserting-into-parquet-files-with-different-schema-tp20706p24181.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
The second SchemaRDD does in fact get inserted, but the extra column isn't
present when I try to query it with Spark SQL.
Is there anything I can do to get this working how I'm hoping?
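The symptom above, an inserted row's extra column never appearing in query results, matches a reader pinned to the original file's schema rather than one that merges schemas (what later Spark releases enable with the `mergeSchema` read option). A stdlib-only illustration with hypothetical data:

```python
# Stdlib-only illustration of the reported symptom; not Spark code.
# If the reader keeps only the original schema, a column added by a
# later insert is silently projected away at query time.

original_schema = ["id", "name"]                      # schema of the first write
inserted = [{"id": 2, "name": "beta", "extra": "x"}]  # later write adds "extra"

# Reader pinned to the original schema: the new column disappears.
pinned = [{f: row.get(f) for f in original_schema} for row in inserted]

# Reader that merges schemas: union the fields, keeping the new column.
merged_schema = original_schema + [f for f in inserted[0] if f not in original_schema]
merged = [{f: row.get(f) for f in merged_schema} for row in inserted]
# "extra" is absent from pinned[0] but present in merged[0].
```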