Hi all,

I understand that Parquet supports schema evolution natively in the
format; however, I'm not sure whether Spark supports this.

I'm saving a SchemaRDD to a Parquet file, registering it as a table, and
then doing an insertInto with a second SchemaRDD that has an extra column.

The second SchemaRDD does in fact get inserted, but the extra column isn't
present when I try to query it with Spark SQL.
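For reference, here's roughly what I'm doing (a minimal sketch against the
Spark 1.x SchemaRDD API; the path, table name, and case classes are just
illustrative):

import org.apache.spark.sql.SQLContext

val sqlContext = new SQLContext(sc)  // sc: an existing SparkContext
import sqlContext.createSchemaRDD

case class V1(id: Int, name: String)
case class V2(id: Int, name: String, extra: String)

// save the first SchemaRDD as Parquet and register it as a table
sc.parallelize(Seq(V1(1, "a"))).saveAsParquetFile("/tmp/events.parquet")
sqlContext.parquetFile("/tmp/events.parquet").registerTempTable("events")

// insert a second SchemaRDD that carries an extra column
sc.parallelize(Seq(V2(2, "b", "x"))).insertInto("events")

// the new row comes back, but the 'extra' column is missing
sqlContext.sql("SELECT * FROM events").collect().foreach(println)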

Is there anything I can do to get this working the way I'm hoping?
