Tuesday, June 13, 2017 1:54 AM
To: Angel Francisco Orta
Cc: Yong Zhang; user@spark.apache.org
Subject: Re: Parquet file generated by Spark, but not compatible read by Hive
Try setting the following param:
conf.set("spark.sql.hive.convertMetastoreParquet","false")
On Tue, Jun 13, 2017 at 3:34 PM, Angel Francisco Orta <
angel.francisco.o...@gmail.com> wrote:
Hello,
Do you use df.write, or do you insert with hivecontext.sql("insert into ...")?
Angel.
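For reference, the two write paths being asked about might look roughly like this on Spark 1.6 (the table, path, and DataFrame names below are made-up examples, not from this thread):

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext

val sc = new SparkContext(new SparkConf().setAppName("write-paths"))
val hiveContext = new HiveContext(sc)
val df = hiveContext.read.json("hdfs:///staging/events.json")  // hypothetical source

// (1) DataFrame API: write Parquet partition folders straight to HDFS;
//     the partitions then have to be registered in the metastore separately.
df.write.partitionBy("brand").parquet("hdfs:///warehouse/mydb.db/events")

// (2) HiveContext SQL: insert through the metastore table, so Hive
//     controls the file layout and partition registration.
hiveContext.sql(
  "INSERT INTO TABLE mydb.events PARTITION (brand='a') " +
  "SELECT id, value FROM staging_events WHERE brand = 'a'")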
On Jun 12, 2017, 11:07 p.m., "Yong Zhang" wrote:
We are using Spark 1.6.2 as ETL to generate the parquet files for one dataset,
partitioned by "brand" (a string representing the brand in this dataset).
After the partition folders such as "brand=a" are generated in HDFS, we add the
partitions in Hive.
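The "add the partitions" step could look roughly like this (a sketch only; the database, table, and HDFS path names are hypothetical):

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext

val hiveContext = new HiveContext(new SparkContext(new SparkConf().setAppName("add-partitions")))

// Register a partition folder that the Spark ETL job wrote, e.g. .../brand=a.
hiveContext.sql(
  """ALTER TABLE mydb.events ADD IF NOT EXISTS
    |PARTITION (brand='a')
    |LOCATION 'hdfs:///warehouse/mydb.db/events/brand=a'""".stripMargin)

// Alternatively, discover all partition folders at once:
// hiveContext.sql("MSCK REPAIR TABLE mydb.events")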
The Hive version is 1.2.1 (In fact