I think your Hive table is using CompressionCodecName, but the
parquet-hadoop-bundle.jar on the Spark classpath is not the correct version.
At 2018-12-21 12:07:22, "Jiaan Geng" wrote:
>I think your Hive table is using CompressionCodecName, but the
>parquet-hadoop-bundle.jar on the Spark classpath is not the correct version.
--
Sent from: http://apache-spark-user-list.1001560.n3.nabble.com/
The parquet-hadoop-bundle jar is imported into the Spark Hive project. When you
compress data with ZSTD, CompressionCodecName may be loaded preferentially from
the parquet-hadoop-bundle, so the enum constant
parquet.hadoop.metadata.CompressionCodecName.ZSTD can't be found.
>
> 18/12/20 10:35:28 ERROR Executor: Excep
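One way to confirm this kind of classpath conflict is to probe, at runtime, which jar actually provides the class. A minimal sketch (the probe class itself is illustrative and not from this thread; only the class name parquet.hadoop.metadata.CompressionCodecName comes from the error above):

```java
// Sketch: report which jar a class is loaded from, to spot a stale
// parquet-hadoop-bundle shadowing the expected parquet jars.
public class CodecProbe {
    // Returns where the named class was loaded from, or a not-found
    // message if it is absent from the classpath.
    static String probe(String className) {
        try {
            Class<?> c = Class.forName(className);
            // Bootstrap-loaded classes (e.g. java.lang.*) have no CodeSource.
            Object location = (c.getProtectionDomain().getCodeSource() == null)
                    ? "bootstrap classloader"
                    : c.getProtectionDomain().getCodeSource().getLocation();
            return className + " loaded from: " + location;
        } catch (ClassNotFoundException e) {
            return className + " not found on classpath";
        }
    }

    public static void main(String[] args) {
        // The class named in the error in this thread; run this with the
        // same classpath as your Spark job to see which jar wins.
        System.out.println(probe("parquet.hadoop.metadata.CompressionCodecName"));
    }
}
```

If the reported jar is the parquet-hadoop-bundle rather than the parquet jar you expect, the bundle is being resolved first and needs to be replaced or reordered on the Spark classpath.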