Unless it's going away entirely, and I don't think it is, we at least have to do this to get off the fork of Hive that's being used now. I do think we want to keep Hive from getting into the core, though -- see comments on the PR.
On Tue, Jan 15, 2019 at 11:44 AM Xiao Li <gatorsm...@gmail.com> wrote:
>
> Hi, Yuming,
>
> Thank you for your contributions! The community aims at reducing the
> dependence on Hive. Currently, most Spark users are not using Hive. The
> changes look risky to me.
>
> To support Hadoop 3.x, we just need to resolve this JIRA:
> https://issues.apache.org/jira/browse/HIVE-16391
>
> Cheers,
>
> Xiao
>
> Yuming Wang <wgy...@gmail.com> wrote on Tue, Jan 15, 2019 at 8:41 AM:
>>
>> Dear Spark Developers and Users,
>>
>> Hyukjin and I plan to upgrade the built-in Hive from 1.2.1-spark2 to 2.3.4
>> to solve some critical issues, such as supporting Hadoop 3.x and fixing
>> some ORC and Parquet issues. This is the list:
>>
>> Hive issues:
>>
>> [SPARK-26332][HIVE-10790] Spark sql write orc table on viewFS throws exception
>>
>> [SPARK-25193][HIVE-12505] insert overwrite doesn't throw exception when drop old data fails
>>
>> [SPARK-26437][HIVE-13083] Decimal data becomes bigint to query, unable to query
>>
>> [SPARK-25919][HIVE-11771] Date value corrupts when tables are "ParquetHiveSerDe" formatted and target table is Partitioned
>>
>> [SPARK-12014][HIVE-11100] Spark SQL query containing semicolon is broken in Beeline
>>
>> Spark issues:
>>
>> [SPARK-23534] Spark run on Hadoop 3.0.0
>>
>> [SPARK-20202] Remove references to org.spark-project.hive
>>
>> [SPARK-18673] Dataframes doesn't work on Hadoop 3.x; Hive rejects Hadoop version
>>
>> [SPARK-24766] CreateHiveTableAsSelect and InsertIntoHiveDir won't generate decimal column stats in parquet
>>
>> Since the code for the hive-thriftserver module has changed too much for
>> this upgrade, I split it into two PRs for easier review.
>>
>> The first PR does not contain the hive-thriftserver changes. Please
>> ignore the failed tests in hive-thriftserver.
>>
>> The second PR contains the complete changes.
>>
>> I have created a Spark distribution for Apache Hadoop 2.7; you can
>> download it via Google Drive or Baidu Pan.
>>
>> Please help review and test. Thanks.
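For anyone who wants to exercise a different Hive metastore client version against an existing Spark build while reviewing this, a minimal sketch follows. It uses the standard spark.sql.hive.metastore.version and spark.sql.hive.metastore.jars configs; the version value and app name are illustrative, and the versions your build actually accepts depend on the Spark release you are running.

```scala
// Sketch: pointing a Spark session at a non-builtin Hive metastore client
// version. The configs are the documented spark.sql.hive.metastore.* options;
// the chosen version is illustrative and must be one your Spark build supports.
import org.apache.spark.sql.SparkSession

object HiveMetastoreVersionCheck {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("hive-metastore-version-check")
      // Metastore client version Spark should speak to.
      .config("spark.sql.hive.metastore.version", "2.3.4")
      // Resolve the matching Hive client jars from Maven rather than using
      // the jars bundled with the Spark distribution ("builtin").
      .config("spark.sql.hive.metastore.jars", "maven")
      .enableHiveSupport()
      .getOrCreate()

    // A trivial query that forces a metastore connection.
    spark.sql("SHOW DATABASES").show()
    spark.stop()
  }
}
```

Submitting this with spark-submit against the proposed distribution is one way to sanity-check the metastore path without waiting for the full PR review.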