Hi everyone,

I propose that the following RC be released as the official Apache Iceberg
0.11.0 release. The RC has also been reviewed and signed by Ryan Blue.

The commit id is ad78cc6cf259b7a0c66ab5de6675cc005febd939

This corresponds to the tag: apache-iceberg-0.11.0-rc0
* https://github.com/apache/iceberg/commits/apache-iceberg-0.11.0-rc0
* https://github.com/apache/iceberg/tree/apache-iceberg-0.11.0-rc0

The release tarball, signature, and checksums are here:
* https://dist.apache.org/repos/dist/dev/iceberg/apache-iceberg-0.11.0-rc0

You can find the KEYS file here:
* https://dist.apache.org/repos/dist/dev/iceberg/KEYS

Convenience binary artifacts are staged in Nexus. The Maven repository URL
is:
* https://repository.apache.org/content/repositories/orgapacheiceberg-1015

This release includes the following changes:

*High-level features*

   - Core API now supports partition spec and sort order evolution
   - Spark 3 now supports the following SQL extensions (see the sketch
   after this list):
      - MERGE INTO
      - DELETE FROM
      - ALTER TABLE ... ADD/DROP PARTITION FIELD
      - ALTER TABLE ... WRITE ORDERED BY
      - CALL (to invoke stored procedures)
   - Flink now supports streaming reads, CDC writes (experimental), and
   filter pushdown
   - AWS module is added for better integration with AWS, including AWS
   Glue catalog <https://aws.amazon.com/glue> support and a dedicated S3
   FileIO implementation
   - Nessie module is added to support integration with Project Nessie
   <https://projectnessie.org>
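
For reviewers who want to exercise the new Spark 3 SQL extensions against
this RC, here is a minimal sketch in Java. It assumes the Spark 3 runtime
and extensions jars from the staged repository are on the classpath, and
that the placeholder tables already exist; the catalog, database, and table
names (demo, db, target, updates) and the snapshot id are placeholders, not
anything shipped in the release.

    import org.apache.spark.sql.SparkSession;

    public class SqlExtensionsSmokeTest {
      public static void main(String[] args) {
        // Register the Iceberg SQL extensions and a Hadoop-backed catalog
        // named "demo" (all names here are placeholders for local testing)
        SparkSession spark = SparkSession.builder()
            .appName("iceberg-0.11.0-rc0-smoke-test")
            .master("local[2]")
            .config("spark.sql.extensions",
                "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
            .config("spark.sql.catalog.demo", "org.apache.iceberg.spark.SparkCatalog")
            .config("spark.sql.catalog.demo.type", "hadoop")
            .config("spark.sql.catalog.demo.warehouse", "/tmp/iceberg-warehouse")
            .getOrCreate();

        // MERGE INTO and DELETE FROM are provided by the new SQL extensions
        spark.sql("MERGE INTO demo.db.target t USING demo.db.updates u "
            + "ON t.id = u.id "
            + "WHEN MATCHED THEN UPDATE SET t.value = u.value "
            + "WHEN NOT MATCHED THEN INSERT *");
        spark.sql("DELETE FROM demo.db.target WHERE value IS NULL");

        // Stored procedures are invoked with CALL in the catalog's system
        // namespace; rollback_to_snapshot is shown here with a placeholder
        // snapshot id
        spark.sql("CALL demo.system.rollback_to_snapshot('db.target', 1L)");

        spark.stop();
      }
    }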

*Important bug fixes*

   - #1981 fixes date and timestamp transforms
   - #2091 fixes Parquet vectorized reads when column types are promoted
   - #1962 fixes Parquet vectorized position reader
   - #1991 fixes Avro schema conversions to preserve field docs
   - #1811 makes refreshing the Spark cache optional
   - #1798 fixes a read failure when encountering duplicate data file
   entries
   - #1785 fixes invalidation of metadata tables in CachingCatalog
   - #1784 fixes resolution of metadata tables for SparkSession tables

*Other notable changes*

   - NaN counter is added to format v2 metrics
   - Shared catalog properties are added to the core library to standardize
   catalog-level configuration
   - Spark and Flink now support dynamically loading custom `Catalog`
   and `FileIO` implementations (see the sketch after this list)
   - Spark now supports loading tables with file paths via HadoopTables
   - Spark 2 now supports loading tables from other catalogs, as Spark 3 does
   - Spark 3 now supports catalog names in DataFrameReader when using
   Iceberg as a format
   - Hive now supports INSERT INTO, case-insensitive queries, projection
   pushdown, CREATE DDL with a schema, and automatic type conversion
   - ORC now supports reading tinyint, smallint, char, and varchar types
   - Hadoop catalog now supports role-based access for table listing
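
The dynamic `Catalog` and `FileIO` loading mentioned above can be tried from
Spark through the shared catalog properties. Below is a minimal sketch in
Java, assuming the AWS module and its SDK dependencies are on the classpath;
the catalog name, bucket, and table are placeholders.

    import org.apache.spark.sql.SparkSession;

    public class CustomCatalogLoading {
      public static void main(String[] args) {
        // A catalog named "glue" (placeholder) backed by a custom Catalog
        // implementation and a custom FileIO, both loaded dynamically via
        // the shared catalog properties instead of a built-in catalog type
        SparkSession spark = SparkSession.builder()
            .appName("iceberg-custom-catalog")
            .master("local[2]")
            .config("spark.sql.catalog.glue", "org.apache.iceberg.spark.SparkCatalog")
            .config("spark.sql.catalog.glue.catalog-impl",
                "org.apache.iceberg.aws.glue.GlueCatalog")
            .config("spark.sql.catalog.glue.io-impl",
                "org.apache.iceberg.aws.s3.S3FileIO")
            .config("spark.sql.catalog.glue.warehouse", "s3://my-bucket/warehouse")
            .getOrCreate();

        // Tables in the dynamically loaded catalog are addressable by name
        spark.sql("SELECT * FROM glue.db.events LIMIT 10").show();

        spark.stop();
      }
    }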

Please download, verify, and test.

Please vote in the next 72 hours.

[ ] +1 Release this as Apache Iceberg 0.11.0
[ ] +0
[ ] -1 Do not release this because...
