+1, binding.
Verified the signature, license, checksum and tests. Also ran AWS
integration tests and all passed.
-Jack

On Tue, Jan 26, 2021 at 9:56 AM Daniel Weeks <dwe...@apache.org> wrote:

> +1 (binding)
>
> Verified signatures, checksums, license, build and tests.
>
> On Tue, Jan 26, 2021 at 9:11 AM Ryan Blue <rb...@netflix.com.invalid>
> wrote:
>
>> OpenInx, we can update documentation any time. It should not block the
>> release, just open a PR and we'll merge it and push docs after that.
>>
>> On Mon, Jan 25, 2021 at 9:01 PM OpenInx <open...@gmail.com> wrote:
>>
>>> Hi dev
>>>
>>> I'd like to include this patch in release 0.11.0 because it documents
>>> the new Flink features.  I'm sorry that I did not update the Flink
>>> documentation in time when the feature code was merged, but I think it's
>>> worth merging this documentation PR when we release Iceberg 0.11.0; it
>>> helps users who want to use those new features, such as the streaming
>>> reader, the rewrite data files action, and write distribution to cluster
>>> data.  (I will keep those documents updated so that we won't need to roll
>>> back in the next release.)
>>>
>>> Thanks.
>>>
>>>
>>> On Tue, Jan 26, 2021 at 10:17 AM Anton Okolnychyi
>>> <aokolnyc...@apple.com.invalid> wrote:
>>>
>>>> +1 (binding)
>>>>
>>>> I did local tests with Spark 3.0.1. I think we should also note that the
>>>> support for DELETE FROM and MERGE INTO in Spark is experimental.
>>>>
>>>> Thanks,
>>>> Anton
>>>>
>>>> On 22 Jan 2021, at 15:26, Jack Ye <yezhao...@gmail.com> wrote:
>>>>
>>>> Hi everyone,
>>>>
>>>> I propose the following RC to be released as the official Apache
>>>> Iceberg 0.11.0 release. The RC is also reviewed and signed by Ryan Blue.
>>>>
>>>> The commit id is ad78cc6cf259b7a0c66ab5de6675cc005febd939
>>>>
>>>> This corresponds to the tag: apache-iceberg-0.11.0-rc0
>>>> * https://github.com/apache/iceberg/commits/apache-iceberg-0.11.0-rc0
>>>> * https://github.com/apache/iceberg/tree/apache-iceberg-0.11.0-rc0
>>>>
>>>> The release tarball, signature, and checksums are here:
>>>> * https://dist.apache.org/repos/dist/dev/iceberg/apache-iceberg-0.11.0-rc0
>>>>
>>>> You can find the KEYS file here:
>>>> * https://dist.apache.org/repos/dist/dev/iceberg/KEYS
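
For anyone following along with verification, the signature and checksum
checks could look roughly like the sketch below. This is a minimal
illustration, not the project's official procedure: the `curl`/`gpg` steps
against the dist.apache.org URLs above are shown as comments (the dev
artifacts are typically removed once the vote passes), and the checksum step
is demonstrated on a stand-in file.

```shell
# Sketch of the verification steps, assuming gpg and GNU coreutils sha512sum
# are installed. Against the real RC you would first fetch the artifacts and
# keys from the URLs in the announcement:
#   curl -LO https://dist.apache.org/repos/dist/dev/iceberg/KEYS
#   gpg --import KEYS
#   gpg --verify apache-iceberg-0.11.0.tar.gz.asc apache-iceberg-0.11.0.tar.gz
# The checksum check itself, demonstrated here on a stand-in file:
printf 'example release artifact\n' > artifact.tar.gz   # stand-in for the tarball
sha512sum artifact.tar.gz > artifact.tar.gz.sha512      # the release publishes this file
sha512sum -c artifact.tar.gz.sha512                     # prints "artifact.tar.gz: OK"
```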
>>>>
>>>> Convenience binary artifacts are staged in Nexus. The Maven repository
>>>> URL is:
>>>> * https://repository.apache.org/content/repositories/orgapacheiceberg-1015
>>>>
>>>> This release includes the following changes:
>>>>
>>>> *High-level features*
>>>>
>>>>    - Core API now supports partition spec and sort order evolution
>>>>    - Spark 3 now supports the following SQL extensions:
>>>>       - MERGE INTO
>>>>       - DELETE FROM
>>>>       - ALTER TABLE ... ADD/DROP PARTITION
>>>>       - ALTER TABLE ... WRITE ORDERED BY
>>>>       - invoke stored procedures using CALL
>>>>    - Flink now supports streaming reads, CDC writes (experimental),
>>>>    and filter pushdown
>>>>    - An AWS module is added for better integration with AWS, including
>>>>    AWS Glue catalog <https://aws.amazon.com/glue> support and a dedicated
>>>>    S3 FileIO implementation
>>>>    - A Nessie module is added to support integration with Project Nessie
>>>>    <https://projectnessie.org/>
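
As a rough illustration of the new Spark 3 SQL extensions listed above (a
sketch only: `prod.db.target` and `prod.db.source` are hypothetical table
names, and running statements like these requires a Spark 3 session
configured with the Iceberg SQL extensions):

```sql
-- Hypothetical tables; assumes Spark 3 with Iceberg SQL extensions enabled.
MERGE INTO prod.db.target t
USING prod.db.source s
ON t.id = s.id
WHEN MATCHED THEN UPDATE SET *
WHEN NOT MATCHED THEN INSERT *;

-- Row-level delete against an Iceberg table.
DELETE FROM prod.db.target WHERE ts < date_sub(current_date(), 90);

-- Set a write sort order for the table.
ALTER TABLE prod.db.target WRITE ORDERED BY category, id;
```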
>>>>
>>>> *Important bug fixes*
>>>>
>>>>    - #1981 fixes date and timestamp transforms
>>>>    - #2091 fixes Parquet vectorized reads when column types are
>>>>    promoted
>>>>    - #1962 fixes Parquet vectorized position reader
>>>>    - #1991 fixes Avro schema conversions to preserve field docs
>>>>    - #1811 makes refreshing Spark cache optional
>>>>    - #1798 fixes read failure when encountering duplicate entries of
>>>>    data files
>>>>    - #1785 fixes invalidation of metadata tables in CachingCatalog
>>>>    - #1784 fixes resolving of SparkSession table's metadata tables
>>>>
>>>> *Other notable changes*
>>>>
>>>>    - NaN counter is added to format v2 metrics
>>>>    - Shared catalog properties are added in the core library to
>>>>    standardize catalog-level configurations
>>>>    - Spark and Flink now support dynamically loading custom
>>>>    `Catalog` and `FileIO` implementations
>>>>    - Spark now supports loading tables with file paths via HadoopTables
>>>>    - Spark 2 now supports loading tables from other catalogs, as
>>>>    Spark 3 does
>>>>    - Spark 3 now supports catalog names in DataFrameReader when using
>>>>    Iceberg as a format
>>>>    - Hive now supports INSERT INTO, case-insensitive queries, projection
>>>>    pushdown, CREATE DDL with schema, and automatic type conversion
>>>>    - ORC now supports reading tinyint, smallint, char, and varchar types
>>>>    - Hadoop catalog now supports role-based access for table listing
>>>>
>>>> Please download, verify, and test.
>>>>
>>>> Please vote in the next 72 hours.
>>>>
>>>> [ ] +1 Release this as Apache Iceberg 0.11.0
>>>> [ ] +0
>>>> [ ] -1 Do not release this because...
>>>>
>>>>
>>>>
>>
>> --
>> Ryan Blue
>> Software Engineer
>> Netflix
>>
>
