Thanks to everyone who participated in the vote for the Apache Iceberg 0.13.1 RC0 release.
The vote result is:

+1: 4 (binding), 1 (non-binding)
+0: 0 (binding), 0 (non-binding)
-1: 0 (binding), 0 (non-binding)

Therefore, the release candidate passes.

From: Russell Spitzer <russell.spit...@gmail.com>
Reply-To: "dev@iceberg.apache.org" <dev@iceberg.apache.org>
Date: Monday, February 14, 2022 at 1:49 PM
To: Iceberg Dev List <dev@iceberg.apache.org>
Subject: RE: [EXTERNAL] [VOTE] Release Apache Iceberg 0.13.1 RC0

+1 (binding)

Checked sigs, sums, license, static analysis, and build.

On Mon, Feb 14, 2022 at 3:32 PM Daniel Weeks <daniel.c.we...@gmail.com> wrote:

+1 (binding)

Validated sigs/sums/license/build/tests.

-Dan

On Mon, Feb 14, 2022 at 12:41 PM Kyle Bendickson <k...@tabular.io> wrote:

+1 (non-binding)

License checks, plus various smoke tests for create table, update, merge into, deletes, etc. against Java 11 with Spark 3.2 and 3.1.

- Kyle Bendickson

On Mon, Feb 14, 2022 at 12:32 PM Ryan Blue <b...@tabular.io> wrote:

+1 (binding)

* Ran license checks, verified checksum and signature
* Built the project

Thanks, Amogh and Jack, for managing this release!

On Sun, Feb 13, 2022 at 10:22 PM Jack Ye <yezhao...@gmail.com> wrote:

+1 (binding)

Verified signature, checksum, and license. The checksum was generated using the old, buggy release script because it was run on the 0.13.x branch, so it still used the full file path. I have updated it to use the relative file path. If anyone sees a checksum failure, please re-download the checksum file and verify again.

Ran unit tests for all engine versions and JDK versions, plus the AWS integration tests. For the flaky Spark test, given that #4033 fixes the issue and it was not a bug in the source code, I think we can continue without re-cutting a candidate.

Tested basic operations, copy-on-write delete, update, and rewrite data files on AWS EMR with Spark 3.1 and Flink 1.14, and verified fixes #3986 and #4024. I did some basic tests for #4023 (the predicate pushdown fix), but I don't have a large Spark 3.2 installation to further verify the performance. It would be great if anyone else could do some additional verification.

Best,
Jack Ye

On Fri, Feb 11, 2022 at 8:24 PM Manong Karl <abc549...@gmail.com> wrote:

It's flaky. This exception only shows up on one TeamCity agent; changing agents resolves the issue.

On Sat, Feb 12, 2022 at 8:57 AM, Ryan Blue <b...@tabular.io> wrote:

Does that exception fail consistently, or is it a flaky test? We recently fixed another Spark test that was flaky because of sampling and sort order: https://github.com/apache/iceberg/pull/4033

On Thu, Feb 10, 2022 at 7:12 PM Manong Karl <abc549...@gmail.com> wrote:

I got a failure on Spark 3.2: TestMergeOnReadDelete.testDeleteWithSerializableIsolation[catalogName = testhive, implementation = org.apache.iceberg.spark.SparkCatalog, config = {type=hive, default-namespace=default}, format = orc, vectorized = true, distributionMode = none], reported at https://github.com/apache/iceberg/issues/4090. Is this failure just on my end?

--
Ryan Blue
Tabular
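For anyone re-verifying the checksum Jack mentions, below is a minimal sketch of local verification, not the official release-verification procedure. It assumes a local Python environment, and the artifact name in the usage comment is illustrative; it simply tolerates a .sha512 file that records either the full build path (the old script's behavior) or a relative path.

# Minimal sketch: verify a .sha512 checksum file locally, falling back to
# the file's basename when the recorded path (from the old release script)
# does not exist on this machine. Artifact names are illustrative.
import hashlib
import os
import sys

def verify_sha512(checksum_file: str) -> bool:
    with open(checksum_file) as f:
        # Typical format: "<hex digest>  <path/to/artifact>"
        digest_expected, recorded_path = f.read().split(maxsplit=1)
    artifact = recorded_path.strip()
    if not os.path.exists(artifact):
        # Old script recorded the full build path; use the basename instead.
        artifact = os.path.basename(artifact)
    h = hashlib.sha512()
    with open(artifact, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest() == digest_expected

if __name__ == "__main__":
    # e.g. python verify.py apache-iceberg-0.13.1.tar.gz.sha512
    print("OK" if verify_sha512(sys.argv[1]) else "MISMATCH")

Run it from the directory where the artifact and its .sha512 file were downloaded; a MISMATCH after re-downloading the checksum file would indicate a real problem rather than the path issue described above.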