Thanks everyone. I am closing the vote and will publish the voting result
in a separate email.

On Thu, Sep 11, 2025 at 2:09 AM Fokko Driesprong <fo...@apache.org> wrote:

> +1 from my end as well.
>
> For added-rows, I noticed that #14048
> <https://github.com/apache/iceberg/pull/14048> makes it required, while
> the code for the SnapshotParser allows it to be omitted
> <https://github.com/apache/iceberg/blob/720ef99720a1c59e4670db983c951243dffc4f3e/core/src/main/java/org/apache/iceberg/SnapshotParser.java#L174-L175>.
> But I don't think we want to check for the table version there.
>
> - Checked signatures and checksums
> - Ran license checks
> - Did some tests against PyIceberg
>
> Kind regards,
> Fokko
>
> On Thu, 11 Sep 2025 at 09:12, Christian Thiel <
> christian.t.b...@gmail.com> wrote:
>
>> If we bring back `added-rows`, I am also +1 (non-binding) for this
>> release.
>>
>> On Wed, 10 Sept 2025 at 22:43, Ryan Blue <rdb...@gmail.com> wrote:
>>
>>> +1
>>>
>>> * Validated signature and checksum
>>> * Ran license checks
>>> * Verified that the convenience binary works in Java 11
>>>
>>> On Wed, Sep 10, 2025 at 2:20 PM Ryan Blue <rdb...@gmail.com> wrote:
>>>
>>>> I think we should continue to use `added-rows` as well. We can update
>>>> the spec to explain that it should be the number of rows that will be
>>>> assigned IDs. It would be nice to have a slightly better name, but I don't
>>>> think it is worth the breaking change.
>>>>
>>>> On Wed, Sep 10, 2025 at 1:22 PM Steven Wu <stevenz...@gmail.com> wrote:
>>>>
>>>>> Thanks, Russel!
>>>>>
>>>>> Since we also have 1.8 and 1.9 using the `added-rows` field, we
>>>>> probably just want to bring back the same field `added-rows` as it is. In
>>>>> the spec, we can clarify that it is ONLY used for incrementing the
>>>>> `next-row-id` in the table metadata. It shouldn't be used as the counter
>>>>> for the actual number of added rows, as the number can include added rows
>>>>> and some existing rows.
>>>>>
>>>>> Maybe in V4, we can consider changing it to `assigned-rows` to reflect
>>>>> its true purpose and the spec description.
>>>>>
>>>>> In summary, we can bring back `added-rows` as a snapshot field in the
>>>>> spec. There won't be any behavior change in 1.10 compared to 1.8 or 1.9.
>>>>> We can proceed with the 1.10.0 release. Any concerns?
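[Editor's note: a minimal sketch of the `next-row-id` bookkeeping described above. This is illustrative only, not Iceberg's actual implementation; the class and field names (`NextRowIdSketch`, `nextRowId`, `assignFirstRowId`) are made up for the example.]

```java
// Illustrative sketch: how a catalog could advance the table's `next-row-id`
// from a snapshot's `added-rows` value without reading manifest lists.
public class NextRowIdSketch {
    static long nextRowId = 100L; // hypothetical current table-metadata value

    // The committing snapshot's first row ID is the current next-row-id;
    // next-row-id then advances by added-rows. Note added-rows may exceed the
    // count of truly new rows, since it can also cover existing rows carried
    // in the snapshot's added manifest files.
    static long assignFirstRowId(long addedRows) {
        long firstRowId = nextRowId;
        nextRowId += addedRows;
        return firstRowId;
    }

    public static void main(String[] args) {
        long first = assignFirstRowId(25L);
        System.out.println(first + "," + nextRowId); // first row ID, new next-row-id
    }
}
```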
>>>>>
>>>>> On Wed, Sep 10, 2025 at 12:46 PM Russell Spitzer <
>>>>> russell.spit...@gmail.com> wrote:
>>>>>
>>>>>> As long as we don't change the name, we are good for 1.10; if we want
>>>>>> to change the name, we will need to patch that first, IMHO. I think we
>>>>>> just need to document in the spec that "added-rows" is directly related
>>>>>> to row lineage, and note that it needs to be at minimum the number of
>>>>>> added rows in the snapshot but can be larger, with our default
>>>>>> recommendation being to just add all of the added and existing rows in
>>>>>> all added manifest files.
>>>>>>
>>>>>> On Wed, Sep 10, 2025 at 12:37 PM Steven Wu <stevenz...@gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>> Adding the information back seems to be the right thing to do here.
>>>>>>> We can start a separate thread on how to move forward properly, as it is
>>>>>>> probably more complicated than just adding the field back in the spec.
>>>>>>> E.g., we may want to use a different field name like `assigned-rows` to
>>>>>>> reflect the spec language, as it includes both added rows and existing 
>>>>>>> rows
>>>>>>> in the *new/added* manifest files in the snapshot. Snapshot JSON
>>>>>>> parser can read both old `added-rows` and new `assigned-rows` fields for
>>>>>>> backward compatibility.
>>>>>>>
>>>>>>> With the direction of adding the field back in the spec, I feel this
>>>>>>> issue shouldn't be a blocker for the 1.10.0 release. Any concerns?
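[Editor's note: the backward-compatibility idea above could look like the sketch below. This is hypothetical: `assigned-rows` is only a proposed name in this thread, and the parsing is simulated with a plain `Map` rather than Iceberg's real SnapshotParser.]

```java
import java.util.Map;

// Hypothetical compatibility shim: a snapshot parser could prefer a new
// `assigned-rows` field while falling back to the legacy `added-rows` name.
public class RowCountFieldCompat {
    static Long readRowCount(Map<String, Long> snapshotJson) {
        if (snapshotJson.containsKey("assigned-rows")) {
            return snapshotJson.get("assigned-rows");
        }
        // Legacy field name; may be absent entirely for older table versions.
        return snapshotJson.get("added-rows");
    }

    public static void main(String[] args) {
        System.out.println(readRowCount(Map.of("added-rows", 10L)));   // old writer
        System.out.println(readRowCount(Map.of("assigned-rows", 7L))); // new writer
    }
}
```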
>>>>>>>
>>>>>>> On Wed, Sep 10, 2025 at 10:16 AM Christian Thiel <
>>>>>>> christian.t.b...@gmail.com> wrote:
>>>>>>>
>>>>>>>> Quick summary of the discussion in the Catalog Sync today:
>>>>>>>> We had a broad consensus that removing the "added-rows" field was a
>>>>>>>> mistake. Especially for the REST API, it is required for correct 
>>>>>>>> behaviour,
>>>>>>>> and it would be unfortunate to deviate the REST Object from the spec 
>>>>>>>> object
>>>>>>>> too much. As a result, it makes sense to revert the change in
>>>>>>>> https://github.com/apache/iceberg/pull/12781 and add "added-rows"
>>>>>>>> back as a field to the Snapshot.
>>>>>>>>
>>>>>>>> There has been discussion around whether this field should be
>>>>>>>> optional or not. If there are currently no V3 Tables out there that 
>>>>>>>> don't
>>>>>>>> have this field, it would probably be best to add it as required.
>>>>>>>> If anyone is aware of a tool creating v3 tables already without
>>>>>>>> this field, please let us know here. Iceberg Java does write the
>>>>>>>> "added-rows" field to this date, even though it's temporarily missing
>>>>>>>> from the spec ;)
>>>>>>>> Tables created with the Java SDK are thus compatible with the
>>>>>>>> planned change.
>>>>>>>>
>>>>>>>> On Wed, 10 Sept 2025 at 16:26, Russell Spitzer <
>>>>>>>> russell.spit...@gmail.com> wrote:
>>>>>>>>
>>>>>>>>> I think ... we would just add added-rows back into the snapshot to
>>>>>>>>> fix this then? Otherwise we would have to require catalogs to compute
>>>>>>>>> added rows by reading the manifestList.
>>>>>>>>>
>>>>>>>>> I think we forgot there could be a snapshot that would be added
>>>>>>>>> to the base metadata via REST serialization rather than directly,
>>>>>>>>> programmatically, from other parts of the code base. The change was
>>>>>>>>> initially made because the "calculation" for this property was being
>>>>>>>>> done in the snapshot producer anyway, so we no longer needed the
>>>>>>>>> value to be passed through some other means. The code path in
>>>>>>>>> SnapshotParser was effectively being bypassed.
>>>>>>>>>
>>>>>>>>> On Wed, Sep 10, 2025 at 6:23 AM Christian Thiel <
>>>>>>>>> christian.t.b...@gmail.com> wrote:
>>>>>>>>>
>>>>>>>>>> -1 (non-binding)
>>>>>>>>>>
>>>>>>>>>> Dear all, I think I have found a blocker for this RC.
>>>>>>>>>>
>>>>>>>>>> In https://github.com/apache/iceberg/pull/12781 we removed
>>>>>>>>>> the "added-rows" field from snapshots. However, in Java we have not
>>>>>>>>>> made this change. The field is still serialized, which is also
>>>>>>>>>> tested in `testJsonConversionWithRowLineage`. This is the first
>>>>>>>>>> thing we should fix.
>>>>>>>>>>
>>>>>>>>>> Secondly, removing the field from the serialization would break
>>>>>>>>>> the REST Spec for v3 tables. The Catalog needs to know how many rows 
>>>>>>>>>> have
>>>>>>>>>> been added to update the `next-row-id` of the TableMetadata without 
>>>>>>>>>> reading
>>>>>>>>>> the Manifest Lists. We have similar information available in the 
>>>>>>>>>> Snapshot
>>>>>>>>>> summary, but I don't think using snapshot summary information to 
>>>>>>>>>> update
>>>>>>>>>> next-row-id has been discussed before.
>>>>>>>>>>
>>>>>>>>>> I hope we can pick up the second point in the catalog sync today.
>>>>>>>>>>
>>>>>>>>>> On Tue, 9 Sept 2025 at 18:31, Steve <hongyue.apa...@gmail.com>
>>>>>>>>>> wrote:
>>>>>>>>>>
>>>>>>>>>>> +1 (non-binding)
>>>>>>>>>>> Verified signatures and checksums, ran RAT checks, and built
>>>>>>>>>>> locally with JDK 17
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On Mon, Sep 8, 2025 at 2:33 PM Drew <img...@gmail.com> wrote:
>>>>>>>>>>> >
>>>>>>>>>>> > +1 (non-binding)
>>>>>>>>>>> >
>>>>>>>>>>> > verified signature and checksums
>>>>>>>>>>> > verified RAT license check
>>>>>>>>>>> > verified build/tests passing
>>>>>>>>>>> > ran some manual tests with GlueCatalog
>>>>>>>>>>> >
>>>>>>>>>>> > - Drew
>>>>>>>>>>> >
>>>>>>>>>>> >
>>>>>>>>>>> > On Mon, Sep 8, 2025 at 7:54 AM Jacky Lee <qcsd2...@gmail.com>
>>>>>>>>>>> wrote:
>>>>>>>>>>> >>
>>>>>>>>>>> >> +1 (non-binding)
>>>>>>>>>>> >>
>>>>>>>>>>> >> Built and tested Spark 4.0.1 and Flink 2.0 on JDK17,
>>>>>>>>>>> including unit
>>>>>>>>>>> >> tests, basic insert/read operations, and metadata validation.
>>>>>>>>>>> >>
>>>>>>>>>>> >> Thanks,
>>>>>>>>>>> >> Jacky Lee
>>>>>>>>>>> >>
>>>>>>>>>>> >> Renjie Liu <liurenjie2...@gmail.com> wrote on Mon, Sep 8, 2025 at 16:23:
>>>>>>>>>>> >> >
>>>>>>>>>>> >> > +1 (binding)
>>>>>>>>>>> >> >
>>>>>>>>>>> >> > Ran the following checks and tests:
>>>>>>>>>>> >> > 1. Verified checksum
>>>>>>>>>>> >> > 2. Verified signature
>>>>>>>>>>> >> > 3. Ran dev/check-license
>>>>>>>>>>> >> > 4. Ran `gradlew build`
>>>>>>>>>>> >> >
>>>>>>>>>>> >> > All passed.
>>>>>>>>>>> >> >
>>>>>>>>>>> >> > On Sun, Sep 7, 2025 at 10:36 PM Steven Wu <
>>>>>>>>>>> stevenz...@gmail.com> wrote:
>>>>>>>>>>> >> >>
>>>>>>>>>>> >> >> +1 (binding)
>>>>>>>>>>> >> >>
>>>>>>>>>>> >> >> Verified signature, checksum, license
>>>>>>>>>>> >> >>
>>>>>>>>>>> >> >>
>>>>>>>>>>> >> >> Ran build successfully (except for a couple of Spark
>>>>>>>>>>> extension tests due to my env)
>>>>>>>>>>> >> >>
>>>>>>>>>>> >> >>
>>>>>>>>>>> >> >> Ran Spark 4.0 SQL with V3 format and Java 21
>>>>>>>>>>> >> >>
>>>>>>>>>>> >> >> - Insert
>>>>>>>>>>> >> >> - Update carries over row id and sets snapshot seq num
>>>>>>>>>>> correctly
>>>>>>>>>>> >> >> - Select with _row_id and _last_updated_sequence_number
>>>>>>>>>>> >> >>
>>>>>>>>>>> >> >>
>>>>>>>>>>> >> >> Ran Flink 2.0 SQL tests with the V2 format and Java 21
>>>>>>>>>>> >> >> - Insert
>>>>>>>>>>> >> >> - Streaming read
>>>>>>>>>>> >> >>
>>>>>>>>>>> >> >> Thanks,
>>>>>>>>>>> >> >> Steven
>>>>>>>>>>> >> >>
>>>>>>>>>>> >> >>
>>>>>>>>>>> >> >> On Sat, Sep 6, 2025 at 10:19 PM Yuya Ebihara <
>>>>>>>>>>> yuya.ebih...@starburstdata.com> wrote:
>>>>>>>>>>> >> >>>
>>>>>>>>>>> >> >>> +1 (non-binding)
>>>>>>>>>>> >> >>>
>>>>>>>>>>> >> >>> Confirmed that Trino CI is green in Trino PR #25795.
>>>>>>>>>>> >> >>> It runs tests against several catalogs, including HMS,
>>>>>>>>>>> Glue, JDBC (PostgreSQL), REST (Polaris, Unity, S3 Tables, Tabular), 
>>>>>>>>>>> Nessie,
>>>>>>>>>>> and Snowflake.
>>>>>>>>>>> >> >>>
>>>>>>>>>>> >> >>> Yuya
>>>>>>>>>>> >> >>>
>>>>>>>>>>> >> >>> On Sun, Sep 7, 2025 at 1:38 PM Aihua Xu <
>>>>>>>>>>> aihu...@gmail.com> wrote:
>>>>>>>>>>> >> >>>>
>>>>>>>>>>> >> >>>> I have verified the signature and checksum, completed
>>>>>>>>>>> the build and unit tests, and ran basic Spark table creation and 
>>>>>>>>>>> queries.
>>>>>>>>>>> >> >>>>
>>>>>>>>>>> >> >>>> I also executed the tests against our Snowflake internal
>>>>>>>>>>> test suite. One test failure was observed, related to snapshot 
>>>>>>>>>>> expiry,
>>>>>>>>>>> caused by Iceberg PR #13614 — “Fix incorrect selection of 
>>>>>>>>>>> incremental
>>>>>>>>>>> cleanup in expire snapshots.” I believe our test should be updated 
>>>>>>>>>>> to
>>>>>>>>>>> reflect the behavior introduced by this fix.
>>>>>>>>>>> >> >>>>
>>>>>>>>>>> >> >>>> +1 (non-binding).
>>>>>>>>>>> >> >>>>
>>>>>>>>>>> >> >>>>
>>>>>>>>>>> >> >>>>
>>>>>>>>>>> >> >>>> On Fri, Sep 5, 2025 at 11:50 AM Steven Wu <
>>>>>>>>>>> stevenz...@gmail.com> wrote:
>>>>>>>>>>> >> >>>>>
>>>>>>>>>>> >> >>>>> Hi Everyone,
>>>>>>>>>>> >> >>>>>
>>>>>>>>>>> >> >>>>> I propose that we release the following RC as the
>>>>>>>>>>> official Apache Iceberg 1.10.0 release.
>>>>>>>>>>> >> >>>>>
>>>>>>>>>>> >> >>>>> The commit ID is
>>>>>>>>>>> 2114bf631e49af532d66e2ce148ee49dd1dd1f1f
>>>>>>>>>>> >> >>>>> * This corresponds to the tag: apache-iceberg-1.10.0-rc5
>>>>>>>>>>> >> >>>>> *
>>>>>>>>>>> https://github.com/apache/iceberg/commits/apache-iceberg-1.10.0-rc5
>>>>>>>>>>> >> >>>>> *
>>>>>>>>>>> https://github.com/apache/iceberg/tree/2114bf631e49af532d66e2ce148ee49dd1dd1f1f
>>>>>>>>>>> >> >>>>>
>>>>>>>>>>> >> >>>>> The release tarball, signature, and checksums are here:
>>>>>>>>>>> >> >>>>> *
>>>>>>>>>>> https://dist.apache.org/repos/dist/dev/iceberg/apache-iceberg-1.10.0-rc5
>>>>>>>>>>> >> >>>>>
>>>>>>>>>>> >> >>>>> You can find the KEYS file here:
>>>>>>>>>>> >> >>>>> * https://downloads.apache.org/iceberg/KEYS
>>>>>>>>>>> >> >>>>>
>>>>>>>>>>> >> >>>>> Convenience binary artifacts are staged on Nexus. The
>>>>>>>>>>> Maven repository URL is:
>>>>>>>>>>> >> >>>>> *
>>>>>>>>>>> https://repository.apache.org/content/repositories/orgapacheiceberg-1269/
>>>>>>>>>>> >> >>>>>
>>>>>>>>>>> >> >>>>> Please download, verify, and test.
>>>>>>>>>>> >> >>>>>
>>>>>>>>>>> >> >>>>> Instructions for verifying a release can be found here:
>>>>>>>>>>> >> >>>>> *
>>>>>>>>>>> https://iceberg.apache.org/how-to-release/#how-to-verify-a-release
>>>>>>>>>>> >> >>>>>
>>>>>>>>>>> >> >>>>> Please vote in the next 72 hours.
>>>>>>>>>>> >> >>>>>
>>>>>>>>>>> >> >>>>> [ ] +1 Release this as Apache Iceberg 1.10.0
>>>>>>>>>>> >> >>>>> [ ] +0
>>>>>>>>>>> >> >>>>> [ ] -1 Do not release this because...
>>>>>>>>>>> >> >>>>>
>>>>>>>>>>> >> >>>>> Only PMC members have binding votes, but other
>>>>>>>>>>> community members are encouraged to cast
>>>>>>>>>>> >> >>>>> non-binding votes. This vote will pass if there are 3
>>>>>>>>>>> binding +1 votes and more binding
>>>>>>>>>>> >> >>>>> +1 votes than -1 votes.
>>>>>>>>>>>
>>>>>>>>>>
