I would second the notion that manually running tests during the release
process that are already covered by CI is of (very) limited value.

While we do the same thing (compile and run some tests) as part of the Rust
release, this has never caught any serious defect I am aware of, and we only
run a subset of tests (e.g. not the tests for integration with other Arrow
versions).

I think reducing the release burden would benefit everyone.

Andrew

On Fri, Jan 19, 2024 at 1:08 PM Antoine Pitrou <anto...@python.org> wrote:

>
> Well, if the main objective is to just follow the ASF Release
> guidelines, then our verification process can be simplified drastically.
>
> The ASF indeed just requires:
> """
> Every ASF release MUST contain one or more source packages, which MUST
> be sufficient for a user to build and test the release provided they
> have access to the appropriate platform and tools. A source release
> SHOULD not contain compiled code.
> """
>
> So, basically, if the source tarball is enough to compile Arrow on a
> single platform with a single set of tools, then we're ok. :-)
>
> The rest is just an additional burden that we voluntarily inflict on
> ourselves. *Ideally*, our continuous integration should be able to
> detect any build-time or runtime issue on supported platforms. There
> have been rare cases where some issues could be detected at release time
> thanks to the release verification script, but these are a tiny portion
> of all issues routinely detected in the form of CI failures. So, there
> doesn't seem to be a reason to believe that manual release verification
> is bringing significant benefits compared to regular CI.
>
> Regards
>
> Antoine.
>
>
> Le 19/01/2024 à 11:42, Raúl Cumplido a écrit :
> > Hi,
> >
> > One of the challenges we have when doing a release is verification and
> voting.
> >
> > Currently the Arrow verification process is quite long, tedious and
> > error-prone.
> >
> > I would like to use this email to get feedback and user requests in
> > order to improve the process.
> >
> > Several things already on my mind:
> >
> > One thing that is quite annoying is that any flaky failure makes us
> > restart the process, possibly requiring us to download everything
> > again. It would be great to have some kind of retry mechanism that
> > lets us resume from where the process failed instead of redoing the
> > previous successful jobs.
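
A resume mechanism like the one requested above could be sketched roughly as
follows. This is a minimal, hypothetical sketch: the `run_step` helper and the
`.verify-state` marker directory are illustrative names, not part of the
actual Arrow verification scripts.

```shell
#!/bin/sh
# Sketch of a resumable verification flow: each completed step drops a
# marker file, so a rerun after a flaky failure skips the steps that
# already succeeded instead of redoing them.
set -e

STATE_DIR="${STATE_DIR:-.verify-state}"
mkdir -p "$STATE_DIR"

run_step() {
  name="$1"; shift
  if [ -f "$STATE_DIR/$name.ok" ]; then
    echo "Skipping $name (already done)"
    return 0
  fi
  "$@"
  touch "$STATE_DIR/$name.ok"
}

# Placeholder commands standing in for the real download/build/test steps.
run_step download echo "downloading sources"
run_step checksum echo "verifying checksums"
run_step build    echo "building"
```

If any step fails, rerunning the script repeats only the failed step and
everything after it; deleting the marker directory forces a full rerun.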
> >
> > We do have a bunch of flags to run specific parts, but that requires
> > knowledge and time to go over the different flags, etcetera, so the
> > UX could be improved.
> >
> > Based on the ASF release policy [1], in order to cast a +1 vote we
> > have to validate the source code packages, but we are not required
> > to validate binaries locally. Several binaries are currently tested
> > using Docker images and are already tested and validated on CI. Our
> > release verification documentation still instructs readers to perform
> > binary validation. I plan to update the documentation and move it to
> > the official docs instead of the wiki [2].
> >
> > I would appreciate input on the topic so we can improve the current
> process.
> >
> > Thanks everyone,
> > Raúl
> >
> > [1] https://www.apache.org/legal/release-policy.html#release-approval
> > [2]
> https://cwiki.apache.org/confluence/display/ARROW/How+to+Verify+Release+Candidates
>
