On Fri, May 9, 2025 at 4:27 PM, Arnout Engelen <enge...@apache.org> wrote:
>
> On Fri, May 9, 2025 at 3:33 PM Gary Gregory <garydgreg...@gmail.com> wrote:
> > I think we need to build with the next EA Java version especially when the
> > next Java version is an LTS version like 25. This serves as an early
> > warning signal for work to do but also as playing our part in the FOSS at
> > large ecosystem where we should report failures to the 3rd party tooling we
> > use.
>
> I agree.
>
> > Some other builds are experimental for released versions of Java like 24
> > because they are known to fail and serve as either a reminder of work to do
> > or a warning to users.
>
> OK
>
> > All these GH CI builds show the world what we test and what to expect.
> >
> > If a user sees a failing experimental build we can offer it as a to do for
> > contributions which I've done in the past.
> >
> > In general, I think we should build with all LTS versions plus the next EA
> > version as experimental. Even if an EA build is green it should stay
> > experimental since the EA behavior could be a moving target.
> >
> > IOW, please don't remove experimental builds.
>
> Oh, I didn't mean to suggest 'simply' removing the experimental
> builds. But I think it might be worth considering changing or
> replacing them.
>
> One option would be to, instead of having a failing build in the
> matrix, have an open draft pull request that adds that (still failing)
> build.
>
> I think a PR does a better job of playing our part in the ecosystem:
> it allows us to add comments to the pull request highlighting the
> salient part of the failure, sharing analysis of the root cause,
> possible fixes, and links to other projects that may have the same
> problem (or even a solution).
>
> I think it also does a better job of showing the world what we test
> and what to expect: to the world, a failing job means "this project
> intended to support this, but it broke, and it seems poorly maintained
> because nobody fixed the breakage". An open PR conveys "we're still
> working on this" without us having to explain 'experimental builds'.
>
> I think this approach also does a better job of tracking regressions
> once an experimental build has been green: right now, I bet we
> wouldn't notice if a once-green experimental build started failing -
> experimental builds fail all the time anyway. I'd propose we merge the
> experimental build to the main branch as soon as it's green, and have
> it fail the build normally. That way, if it regresses, we notice and
> can either fix it directly or remove it from CI (and add another PR to
> track fixing it). That is a little more work, but valuable in
> detecting regressions (which we can then report upstream if they turn
> out to be unintentional).
>
> WDYT? Perhaps at least worth trying out on a pilot component?
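For concreteness, the setup being discussed usually looks something like the following GitHub Actions matrix. This is an illustrative sketch only; the job names, Java versions, and `experimental` flag are not copied from any specific Commons workflow:

```yaml
# Illustrative sketch of a CI matrix: LTS versions fail the build normally,
# while the EA entry is marked experimental, so continue-on-error keeps a
# red EA job from failing the whole workflow.
jobs:
  build:
    runs-on: ubuntu-latest
    continue-on-error: ${{ matrix.experimental }}
    strategy:
      matrix:
        java: [ 8, 11, 17, 21 ]
        experimental: [ false ]
        include:
          - java: 25-ea
            experimental: true
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-java@v4
        with:
          distribution: temurin
          java-version: ${{ matrix.java }}
      - run: mvn --errors --batch-mode verify
```

Arnout's proposal would amount to keeping the `include`/`continue-on-error` lines out of the main branch and carrying them in a draft PR instead, merging the new matrix entry (without `continue-on-error`) once it is green.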

I did not understand everything, but it sounds like a well-ironed-out plan.
;-)

For experimenting, do you need components that change often, or ones that
are fairly stable?  If the latter, you are welcome to use any (or all) of
the math-related ones:
 * RNG
 * Numbers
 * Geometry
 * Statistics
 * Math

Somewhat unrelated (or maybe not), I wonder whether the benchmarks
(designed by Alex) in these components could be run automatically,
with a slow-down counting as a regression.
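The comparison step of such a check could look like the sketch below. This is purely hypothetical: the class and method names are made up, and the real Commons benchmarks use JMH, whose measured scores would supply the `currentMillis` input here.

```java
// Hypothetical sketch: compare a freshly measured benchmark score against a
// stored baseline, and flag a slow-down beyond a tolerance as a regression.
// In practice the "current" value would come from JMH results, not be hard-coded.
public final class BenchmarkRegressionCheck {

    /**
     * @param baselineMillis previously recorded time per operation
     * @param currentMillis  newly measured time per operation
     * @param tolerance      allowed relative slow-down, e.g. 0.10 for 10%
     * @return true if the current run is slower than the baseline
     *         by more than the tolerance
     */
    public static boolean isRegression(double baselineMillis,
                                       double currentMillis,
                                       double tolerance) {
        return currentMillis > baselineMillis * (1.0 + tolerance);
    }

    public static void main(String[] args) {
        // 100 ms baseline with a 10% tolerance: 105 ms is fine, 120 ms is not.
        System.out.println(isRegression(100.0, 105.0, 0.10)); // false
        System.out.println(isRegression(100.0, 120.0, 0.10)); // true
    }
}
```

The tolerance is the interesting knob: benchmark timings are noisy, so a CI check would need it wide enough to avoid flaky failures but tight enough to catch real slow-downs.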

Regards,
Gilles

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscr...@commons.apache.org
For additional commands, e-mail: dev-h...@commons.apache.org
