I would like to report on a personal experiment.  Nudged towards looking 
at the codecov warnings by tscrim, I learned that several of them actually 
uncovered truly (non-obviously) dead code in 
https://github.com/sagemath/sage/pull/38446.  Making sure that the code 
could indeed not be reached, by creating corresponding tests, was a bit of 
work, but I think it pays off in the long run.
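
For readers unfamiliar with this kind of coverage-driven work, here is a 
generic sketch of a doctest that exercises a branch flagged by codecov; the 
module path, function name, and error message are invented for illustration 
and have nothing to do with the PR above:

    def normalized_degree(poly):
        r"""
        Return the degree of ``poly``, refusing the zero polynomial.

        TESTS:

        Exercise the error branch that coverage reported as unreached::

            sage: from sage.foo.bar import normalized_degree  # hypothetical module
            sage: R.<x> = QQ[]
            sage: normalized_degree(R.zero())
            Traceback (most recent call last):
            ...
            ValueError: the zero polynomial has no degree
        """
        if poly.is_zero():
            raise ValueError("the zero polynomial has no degree")
        return poly.degree()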

(I completely support the current decision, though.)

Best wishes,

Martin
On Wednesday 16 October 2024 at 17:06:56 UTC+2 dim...@gmail.com wrote:

> On Sat, Oct 5, 2024 at 6:06 PM John H Palmieri <jhpalm...@gmail.com> 
> wrote:
> >
> >
> >
> > On Saturday, October 5, 2024 at 6:38:20 AM UTC-7 Kwankyu Lee wrote:
> >
> > .... But I don't know how big a problem the codecov issue is ...
> >
> >
> > We want to regard the check failure as indicating that there is a 
> problem with the PR that the author should resolve.
> >
> > Currently the codecov failure triggers the check failure, but no 
> reviewer and no author regards the codecov failure as a problem with the 
> PR (this is the practice that you are used to).
> >
> > The check failure by the codecov failure is just annoying.
> >
> >
> > Still, "untested is broken", right?
> >
> >
> > This is still a good maxim. But our practice is "broken is then tested". 
> I think our practice is not bad. Testing every code path would bloat our 
> set of doctests.
> >
> >
> > There are ways to "bloat" the set of doctests with minimal impact. For 
> example, we could create a file "TESTS.py" (say) in a Sage module, 
> consisting only of doctests. It would not be included in the reference 
> manual, nor visible when someone does "X.my_favorite_method?" or 
> "X.my_favorite_method??", and since it's a separate file, many developers 
> wouldn't interact with it at all. There may already be some files like that 
> in the Sage library.
> >
> > I don't know if this approach is worth it, but it does provide a way to 
> add more doctests with minimal impact on most users and developers.
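
As a rough sketch, such a doctest-only file could consist of nothing but a 
module docstring (the module path, method name, and error message below are 
invented for illustration, not taken from the Sage library):

    r"""
    Additional doctests for the (hypothetical) module sage.foo.bar

    This file would not appear in the reference manual; it only collects
    doctests exercising code paths not covered by the docstrings themselves.

    TESTS::

        sage: from sage.foo.bar import my_favorite_method  # hypothetical
        sage: my_favorite_method(-1)
        Traceback (most recent call last):
        ...
        ValueError: argument must be nonnegative
    """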
>
> These can be doctests, or other kinds of tests - all driven by pytest, as 
> done in e.g. 
> src/sage/manifolds/differentiable/symplectic_form_test.py
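
A minimal sketch of a pytest-style test file in that spirit (not the actual 
contents of symplectic_form_test.py, just an illustration using standard 
Sage objects):

    import pytest

    from sage.all import QQ, PolynomialRing


    class TestPolynomialDegree:

        @pytest.fixture
        def ring(self):
            # univariate polynomial ring over the rationals
            return PolynomialRing(QQ, "x")

        def test_zero_polynomial_has_degree_minus_one(self, ring):
            assert ring.zero().degree() == -1

        def test_generator_has_degree_one(self, ring):
            assert ring.gen().degree() == 1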
>
>
> But yes, I'm for (3), i.e. for stopping the codecov from triggering a 
> failure.
>
> Dima
>
> >
> > John
> >
> >
> >
>
