On Monday, June 22, 2020 at 6:18:11 AM UTC-7, Michael Orlitzky wrote:
>
> Doing so won't consume any extra time 
> on any individual machine, and the multitudes of reviewers and patchbots 
> running the test on different examples will ferret out any corner cases. 
>
One of the concerns is that you won't get very good error reports this 
way: without good reporting of the conditions that triggered a failure, 
you might get reports of "sometimes this test fails", and in many cases 
people will probably not bother reporting at all because the problem went 
away on the next run. In my experience, "corner cases" in mathematical 
algorithms are often NOT found by random input, because the probability 
measure of the inputs that trigger them is vanishingly small.

A cheap compromise might be to make the "starting seed" for the tests 
configurable. The default would just be the seed we have now, but people 
who want to set it to another value can go ahead. It would help if the 
seed actually used were included in the test report, so that failures can 
be reproduced. Anyone who wants to can then script the testing with 
varying seeds. That offers an inexpensive way to get a bit more coverage 
(at some point) if you think your code is particularly sensitive to the 
output of the random generator.
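For concreteness, here is a minimal Python sketch of the idea (the 
environment variable SAGE_TEST_RANDOM_SEED, the default value, and the 
helper names are just assumptions for illustration, not Sage's actual 
test-runner interface): read an optional starting seed, fall back to the 
current fixed default, seed the generator, and print the seed used as 
part of the report so any failure can be reproduced.

    import os
    import random

    DEFAULT_SEED = 0  # stand-in for "the seed we have now"

    def resolve_seed():
        """Return the configured starting seed, or the fixed default."""
        value = os.environ.get("SAGE_TEST_RANDOM_SEED")
        return int(value) if value is not None else DEFAULT_SEED

    def run_tests():
        seed = resolve_seed()
        random.seed(seed)  # all randomized tests draw from this state
        # ... run the doctests here ...
        print("random seed used for this run: %d" % seed)  # part of the report

    if __name__ == "__main__":
        run_tests()

Scripting variable seeds then just means looping over values of that 
variable in a shell loop; because the seed is reported, any failing run 
can be repeated exactly.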
