Hi,
On Mon, May 30, 2022 at 10:03 AM Stephan Bergmann <[email protected]> wrote:

> On 25/05/2022 14:38, Maarten Hoes wrote:
> > gb_GCOV=YES verbose=t make UITest_solver
> > gb_GCOV=YES verbose=t make CppunitTest_sccomp_solver
> > gb_GCOV=YES verbose=t make CppunitTest_sccomp_swarmsolvertest
>
> I /think/ to remember that at least some of those solver tests are time
> based, so that they may occasionally fail for slow builds. (I've seen
> such *Test_*solver* fail on and off for
> <https://ci.libreoffice.org/job/lo_ubsan/> and/or my local ASan+UBSan
> builds, tried to look into it a long time ago, got confirmation from I
> can't remember who that those tests can indeed timeout unsuccessfully
> without necessarily indicating a failure, and thus started to ignore
> them ever since.)

That would be a shame. I got the impression that, for me specifically, these three tests failed reliably/reproducibly when built with gcov, and succeeded without gcov, but I'll do some more testing to verify that.

Anyway, if more tests sometimes fail without there being a 'real' failure, then I am not sure how to deal with that when generating an lcov report for a full build. My initial idea was to not generate a report if 'make check' failed, but hearing this now makes me wonder whether that is a good approach. Perhaps it would be preferable to just run 'make -k check' and always generate a report, regardless of test failures?

Of course, you could also choose to skip such tests, but that would lead to unrepresentative results; and someone would have to keep the skip list up to date manually, which people will forget to do.

- Maarten
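P.S. For concreteness, the "run with -k, always generate the report" flow I have in mind would look roughly like the sketch below. The lcov/genhtml invocations and the workdir path are illustrative stand-ins, not the exact commands I'd ship; the DRY_RUN guard just makes the sketch runnable outside a LibreOffice tree.

```shell
#!/bin/sh
# Sketch: run the test suite with -k so one failing test does not
# abort the rest, generate the coverage report unconditionally, and
# still record whether any test failed.
# DRY_RUN=1 (the default here) only echoes the commands, so this
# sketch runs anywhere; set DRY_RUN=0 in a real instrumented build.
DRY_RUN=${DRY_RUN:-1}
run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "+ $*"     # show what would be executed
    else
        "$@"
    fi
}

run make -k check
status=$?               # remember pass/fail, but keep going

# Generate the report regardless of test failures
# (flags and paths are assumptions, adjust for the build tree).
run lcov --capture --directory workdir --output-file libreoffice.info
run genhtml libreoffice.info --output-directory coverage-report

echo "tests exited with status $status"
```

Whether that final status should then gate publishing the report, or merely be noted alongside it, is exactly the policy question above.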
