On Mon, 2020-06-15 at 18:41 +0200, Alexander Kanavin wrote:
> On Mon, 15 Jun 2020 at 16:02, Richard Purdie <
> richard.pur...@linuxfoundation.org> wrote:
> > I can see the use case, I'm a bit torn on whether we should fail in
> > these cases, or whether we should encourage people to check that
> > the tests they expected to run really did.
> > 
> > 
> > With the complexity on the autobuilder we've had to rely on the
> > latter, checking that all tests which ran previously still run.
> 
> Do we have some kind of tooling to check that the tests expected to
> run actually did run, and were not skipped?
> This patch came from our internal situation where, due to Debian
> package renaming, @OEHasPackage started skipping tests that it
> should not have. It wasn't immediately noticed - test logs are
> hidden inside the build logs, and the build logs are not usually
> looked at if the overall build does not fail.
> 
> People were very baffled by that, and it took 'yocto experts'
> (Konrad :) to sort out the issue.
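
For context: @OEHasPackage is the OEQA runtime decorator that skips a
test unless at least one of the listed packages is present in the
image manifest, which is why a package rename can silently turn a
test into a skip. A minimal sketch of a runtime test using it (class,
test and package names are illustrative):

    from oeqa.runtime.case import OERuntimeTestCase
    from oeqa.runtime.decorator.package import OEHasPackage

    class SshdTest(OERuntimeTestCase):
        # Skipped, not failed, if neither package appears in the
        # image manifest - so a rename of either package makes the
        # test disappear without any failure being reported.
        @OEHasPackage(['dropbear', 'openssh-sshd'])
        def test_sshd_present(self):
            status, output = self.target.run('which sshd')
            self.assertEqual(status, 0, msg=output)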

We have "resulttool regression" which is meant to cover this. We're
struggling a little to get the automated comparisons right (it's hard
to know exactly what to compare against) but if you have a known
correct set of tests it should work.
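
For reference, assuming a stored testresults.json from a known-good
build and one from the build under test, the comparison is roughly:

    resulttool regression base_results.json target_results.json

(file names illustrative; resulttool lives in scripts/ in oe-core,
and its regression-dir/regression-git variants compare whole result
directories or a results git repository instead of single files).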

Cheers,

Richard
