From: Alexis Lothoré <alexis.loth...@bootlin.com>

When a regression report is computed during a CI build, many errors often appear regarding missing test status:
ERROR: Failed to retrieved base test case status: ptestresult.sections
ERROR: Failed to retrieved base test case status: ptestresult.sections
ERROR: Failed to retrieved base test case status: reproducible
ERROR: Failed to retrieved base test case status: reproducible.rawlogs
[...]

Those errors are caused by entries in the test results which are not actual
test results (i.e. entries with a relevant "status" field containing a value
such as "PASSED", "FAILED", "SKIPPED", etc) but additional data, which depends
on the log parser associated with the test, or by tests which store their
results in a different way. For example, the ptestresult.sections entry is
generated by the ptest log parser and can contain additional info about the
ptest run such as "begin", "end", "duration", "exitcode" or "timeout". Another
example is the "reproducible" section, which does not have a "status" field
but rather contains a big "files" entry holding lists of identical, missing,
or different files between two builds.

Remove those errors by adding a list of known entries which do not hold test
results in the form expected by resulttool, and by ignoring those keys when
they are encountered during test results comparison. I could also have
completely removed the warning about missing test case status, but that would
silently hide any real future issue with relevant test results.

Signed-off-by: Alexis Lothoré <alexis.loth...@bootlin.com>
---
 scripts/lib/resulttool/regression.py | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/scripts/lib/resulttool/regression.py b/scripts/lib/resulttool/regression.py
index 3d64b8f4af7c..e15a268c0206 100644
--- a/scripts/lib/resulttool/regression.py
+++ b/scripts/lib/resulttool/regression.py
@@ -78,6 +78,16 @@ STATUS_STRINGS = {
     "None": "No matching test result"
 }
 
+TEST_KEY_WHITELIST = [
+    "ltpposixresult.rawlogs",
+    "ltpposixresult.sections",
+    "ltpresult.rawlogs",
+    "ltpresult.sections",
+    "ptestresult.sections",
+    "reproducible",
+    "reproducible.rawlogs"
+]
+
 def test_has_at_least_one_matching_tag(test, tag_list):
     return "oetags" in test and any(oetag in tag_list for oetag in test["oetags"])
 
@@ -189,6 +199,10 @@ def compare_result(logger, base_name, target_name, base_result, target_result):
 
     if base_result and target_result:
         for k in base_result:
+            # Some entries present in test results are known not to be test
+            # results but metadata about tests
+            if k in TEST_KEY_WHITELIST:
+                continue
             base_testcase = base_result[k]
             base_status = base_testcase.get('status')
             if base_status:
-- 
2.42.0