Hi Richard, 

Agreed with your inputs! 
I have created a new QA process wiki page to compile all the inputs:
https://wiki.yoctoproject.org/wiki/New_QA_process

> Do we need to add something to the results to indicate which results are from 
> "who"? (i.e. from the public autobuilder, Intel QA, WR QA, any other 
> sources?). We may want to add something such as a parameter to the 
> resulttool store command so we can add this info to results?

I have enhanced resulttool following your suggestion. The patches have been 
submitted to the openembedded-core mailing list.
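
For illustration (paths below are placeholders), once the patches are merged 
the extra information can be recorded at store or merge time roughly like this:

  resulttool store <results-directory> <git-repository> -x "Intel QA"
  resulttool merge <base-results> <target-results> -t -x "Intel QA"

The -x/--executed-by and -t/--not-add-testseries options are the ones added by 
the patches; the exact invocation may change based on review.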

Best regards,
Yeoh Ee Peng 

-----Original Message-----
From: richard.pur...@linuxfoundation.org 
[mailto:richard.pur...@linuxfoundation.org] 
Sent: Wednesday, April 3, 2019 4:18 AM
To: Yeoh, Ee Peng <ee.peng.y...@intel.com>; 'yocto@yoctoproject.org' 
<yocto@yoctoproject.org>
Subject: Re: Proposed new QA process

This is a good start. I've filled out some details below and had some questions.

On Tue, 2019-04-02 at 03:28 +0000, Yeoh, Ee Peng wrote:
> Given the new QA tooling (resulttool) available to manage QA test 
> results and reporting, here is the proposed new QA process.
>  
> The new QA process consists of the steps below:
> ·       Test Trigger
> ·       Test Planning
> ·       Test Execution
> ·       Test Result Store
> ·       Test Monitoring & Reporting
> ·       Release Decision
>  
> Test Trigger: Each QA team will subscribe to the QA notification email 
> (request through Richard Purdie).

The list of notifications is maintained in config.json in the 
yocto-autobuilder-helper repository, which has a branch per release.
 
> Test Planning: The lead QA team no longer needs to set up Testopia and 
> a wiki page.  Each QA team (e.g. Intel, Wind River, etc.) will plan 
> which extra tests they intend to run and when they'll send the results 
> back, then send this planning information as an acknowledgement email 
> to the QA stakeholders (e.g. Richard Purdie, Stephen Jolley) and the 
> lead QA team.  Each QA team can refer to the OEQA automated and manual 
> test cases for their planning.

What form will these notifications take? Is there a time limit for when they'll 
be received after a QA notification email? Can we agree to include an estimate 
of execution time in this notification?

> Test Execution: Each QA team will execute the planned extra tests. To 
> make sure the test results from the test execution can be fully 
> integrated with the new QA tooling (resulttool for test result 
> management and reporting/regression), execute OEQA automated tests and 
> OEQA manual tests through resulttool (refer to 
> https://wiki.yoctoproject.org/wiki/Resulttool#manualexecution).
>  
> Test Result Store: Each QA team will store its test results in the 
> remote yocto-testresults git repository using resulttool (refer to 
> https://wiki.yoctoproject.org/wiki/Resulttool#store), then send the QA 
> completion email (including new defect information) to both the QA 
> stakeholders and the lead QA team.  Each QA team will request write 
> access to the remote yocto-testresults git repository (request through 
> Richard Purdie).

Ultimately, yes, but I want to have things working before we have multiple 
people pushing things there. We also need to document the commands used here to 
add the results.
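
As a rough sketch (paths are placeholders, to be confirmed when the commands 
are documented properly), adding results would look something like:

  resulttool store <results-directory> <local-yocto-testresults-checkout>

followed by pushing the resulting commit and tags from that checkout to the 
remote yocto-testresults repository.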

Do we need to add something to the results to indicate which results are from 
"who"? (i.e. from the public autobuilder, Intel QA, WR QA, any other sources?). 
We may want to add something such as a parameter to the resulttool store 
command so we can add this info to results?
 
> Test Monitoring & Reporting: The QA stakeholders will monitor testing 
> progress in the remote yocto-testresults git repository using resulttool 
> (refer to https://wiki.yoctoproject.org/wiki/Resulttool#report).  Once 
> every QA team has completed its test execution, the lead QA team will 
> create the QA test report and regression report using resulttool, then 
> send the email report to the QA stakeholders and the public yocto 
> mailing list.

We should document the command to do this. I'm also wondering where the list of 
stakeholders would be maintained?
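
As a starting point (arguments are illustrative; see the wiki page above for 
the exact usage), the reporting side would be along the lines of:

  resulttool report <path-to-test-results>
  resulttool regression <base-results-file> <target-results-file>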

Is Intel volunteering to help with this role for the time being, or does 
someone else need to start doing this?

A key thing we need to document here is that someone, somewhere in this process 
needs to:

a) Open a bug for each unique QA test failure
b) List the bugs found in the QA report
c) Notice any ptest timeouts and file bugs for those too

> Release Decision: The QA stakeholders will make the final decision for 
> the release.

The release decision will ultimately be made by the Yocto Project TSC who will 
review the responses to the QA report (including suggestions from QA) and make 
a go/nogo decision on that information (exact process to be agreed by the TSC).

Cheers,

Richard


--- Begin Message ---
v01:
Enable merging results where both base and target are directories.

v02:
Follow suggestion from RP to reuse the existing resultutils code base, which 
merges results into a flat file.
Refactor the resultutils code base to enable merging results where both base 
and target are directories.
Add control over the creation of the testseries configuration.

v03:
Follow suggestion from RP to break the patches down to ease review.
Enable the resultutils library to control the adding of the "TESTSERIES"
configuration as well as to allow adding extra configurations when needed.
Enable store to add an "EXECUTED_BY" configuration to track who executed
each results file.
Enable merge to control the adding of the "TESTSERIES" configuration
as well as to allow adding the "EXECUTED_BY" configuration when needed.
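
As an illustration (values are placeholders), the configuration section of a 
stored results file would then carry entries along these lines:

  "configuration": {
      "TEST_TYPE": "runtime",
      "MACHINE": "qemux86-64",
      "TESTSERIES": "<series-name>",
      "EXECUTED_BY": "<who-executed-the-tests>"
  }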

Yeoh Ee Peng (3):
  resulttool/resultutils: Enable add extra configurations to results
  resulttool/store: Enable add EXECUTED_BY config to results
  resulttool/merge: Enable control TESTSERIES and extra configurations

 scripts/lib/resulttool/merge.py       | 20 ++++++++++++++------
 scripts/lib/resulttool/resultutils.py | 19 ++++++++++++-------
 scripts/lib/resulttool/store.py       |  7 ++++++-
 3 files changed, 32 insertions(+), 14 deletions(-)

--
2.7.4


--- End Message ---
--- Begin Message ---
Currently, stored results do not have the information needed to trace who
executed the tests. Enable store to add an EXECUTED_BY configuration
to each results file in order to track who executed the tests.

Signed-off-by: Yeoh Ee Peng <ee.peng.y...@intel.com>
---
 scripts/lib/resulttool/store.py | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/scripts/lib/resulttool/store.py b/scripts/lib/resulttool/store.py
index 5e33716..7e89692 100644
--- a/scripts/lib/resulttool/store.py
+++ b/scripts/lib/resulttool/store.py
@@ -27,13 +27,16 @@ import oeqa.utils.gitarchive as gitarchive
 def store(args, logger):
     tempdir = tempfile.mkdtemp(prefix='testresults.')
     try:
+        configs = resultutils.extra_configs.copy()
+        if args.executed_by:
+            configs['EXECUTED_BY'] = args.executed_by
         results = {}
         logger.info('Reading files from %s' % args.source)
         for root, dirs,  files in os.walk(args.source):
             for name in files:
                 f = os.path.join(root, name)
                 if name == "testresults.json":
-                    resultutils.append_resultsdata(results, f)
+                    resultutils.append_resultsdata(results, f, configs=configs)
                 elif args.all:
                     dst = f.replace(args.source, tempdir + "/")
                     os.makedirs(os.path.dirname(dst), exist_ok=True)
@@ -96,4 +99,6 @@ def register_commands(subparsers):
                               help='include all files, not just testresults.json files')
     parser_build.add_argument('-e', '--allow-empty', action='store_true',
                               help='don\'t error if no results to store are found')
+    parser_build.add_argument('-x', '--executed-by', default='',
+                              help='add executed-by configuration to each result file')
 
-- 
2.7.4


--- End Message ---
--- Begin Message ---
Currently, the QA teams need to merge test result files from multiple sources.
Adding the TESTSERIES configuration too early has negative implications
for reporting and regression. Enable control over adding TESTSERIES
when needed. Also enable adding the EXECUTED_BY configuration when
needed.

Signed-off-by: Yeoh Ee Peng <ee.peng.y...@intel.com>
---
 scripts/lib/resulttool/merge.py | 20 ++++++++++++++------
 1 file changed, 14 insertions(+), 6 deletions(-)

diff --git a/scripts/lib/resulttool/merge.py b/scripts/lib/resulttool/merge.py
index 3e4b7a3..d40a72d 100644
--- a/scripts/lib/resulttool/merge.py
+++ b/scripts/lib/resulttool/merge.py
@@ -17,16 +17,21 @@ import json
 import resulttool.resultutils as resultutils
 
 def merge(args, logger):
+    configs = {}
+    if not args.not_add_testseries:
+        configs = resultutils.extra_configs.copy()
+    if args.executed_by:
+        configs['EXECUTED_BY'] = args.executed_by
     if os.path.isdir(args.target_results):
-        results = resultutils.load_resultsdata(args.target_results, configmap=resultutils.store_map)
-        resultutils.append_resultsdata(results, args.base_results, configmap=resultutils.store_map)
+        results = resultutils.load_resultsdata(args.target_results, configmap=resultutils.store_map, configs=configs)
+        resultutils.append_resultsdata(results, args.base_results, configmap=resultutils.store_map, configs=configs)
         resultutils.save_resultsdata(results, args.target_results)
     else:
-        results = resultutils.load_resultsdata(args.base_results, configmap=resultutils.flatten_map)
+        results = resultutils.load_resultsdata(args.base_results, configmap=resultutils.flatten_map, configs=configs)
         if os.path.exists(args.target_results):
-            resultutils.append_resultsdata(results, args.target_results, configmap=resultutils.flatten_map)
+            resultutils.append_resultsdata(results, args.target_results, configmap=resultutils.flatten_map, configs=configs)
         resultutils.save_resultsdata(results, os.path.dirname(args.target_results), fn=os.path.basename(args.target_results))
-
+    logger.info('Merged results to %s' % os.path.dirname(args.target_results))
     return 0
 
 def register_commands(subparsers):
@@ -39,4 +44,7 @@ def register_commands(subparsers):
                               help='the results file/directory to import')
     parser_build.add_argument('target_results',
                               help='the target file or directory to merge the base_results with')
-
+    parser_build.add_argument('-t', '--not-add-testseries', action='store_true',
+                              help='do not add testseries configuration to results')
+    parser_build.add_argument('-x', '--executed-by', default='',
+                              help='add executed-by configuration to each result file')
-- 
2.7.4


--- End Message ---
--- Begin Message ---
Currently, the resultutils library always adds the "TESTSERIES" configuration
to results. Enhance it to allow control over adding the "TESTSERIES"
configuration, as well as to allow adding extra configurations
when needed.

Signed-off-by: Yeoh Ee Peng <ee.peng.y...@intel.com>
---
 scripts/lib/resulttool/resultutils.py | 19 ++++++++++++-------
 1 file changed, 12 insertions(+), 7 deletions(-)

diff --git a/scripts/lib/resulttool/resultutils.py b/scripts/lib/resulttool/resultutils.py
index 153f2b8..bfbd381 100644
--- a/scripts/lib/resulttool/resultutils.py
+++ b/scripts/lib/resulttool/resultutils.py
@@ -39,10 +39,12 @@ store_map = {
     "manual": ['TEST_TYPE', 'TEST_MODULE', 'MACHINE', 'IMAGE_BASENAME']
 }
 
+extra_configs = {'TESTSERIES': ''}
+
 #
 # Load the json file and append the results data into the provided results dict
 #
-def append_resultsdata(results, f, configmap=store_map):
+def append_resultsdata(results, f, configmap=store_map, configs=extra_configs):
     if type(f) is str:
         with open(f, "r") as filedata:
             data = json.load(filedata)
@@ -51,12 +53,15 @@ def append_resultsdata(results, f, configmap=store_map):
     for res in data:
         if "configuration" not in data[res] or "result" not in data[res]:
             raise ValueError("Test results data without configuration or 
result section?")
-        if "TESTSERIES" not in data[res]["configuration"]:
-            data[res]["configuration"]["TESTSERIES"] = 
os.path.basename(os.path.dirname(f))
+        for config in configs:
+            if config == "TESTSERIES" and "TESTSERIES" not in 
data[res]["configuration"]:
+                data[res]["configuration"]["TESTSERIES"] = 
os.path.basename(os.path.dirname(f))
+                continue
+            if config not in data[res]["configuration"]:
+                data[res]["configuration"][config] = configs[config]
         testtype = data[res]["configuration"].get("TEST_TYPE")
         if testtype not in configmap:
             raise ValueError("Unknown test type %s" % testtype)
-        configvars = configmap[testtype]
         testpath = "/".join(data[res]["configuration"].get(i) for i in 
configmap[testtype])
         if testpath not in results:
             results[testpath] = {}
@@ -72,16 +77,16 @@ def append_resultsdata(results, f, configmap=store_map):
 # Walk a directory and find/load results data
 # or load directly from a file
 #
-def load_resultsdata(source, configmap=store_map):
+def load_resultsdata(source, configmap=store_map, configs=extra_configs):
     results = {}
     if os.path.isfile(source):
-        append_resultsdata(results, source, configmap)
+        append_resultsdata(results, source, configmap, configs)
         return results
     for root, dirs, files in os.walk(source):
         for name in files:
             f = os.path.join(root, name)
             if name == "testresults.json":
-                append_resultsdata(results, f, configmap)
+                append_resultsdata(results, f, configmap, configs)
     return results
 
 def filter_resultsdata(results, resultid):
-- 
2.7.4


--- End Message ---
-- 
_______________________________________________
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto
