Hi Richard,

Thank you very much for your feedback!
I will work on improvements to address your concerns.

Best regards,
Ee Peng

-----Original Message-----
From: richard.pur...@linuxfoundation.org [mailto:richard.pur...@linuxfoundation.org]
Sent: Tuesday, September 11, 2018 11:09 PM
To: Yeoh, Ee Peng <ee.peng.y...@intel.com>; openembedded-core@lists.openembedded.org
Subject: Re: [OE-core] [PATCH] test-result-log: testcase management tool to store test result

Hi Ee Peng,

I've been having a look at this code and whilst some of it is good, I also have
some concerns. With it coming in so late in the cycle, it's been hard to find
time to review it and allow time to get it right.
With something as important as this to the way future QA work is done, we do
need to ensure we use the right approach and that it's maintainable.

The patches are ok as a first attempt at this. My biggest concern is that it's
currently parsing log files which we control and generate within our own
codebase, and that parsing is likely to break. In particular, these lines from
the qalogparser worry me:

regex = ".*RESULTS - (?P<case_name>.*) - Testcase .*: 
(?P<status>PASSED|FAILED|SKIPPED|ERROR|UNKNOWN).*$"
regex = "core-image.*().*Ran.*tests in .*s"
regex = "DEBUG: launchcmd=runqemu*
qemu_list = ['qemuarm', 'qemuarm64', 'qemumips', 'qemumips64', 'qemuppc', 
'qemux86', 'qemux86-64']

Here we're hardcoding the list of qemus we support, we're only allowing
core-image-*, and we're relying upon the results format not changing. That
makes it hard for anyone to extend/reuse this or to use it with real hardware.

For example, the recent oe-selftest parallelisation code did change the output
of the tests slightly. I'm not sure whether this broke the parsing, but it is
an example of the kind of fragility this code has.

What would probably work better for us is if the oeqa framework wrote out a
json file directly containing the information we need; then this code would
just need to collect up the json files.
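
Roughly something along these lines; to be clear, the helper name, the
"testresults.json" file name and the field layout below are just illustrative
assumptions, not an existing oeqa API:

# Illustrative sketch only: names, file name and field layout are assumptions,
# not the actual oeqa framework API.
import json
import os

def write_test_results(result_dir, configuration, results):
    """Write per-testcase status straight to JSON so no log parsing is needed.

    configuration could describe the image/machine under test, and results is
    assumed to map test ids to a status string, e.g.
    {"oeqa.runtime.cases.ssh.SSHTest.test_ssh": "PASSED"}.
    """
    os.makedirs(result_dir, exist_ok=True)
    with open(os.path.join(result_dir, "testresults.json"), "w") as f:
        json.dump({"configuration": configuration, "results": results},
                  f, indent=4, sort_keys=True)

The collection side would then just walk the results directories and merge
those json files, rather than regexing qemu log output.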

I'm also a little concerned about the way unittest discovery is being done,
grepping for *.py files as far as I understand it. We should probably use the
list options of the various current test pieces instead? Also, this is
something we probably only ever need to do once, to seed the QA results store?
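
Even the standard unittest loader would give us the test ids without any file
grepping; a rough sketch (the start directory and pattern are just examples,
not the tool's real arguments):

# Rough sketch using only the standard unittest loader; start_dir and pattern
# here are example values, not arguments taken from the patch.
import unittest

def list_testcases(start_dir, pattern="test*.py"):
    """Return fully qualified test ids found by unittest discovery."""
    suite = unittest.TestLoader().discover(start_dir, pattern=pattern)

    def walk(s):
        for item in s:
            if isinstance(item, unittest.TestSuite):
                yield from walk(item)
            else:
                yield item.id()  # e.g. "module.Class.test_name"

    return list(walk(suite))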

Finally, much of the code is using "internal" methods prefixed with "_". I can
understand why, but it seems the code doesn't have a good, well-structured
public API as a result.
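
What I mean is keeping the underscore helpers but putting a small, documented
public surface in front of them; class and method names below are placeholders
for illustration, not the actual patch code:

# Placeholder names; only illustrating the shape of a public API wrapping
# underscore-prefixed internals.
class TestResultStore:
    """Public entry points stay stable; the _helpers remain free to change."""

    VALID_STATUSES = ("PASSED", "FAILED", "SKIPPED", "ERROR", "UNKNOWN")

    def add_result(self, case_name, status):
        """Record a single testcase result (the public, stable call)."""
        self._validate(status)
        self._store(case_name, status)

    def _validate(self, status):
        if status not in self.VALID_STATUSES:
            raise ValueError("unexpected status: %s" % status)

    def _store(self, case_name, status):
        # storage backend details stay internal
        pass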

As such, there may be a little too much work needed on this to get it in for
2.6 :(

Cheers,

Richard