Hi all,

Martin Thomas's offer to contribute testing feedback inspired me.
I think it would be an incredible asset for the OpenOCD maintainers
to be able to see a current support matrix, since no single person
can reasonably be expected to test all platforms, interfaces, and
targets.

So, it would be nice if we could provide tools that allow (voluntary)
reporting of test results for different platform/interface/target
combinations with any given version of OpenOCD.  Here are the tools:

0) provide a standard benchmarking script to generate test results
(a sketch of one possible script and report format follows this list).
1) allow users to report their test results and add their feedback.
2) collect the information from those reports for each release/version.
3) process the sets of test results for display and analysis.
4) display the statistical results to give us a complete picture.
5) use machine learning to predict problems with incoming patches, so
testers know when and what to re-test.
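To make item 0 concrete, here is a minimal sketch of what such a
script and its report record might look like.  Everything in it is an
assumption rather than an agreed format: the script name, the JSON
field names, the output filename, and the "startup" benchmark that
simply times one OpenOCD invocation.  Only the "openocd --command"
option and the "version"/"shutdown" commands are real.

#!/usr/bin/env python
# benchmark_report.py -- illustrative sketch only; the field names,
# filenames, and the benchmark itself are assumptions, not a proposal
# of record.
import json
import platform
import subprocess
import time

def run_benchmark(cmd):
    """Run one OpenOCD invocation; return (elapsed seconds, exit code)."""
    start = time.time()
    proc = subprocess.run(cmd, capture_output=True, text=True)
    return time.time() - start, proc.returncode

def main():
    # Hypothetical "startup" benchmark: run OpenOCD in batch mode and
    # time how long it takes to start up and shut down cleanly.
    elapsed, rc = run_benchmark(
        ["openocd", "--command", "version", "--command", "shutdown"])
    report = {
        "openocd_version": "unknown",        # would be parsed from output
        "platform": platform.platform(),     # host OS and architecture
        "interface": "jlink",                # placeholder; user-supplied
        "target": "stm32f1x",                # placeholder; user-supplied
        "test": "startup",                   # which benchmark was run
        "seconds": round(elapsed, 3),
        "passed": rc == 0,
    }
    with open("openocd-test-report.json", "w") as f:
        json.dump(report, f, indent=2)

if __name__ == "__main__":
    main()

A user would run the script after building a given revision and then
mail or upload the resulting JSON file, which covers items 1) and 2)
without requiring any server-side machinery up front.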

It will be difficult to show that we are making steady forward
progress without a system for producing and processing such data; a
sketch of the processing side follows.
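The processing side (items 3 and 4) could start equally small.  This
sketch assumes that reports in the JSON shape above have been
collected into a directory, and that a pass-rate table keyed on
(platform, interface, target) is a useful first display; both of
those are my assumptions, not requirements.

#!/usr/bin/env python
# aggregate_reports.py -- illustrative sketch; assumes the JSON report
# shape from benchmark_report.py above.
import json
import sys
from collections import defaultdict
from pathlib import Path

def load_reports(directory):
    """Yield one parsed report per *.json file in the given directory."""
    for path in Path(directory).glob("*.json"):
        with open(path) as f:
            yield json.load(f)

def main(directory):
    # Tally [passed, total] per (platform, interface, target) combination.
    tally = defaultdict(lambda: [0, 0])
    for r in load_reports(directory):
        key = (r["platform"], r["interface"], r["target"])
        tally[key][1] += 1
        if r["passed"]:
            tally[key][0] += 1

    # Print a simple support matrix, one row per combination.
    print("%-30s %-12s %-12s %s"
          % ("platform", "interface", "target", "pass rate"))
    for (plat, iface, target), (passed, total) in sorted(tally.items()):
        print("%-30s %-12s %-12s %d/%d"
              % (plat, iface, target, passed, total))

if __name__ == "__main__":
    main(sys.argv[1] if len(sys.argv) > 1 else "reports")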

Are there existing open source tools that can be used for this purpose?
If not, does anyone out there want to tackle these modest challenges?
I would have plenty of feedback for anyone who feels they need a more
detailed implementation plan to follow, but any working solution
would be nice to have in hand.

Cheers,

Zach
