Hi,

On Wed, Jun 03, 2015 at 02:33:23PM +0100, Michael Meeks wrote:
>       Constructive thoughts appreciated in reply here.

This has some synergies with the dashboard proposal[1] but it is sufficiently
distinct to be included as an idea of its own:

We lack a good representation of test status. tinderbox.libreoffice.org is only
very basic and overloads one view by trying to show way too much at once:
- e.g. http://tinderbox.libreoffice.org/MASTER/status.html shows:
    - when something broke
    - where (on which machine) something broke
      (which already creates a ~useless table a full screen wide on
      my 30" 21:9 screen at default zoom levels -- quite an "achievement")
  however it doesn't show:
    - _what_ (which test or module) broke

The "what" is the most important information for a developer checking if he
broke something, followed by the "when", while the "on which machine" is
less relevant in most cases.

If we really want to value tests, we should be able to present a view on our
automated testing that shows what broke first -- and then allows investigating
the when and the where from there. Take the "Test Statistics Grid" at the end
of:

 https://jenkins.qa.ubuntu.com/view/vivid/view/AutoPkgTest/

as a starting point, showing e.g. the status and the stability of each and
every CppunitTest_, JunitTest_ and PythonTest_ individually.

Best,

Bjoern


[1] 
http://nabble.documentfoundation.org/TDF-Grant-Request-Proposal-LibreOffice-project-dashboard-quot-All-about-LibreOffice-quot-td4151652.html
_______________________________________________
LibreOffice mailing list
LibreOffice@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/libreoffice