On Sunday, March 16, 2014 20:30:15 Daniel Vetter wrote:
> On Sat, Mar 15, 2014 at 07:39:45PM -0700, Dylan Baker wrote:
> > On Saturday, March 15, 2014 08:41:15 AM Ilia Mirkin wrote:
> > > On Sat, Mar 15, 2014 at 8:29 AM, Daniel Vetter <[email protected]>
> > > wrote:
> > > > On Fri, Mar 14, 2014 at 07:41:04PM -0700, Dylan Baker wrote:
> > > > > [snip]
> > > > >
> > > > > > I'll throw a patch at the end of the series, do you want me to
> > > > > > send
> > > > >
> > > > > I'm gonna take it back, sorry. I don't know that dmesg-warn should
> > > > > be worse than warn (same for fail), since pass -> dmesg-warn,
> > > > > warn -> dmesg-fail, and fail -> dmesg-fail. Personally I was never
> > > > > a fan of having special dmesg- statuses; I feel that a fail is a
> > > > > fail and a warn is a warn, but I'm not sure that change is correct.
> > > >
> > > > The current ordering seems wrong to me, e.g. if you have a failing
> > > > test and fix up some dmesg noise you now have a regression.
> > >
> > > And if you add dmesg noise, you have a fix :) printk(), here I come!
> > >
> > > On a mildly related note, am I the only one who thinks it's weird that
> > > transitions to/from (skip, notrun) are considered fixes/regressions?
> > >
> > >   -ilia
> >
> > I agree. That was changed by someone from my original implementation,
> > but obviously it was changed, so at least one person feels the current
> > behavior is correct.
>
> As mentioned, such transitions make sense for the kernel, where we never
> break ABI or disable old features (well, until the last user/hw
> disappears, at least). Hence fail->skip is a regression (probably the
> kernel broke a feature flag) and fail->notrun is a regression (probably
> the testcase is broken and dropped a subtest somehow).
>
> fail->notrun has a bit of a downside when doing a massive testcase
> renaming for better consistency, but thus far we've only had one case
> where we've done a bit of large-scale renaming in the last two years.
> -Daniel
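To make the ordering under discussion concrete, here is a rough sketch of
how a severity ordering turns status transitions into fixes and
regressions. This is an illustration only, not the actual piglit code, and
the particular ordering (especially where the dmesg-* statuses and
skip/notrun sit) is exactly what is in dispute:

    # Illustration only: an assumed ordering, not piglit's real status
    # code. Lower index = better status.
    STATUS_ORDER = ["pass", "dmesg-warn", "warn", "dmesg-fail", "fail",
                    "skip", "notrun"]

    def classify(old, new):
        """Classify a status transition by comparing positions in the
        severity ordering."""
        delta = STATUS_ORDER.index(new) - STATUS_ORDER.index(old)
        if delta > 0:
            return "regression"
        if delta < 0:
            return "fix"
        return "no change"

    print(classify("dmesg-fail", "fail"))  # "regression": Daniel's point,
                                           # fixing dmesg noise on a
                                           # failing test
    print(classify("fail", "dmesg-fail"))  # "fix": Ilia's printk() joke
    print(classify("fail", "skip"))        # "regression": skip/notrun are
                                           # ordered worst of all here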
I don't want to beat a dead horse, so here's the argument from those of us
who don't like the current behavior, and then I'll let it be:

The big problem is that if your workflow is to do a baseline run and then
do smaller focused runs of "oh, I regressed tests A, B, and C; just run
those until I fix them, then run the whole thing again", the
regressions/fixes pages become useless: they're just full of "Not Run" and
"Skip" transitions. I know that this is a pretty common workflow for
hardware bring-up.

I'm working on some patches that add pages to summary for (Skip||NotRun ->
Any) transitions and the converse, and remove those transitions from the
fixes/regressions pages (rough sketch in the P.S. below). I'm hoping to
send them out tomorrow. They should give useful data to those who really
want to see notrun/skip changes, while letting those who run subsets of
the test suite still get useful information from the fixes/regressions
pages.

- Dylan
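P.S. Concretely, the filtering would be something like this; the names are
hypothetical, not the actual summary code:

    SKIP_LIKE = {"skip", "notrun"}

    def partition_changes(changes):
        """Split (test, old, new) status transitions into the real
        fixes/regressions and the new Skip/NotRun pages."""
        fixes_regressions = []
        skip_notrun_pages = []
        for test, old, new in changes:
            if old in SKIP_LIKE or new in SKIP_LIKE:
                skip_notrun_pages.append((test, old, new))
            else:
                fixes_regressions.append((test, old, new))
        return fixes_regressions, skip_notrun_pages

    # A focused re-run against a full baseline then reports only genuine
    # changes on the fixes/regressions pages:
    changes = [("tex-a", "fail", "pass"),    # real fix
               ("tex-b", "pass", "notrun"),  # subset run: goes to new page
               ("tex-c", "skip", "pass")]    # goes to new page as well
    real, skippy = partition_changes(changes)
    print(real)    # [('tex-a', 'fail', 'pass')]
    print(skippy)  # [('tex-b', 'pass', 'notrun'), ('tex-c', 'skip', 'pass')]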
