Hi Everyone,

Let's try posting this again. Disregard the comments I put on the other thread.

I think this is a good time to rethink our process for verifying whether something is fixed. A better approach might be something similar to what security reviews do: they flag certain areas as needing attention from a security perspective and do "deep dives" to find out the risks of those areas. In a QA context, we could do something similar by reusing that process with the qawanted keyword. Some suggestions to build on this:

 * Continue the existing process: when a try build is created and
   needs QA testing, use the qawanted keyword to call out for
   explicit QA help.
 * Add a new process for when more testing is needed after a bug has
   landed. We could reuse the qawanted keyword to call attention to
   the bug and take it from there. For our QA process, we might want
   to file bugs in our component (Mozilla QA) to track any
   specialized testing requests.
 * Throw out the general concept of blind verifications. We might
   even remove the verified state from the bug workflow to enforce
   this, but that may need further discussion before we can agree
   on it.

Note that not everything worth testing is a matter of verifying that "bug X works as expected." There are other cases that may warrant calling attention to certain bugs for explicit testing after they land. Here are two examples:

 * Unprefixing a DOM or CSS property - It's useful to call out for
   explicit testing here, not necessarily to verify that the
   property works (we have a lot of automation that usually covers
   that), but to assess web compatibility concerns and find out
   what risks unprefixing the property carries.
 * Flagging a risky patch that needs confirmation - This is where a
   deep dive is useful: formulate a test plan and execute it.

Note - I still think it's useful for a QA driver to look through the set of bugs fixed for a given Firefox release; the process would just be re-purposed to flag bugs that need more extensive testing for a specific purpose (e.g. web compatibility).

Thoughts?

Sincerely,
Jason Smith

Desktop QA Engineer
Mozilla Corporation
https://quality.mozilla.org/

On 8/10/2012 1:41 PM, Anthony Hughes wrote:
Sorry, this should have gone to dev-platform...

----- Original Message -----
From: "Anthony Hughes" <ahug...@mozilla.com>
To: "dev-planning" <dev-plann...@lists.mozilla.org>
Cc: dev-qual...@lists.mozilla.org
Sent: Friday, August 10, 2012 1:40:15 PM
Subject: Fwd: Verification Culture

I started this discussion on dev-quality[1] but there has been some suggestion that the 
dev-planning list is more appropriate, so I'm moving the discussion here. There have been a 
couple of great responses to the dev-quality thread so far, but I won't repost them here 
verbatim. The general consensus in the feedback was that QA spending a lot of time simply 
verifying the immediate test conditions (or test case) of a bug is not a valuable practice. 
It was suggested that it would be a far more valuable use of QA's time, and of greater 
benefit to the quality of our product, if we pulled out a subset of "critical" 
issues and ran deep-diving sprints around those issues to touch on edge cases.

I, for one, support this idea in principle. I'd like to get various 
people's perspectives on this issue (not just QA's).

Thank you to David Baron, Ehsan Akhgari, Jason Smith, and Boris Zbarsky for the 
feedback that was the catalyst for me starting this discussion. For reference, 
it might help to have a look at my dev-planning post[2] which spawned the 
dev-quality post, which in turn has spawned the post you are now reading.

Anthony Hughes
Mozilla Quality Engineer

1. https://groups.google.com/forum/#!topic/mozilla.dev.quality/zpK52mDE2Jg
2. https://groups.google.com/forum/#!topic/mozilla.dev.planning/15TSrCbakEc

----- Forwarded Message -----
From: "Anthony Hughes" <ahug...@mozilla.com>
To: dev-qual...@lists.mozilla.org
Sent: Thursday, August 9, 2012 5:14:02 PM
Subject: Verification Culture

Today, David Baron brought to my attention an old bugzilla comment[1] about 
whether putting so much emphasis on bug fix verification is a useful 
practice. Having read the comment for the first time, it really got me 
wondering whether our cultural desire to verify so many bug fixes before 
releasing Firefox to the public is a prudent one.

Does verifying as many fixes as we do really raise the quality bar for Firefox?
Could the time we spend be better used elsewhere?

If I were to ballpark it, I'd guess that nearly half of the testing we do 
during Beta is bug fix verification. Now sure, we'll always want to have 
some level of verification (making sure security fixes and critical regressions 
are *truly* fixed is important); but maybe, just maybe, we're being a little 
too purist in our approach.

What do you think?

Anthony Hughes
Quality Engineer
Mozilla Corporation

1. https://bugzilla.mozilla.org/show_bug.cgi?id=172191#c16

_______________________________________________
dev-quality mailing list
dev-qual...@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-quality
_______________________________________________
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform
