On 2012-08-10 20:41:30 +0000, Anthony Hughes said:

I, for one, support this idea in the hypothetical form. I'd like to get various people's perspectives on this issue (not just QA).


Like Robert says elsewhere, manually running a testcase that's already in automation doesn't make a huge amount of sense.

I think manually verifying a fix that isn't covered by automation does make some amount of sense. It's also probably the first thing you want to do before doing any additional planning around the verification.

So my take is this:

Hypothetically, we have 300 bugs. Right now, we pick out the 200 we think are testable in the time allotted, spend all of that time on them, and get maybe 150 done.

Instead I'd define (formally or otherwise) three tiers:

1) Critical fixes. These need verification + additional testing.
2) Untested uncritical fixes. These have no automated tests. These should get verification if time allows.
3) Tested uncritical fixes. These have automated tests and do not need verification.

(There's an invisible fourth tier: bugs that we can't really test around because they're too internal. But those are the 100 we triaged out above.)

In our hypothetical case, what that means is that of the 200 we decided were testable, maybe 20 become tier 1. Give them whatever time is needed to do a short but comprehensive test plan around them.

Then give the balance of the time to tier 2. But don't block out time within the release for tier 2. If tier 1 takes everything, so be it.

Tier 3 should be ignored. They're already being tested to the point we care about for that release.
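
To make the tiering concrete, here's a minimal sketch in Python of how the sort might work. The is_critical and has_automated_test flags are hypothetical fields I'm assuming triage could give us, not anything that exists today:

    def tier_for(bug):
        """Return the verification tier for a bug that triage deemed testable."""
        if bug["is_critical"]:
            return 1  # verification plus a short, comprehensive test plan
        if not bug["has_automated_test"]:
            return 2  # manual verification if time allows; never blocks the release
        return 3  # covered by automation; ignore for this release

    # Hypothetical triage output, not real bug numbers.
    bugs = [
        {"id": 101, "is_critical": True,  "has_automated_test": False},
        {"id": 102, "is_critical": False, "has_automated_test": False},
        {"id": 103, "is_critical": False, "has_automated_test": True},
    ]

    for bug in bugs:
        print(bug["id"], "-> tier", tier_for(bug))

The point is that the sort itself is mechanical once the two flags are known; the judgment is all in setting them during triage.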

Re: necessity of verification workflow,

Verifications are important. I've seen way too many fixes go in across my career that didn't really fix the bug to think that we should take the workflow out completely, and I would never call them "blind" if they're against a valid testcase. They might be naive, they might be shallow, but they aren't blind. That's a misnomer.

The mistake is in prioritizing them above primary testing, and in binding them to a time deadline such that we prioritize them that way. Closing bugs is part of good bug maintenance. It's nice to know for sure that you don't have to look at a bug ever again and, unfortunately, "resolved fixed" doesn't mean that.

But it's not important that you know that -immediately- for all bugs. It's more of an ongoing task to make sure our focus is in the right place. We should not feel the pressure to do verification by a release deadline, not for the average bug.

However, we should, if we can find resources to do so, slowly chip away at the entire "resolved" base to eventually verify that they're resolved, either by a manual rerun or, better, by checking an automated result of the test that went in. First pass == verified, bug closed.
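
As a rough sketch of what that chipping-away could look like, here's a small Python script that lists the verification queue. It leans on Bugzilla's REST API; treat the exact endpoint, product, and fields as my assumptions rather than a worked-out tool:

    # List RESOLVED FIXED bugs so they can be picked off the verification queue.
    # Endpoint and parameters are assumptions about Bugzilla's REST API and
    # would need checking against the live service.
    import requests

    URL = "https://bugzilla.mozilla.org/rest/bug"
    params = {
        "product": "Core",            # hypothetical scope
        "bug_status": "RESOLVED",
        "resolution": "FIXED",
        "include_fields": "id,summary",
        "limit": 50,
    }

    resp = requests.get(URL, params=params, timeout=30)
    resp.raise_for_status()
    for bug in resp.json().get("bugs", []):
        print(bug["id"], bug["summary"])

From there, a passing rerun (manual or automated) means verified and closed, exactly as described above.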

To that end, bug verifications are something we should be throwing at community members who want to help but don't have other special skills, and at people who are onboarding. Bug verifications are a great way to ramp up on a section of the product and to give back value to the project at the same time.

In the tiered plan described up above, I'd have community and newbies helping out at tier 2 in parallel with experienced QA doing tier 1.

Re: QA should be expanding automation against spec instead (per Henri's reply),

We're getting there, in terms of getting more involved with this. I'm leading a project to get QA more involved with WebAPI testing, particularly at the automated level. But the assumption that everyone in the QA community has or will have that skillset is a tall and potentially exclusionary one.

Further, there's value in both activities; manual verification covers things that can't be easily automated and, for critical bugs, gives you results much sooner than automation typically does. Automation has the greater long-term value, of course. But there's a balance.

Geo

