Reply on:

How are we planning to test this?  We have seen bugs in obscure web
sites which use the name of a new DOM property, for example, but it
seems to me that there is no easy way for somebody to verify that
adding such a property doesn't break any popular website, since
sometimes the bug needs special interactions with the website to be
triggered. 

Response:

You'd first crawl thousands of sites to generate statistics on where the 
property is currently used in a prefixed form on the web (the A-Team, by the 
way, is working on a tool for this). Next, you would prioritize the resulting 
list of sites using that prefix based on factors such as site popularity 
(using Alexa data), frequency of prefix use, and so on. Then you would select 
a small subset of sites and do an exploratory test of those sites. If you 
immediately notice general problems with a site, you have likely found a web 
compatibility problem. Knowing about the problem proactively has advantages: 
you know to double-check the implementation and, more importantly, you know 
when outreach is needed and at what level.
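
To make that concrete, here's a rough sketch of what the prioritization step 
could look like. This is purely illustrative and not the A-Team tool; the 
property name, input format, and scoring formula are all made up:

# Hypothetical sketch: rank crawled sites by how heavily they use a
# prefixed property, weighted by site popularity. The property name,
# input structures, and scoring formula are all invented.
import re
from collections import Counter

PREFIXED = re.compile(r'\bmozRequestAnimationFrame\b')  # example property

def rank_sites(pages, alexa_rank):
    """pages: site -> list of already-crawled page sources.
    alexa_rank: site -> Alexa rank (1 = most popular)."""
    hits = Counter()
    for site, sources in pages.items():
        for src in sources:
            hits[site] += len(PREFIXED.findall(src))
    # More prefix uses and higher popularity float a site to the top.
    def score(site):
        return hits[site] / alexa_rank.get(site, 1_000_000)
    return sorted((s for s in hits if hits[s]), key=score, reverse=True)

Sites at the top of that list would be the first candidates for the 
exploratory pass.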

Reply on:

I'm not quite sure why that would be useful.  If we believe that doing
blind verification is not helpful, why is doing that on a subset of bugs
fixed in a given release better? 

Response:

Probably because there are bugs that don't get flagged for testing that 
should. It can also be useful for a component owner to track what has landed, 
so they know what testing, if any, they need to follow up with. The difference 
is that I'm not suggesting we generally go with "go verify that this works," 
but instead "go test these scenarios that would likely be useful to 
investigate as a result of this change."

Reply on:

I think QA should do some exploratory testing of major new features as
time allows, but just verifying existing test cases that often run
automatically anyhow isn't a good use of time, I guess. 

Response:

Right, we should primarily focus effort on areas not covered by automation.

Reply on:

We (mostly) send Gecko developers to participate in Web
standardization. Opera (mostly) sends QA people. This results in Opera
QA having a very deep knowledge and understanding of Web standards.
(I'm not suggesting that we should stop sending Gecko developers to
participate. I think increasing QA attention on spec development could
be beneficial to us.) It seems (I'm making inferences from outside
Opera; I don't really know what's going on inside Opera) that when a
new Web platform feature is being added to Presto, Opera assigns the
QA person who has paid close attention to the standardization of the
feature to write test cases for the feature. This way, the cases that
get tested aren't limited by the imagination of the person who writes
the implementation.

So instead of verifying that patches no longer make bugs reproduce
with the steps to reproduce provided by the bug reporter, I think QA
time would be better used by getting to know a spec, writing
Mochitest-independent cross-browser test cases suitable for
contribution to an official test suite for the spec, running not only
Firefox but also other browsers against the tests and filing spec bugs
or Firefox bugs as appropriate (with the test case imported from the
official test suite to our test suite). (It's important to
sanity-check the spec by seeing what other browsers do. It would be
harmful for Firefox to change to match the spec if the spec is
fictional and Firefox already matches the other browsers.) 

Response:

I'd generally agree these are all good ideas. I've recently been exploring 
some of them by getting involved early with the specification and development 
work for getUserMedia and other WebRTC-related parts. Providing test results 
and general feedback immediately, in the early phases of development and the 
spec process, already seems to be useful: it gives insight into early 
problems, especially in unknown areas that weren't identified when the spec 
was first drafted. I'll keep these ideas in mind as I continue to work with 
the WebRTC folks.
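
For what it's worth, the "run the same tests in other browsers" part can be 
scripted. Here's a minimal sketch using Selenium WebDriver; the test URL and 
the convention that the page reports PASS/FAIL in an element with id="result" 
are assumptions for illustration, not an existing harness:

# Hypothetical sketch: run one shared, framework-independent test page
# in several browsers and compare results. The URL and the
# PASS/FAIL-in-#result convention are invented for illustration.
from selenium import webdriver
from selenium.webdriver.common.by import By

TEST_URL = "http://example.org/tests/getusermedia/basic.html"  # made up

def run_everywhere():
    results = {}
    for name, factory in [("firefox", webdriver.Firefox),
                          ("chrome", webdriver.Chrome)]:
        driver = factory()
        try:
            driver.get(TEST_URL)
            results[name] = driver.find_element(By.ID, "result").text
        finally:
            driver.quit()
    return results

# If the browsers agree with each other but disagree with the spec,
# that's the signal to sanity-check the spec rather than "fix" Firefox.
print(run_everywhere())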

Reply on:

Verifications are important. I've seen way too many fixes go in across
my career that didn't really fix the bug to think that we should take
the workflow out completely, and I would never call them "blind" if
they're against a valid testcase. They might be naive, they might be
shallow, but they aren't blind. That's a misnomer. 

Response:

Right, we shouldn't take the workflow out entirely. I think the general 
suggestion is to focus our efforts on the "right" bugs, the ones where we are 
bound to dig in and find problems. The reality is that we can't verify every 
single bug in a deep dive (there simply isn't enough time to do so). The 
"blind verifications" point being made above was more that I don't think it's 
a good idea to do a large number of verifications as a simple point-and-click 
operation on every single test case, as that's low-quality testing. We just 
need to be mindful of the right areas to focus on and expend energy there, 
which is why I suggest flagging only the smaller subset of bugs that really 
deserve the focus.

In the past, I think the process we've used on desktop was that 
tracking-firefoxN bugs were looked at by QA for possible verification. I do 
generally think this is a useful metric, but it may not always equate to the 
right bugs to look at. We probably need to figure that out and find a better 
way to manage it.
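
As a starting point, pulling that candidate list could be automated against 
the Bugzilla REST API. A sketch of the idea follows; the tracking-flag field 
name and the grouping are assumptions on my part, not an established process:

# Hypothetical sketch: pull the set of fixed bugs tracked for a release
# so component owners can pick the subset worth deep verification.
# The tracking-flag field name below is an assumed example.
import requests

BUGZILLA = "https://bugzilla.mozilla.org/rest/bug"

def tracked_fixed_bugs(release_flag="cf_tracking_firefox15"):
    params = {
        "resolution": "FIXED",
        "f1": release_flag, "o1": "equals", "v1": "+",  # assumed field
        "include_fields": "id,summary,component",
    }
    bugs = requests.get(BUGZILLA, params=params).json().get("bugs", [])
    # Group by component so owners can triage their own areas.
    by_component = {}
    for bug in bugs:
        by_component.setdefault(bug["component"], []).append(bug)
    return by_component

A component owner could then skim their component's list and flag only the 
bugs that really deserve a deep dive.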