Re: WheelEvent of DOM Level 3 Events now landed

2012-08-13 Thread Neil

Masayuki Nakano wrote:


On 2012/08/13 4:57, Neil wrote:


it seems as if you can't make the wheel scroll more slowly any more?


Currently, yes. The reason for not supporting slower scrolling isn't a
technical one; it just needs some additional code. E.g., a 0.5px scroll
isn't supported by the current ESM.


Do you just ignore left-over fractions of pixels created, e.g., by
scrolling 37% faster (delta_multiplier = 137)?


--
Warning: May contain traces of nuts.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: WheelEvent of DOM Level 3 Events now landed

2012-08-13 Thread Masayuki Nakano

On 2012/08/13 17:32, Neil wrote:

Masayuki Nakano wrote:


On 2012/08/13 4:57, Neil wrote:


it seems as if you can't make the wheel scroll more slowly any more?


Currently, yes. The reason for not supporting slower scrolling isn't a
technical one; it just needs some additional code. E.g., a 0.5px scroll
isn't supported by the current ESM.


Do you just ignore left-over fractions of pixels created, e.g., by
scrolling 37% faster (delta_multiplier = 137)?


Yes.
http://hg.mozilla.org/mozilla-central/annotate/038266727ddc/content/events/src/nsEventStateManager.cpp#l2854
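
For concreteness, a minimal sketch of the behavior being described
(TypeScript for illustration only; the linked nsEventStateManager code is
the authoritative C++ implementation, and exact truncation vs. rounding is
an assumption here):

function applyDeltaMultiplier(pixelDelta: number, multiplierPercent: number): number {
  // e.g. a 3px wheel tick with delta_multiplier = 137 scales to 4.11px
  const scaled = pixelDelta * multiplierPercent / 100;
  // Keep only whole pixels; the leftover fraction is ignored, not accumulated.
  return Math.trunc(scaled);
}

console.log(applyDeltaMultiplier(3, 137)); // 4; the 0.11px remainder is dropped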

--
Masayuki Nakano 
Manager, Internationalization, Mozilla Japan.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Verification Culture

2012-08-13 Thread Henri Sivonen
On Fri, Aug 10, 2012 at 11:40 PM, Anthony Hughes  wrote:

I'm commenting only from the point of view of developing Web-exposed
features into Gecko. I don't have sufficient experience to comment on
QA practices as they relate to Firefox UI development, etc.

> Does verifying as many fixes as we do really raise the quality bar for 
> Firefox?

I think verifying that the bug reporter's stated steps to reproduce no
longer reproduce the bug is not a good use of QA time. While it is
possible for the developer to make a mistake, most of the time bugs
don't get marked FIXED without actually landing something, and
developers test their patches with the steps to reproduce that were
reported. Therefore, I would expect verification of the original steps
to reproduce to be very unlikely to result in a quality-improving action.

> Could the time we spend be better used elsewhere?

I think we have a lot to learn from Opera here.

When we develop Web-exposed features in Gecko, typically the test
cases that get landed together with the patch are written by the same
developer who wrote the patch and QA isn't involved at all. This means
that the testing of the code is limited by the imagination of the
person who wrote the code being tested. If the person who wrote the
code didn't think of handling an edge case in the code, (s)he probably
didn't think of the edge case enough to write a test for it.

We (mostly) send Gecko developers to participate in Web
standardization. Opera (mostly) sends QA people. This results in Opera
QA having a very deep knowledge and understanding of Web standards.
(I'm not suggesting that we should stop sending Gecko developers to
participate. I think increasing QA attention on spec development could
be beneficial to us.) It seems (I'm making inferences from outside
Opera; I don't really know what's going on inside Opera) that when a
new Web platform feature is being added to Presto, Opera assigns the
QA person who has paid close attention to the standardization of the
feature to write test cases for the feature. This way, the cases that
get tested aren't limited by the imagination of the person who writes
the implementation.

So instead of verifying that patches no longer make bugs reproduce
with the steps to reproduce provided by the bug reporter, I think QA
time would be better used by getting to know a spec, writing
Mochitest-independent cross-browser test cases suitable for
contribution to an official test suite for the spec, running not only
Firefox but also other browsers against the tests and filing spec bugs
or Firefox bugs as appropriate (with the test case imported from the
official test suite to our test suite). (It's important to
sanity-check the spec by seeing what other browsers do. It would be
harmful for Firefox to change to match the spec if the spec is
fictional and Firefox already matches the other browsers.)
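
For concreteness, a minimal sketch of such a framework-independent test
case (assuming the W3C testharness.js API, which official test suites
accept; the attribute under test is an arbitrary illustration, not a bug
from this thread):

// Assumed testharness.js globals, provided by the official harness script.
declare function test(fn: () => void, name: string): void;
declare function assert_equals(actual: unknown, expected: unknown, description?: string): void;

// No Mochitest glue: this runs unchanged in Firefox, Opera, Chrome, etc.,
// so the same file can be contributed upstream and imported into our suite.
test(() => {
  const div = document.createElement("div");
  assert_equals(typeof div.hidden, "boolean", "hidden should reflect as a boolean");
}, "HTMLElement.hidden is a boolean IDL attribute");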

-- 
Henri Sivonen
hsivo...@iki.fi
http://hsivonen.iki.fi/
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Trying to build B2G for nexus s 4g

2012-08-13 Thread bmwracer0
Does 3g/calls/sms work on the ns4g yet?

On Tuesday, August 7, 2012 8:42:59 AM UTC-4, mohamme...@gmail.com wrote:
> Hi Chris,
> 
> I would be able to help you out. Please link me to the build that you 
> compiled for ns4g.
> 
> Thanks
> Wajee
> 
> On Tuesday, April 3, 2012 8:08:31 PM UTC+5:30, warriorforGod wrote:
> 
> > I am able to build with the make-config-nexuss4g, make gonk, and
> > make.  I can then use make-flashonly-crespo4g to load B2G on my
> > phone.  It boots, however the touchscreen doesn't work.  Here is a
> > logcat of the bootup process.  Anybody got any ideas?
> > 
> >  1. - waiting for device -
> >  2. - beginning of /dev/log/main
> >  3. I/DEBUG   (   73): debuggerd: Mar 29 2012 15:47:50
> >  4. - beginning of /dev/log/system
> >  5. I/Vold(   71): Vold 2.1 (the revenge) firing up
> >  6. I/Netd(   72): Netd 1.0 starting
> >  7. D/Vold(   71): Volume sdcard state changing -1 (Initializing) -> 0 (No-Media)
> >  8. D/Vold(   71): Volume sdcard state changing 0 (No-Media) -> 2 (Pending)
> >  9. D/Vold(   71): Volume sdcard state changing 2 (Pending) -> 1 (Idle-Unmounted)
> > 10. I/(   78): ServiceManager: 0xad50
> > 11. D/AudioHardwareInterface(   78): setMode(NORMAL)
> > 12. I/CameraService(   78): CameraService started (pid=78)
> > 13. I/AudioFlinger(   78): AudioFlinger's thread 0xcb08 ready to run
> > 14. D/AudioHardware(   78): AudioStreamOutALSA::setParameters() routing=2
> > 15. D/AudioHardware(   78): ### setVoiceVolume_l
> > 16. E/profiler(   77): Registering start/stop signal
> > 17. E/AKMD2   (   77): libkam : Unable to load settings file, using defaults.
> > 18. I/ServiceManager(   77): Waiting for service batteryinfo...
> > 19. I/Gonk(   77): Socket open for RIL
> > 20. I/Gecko   (   77): -*- WifiWorker component: Wifi starting
> > 21. D/FramebufferNativeWindow(   77): mNumBuffers = 2
> > 22. I/Gecko   (   77): Logging GL tracing output to (null)/firefox.trace
> > 23. I/Gecko   (   77): Attempting load of /data/local/egltrace.so
> > 24. I/Gecko   (   77): Attempting load of libEGL.so
> > 25. D/libEGL  (   77): loaded /system/lib/egl/libGLES_android.so
> > 26. D/libEGL  (   77): loaded /vendor/lib/egl/libEGL_POWERVR_SGX540_120.so
> > 27. D/libEGL  (   77): loaded /vendor/lib/egl/libGLESv1_CM_POWERVR_SGX540_120.so
> > 28. D/libEGL  (   77): loaded /vendor/lib/egl/libGLESv2_POWERVR_SGX540_120.so
> > 29. I/Gecko   (   77): Extensions: EGL_KHR_image EGL_KHR_image_base
> >     EGL_KHR_image_pixmap EGL_ANDROID_image_native_buffer
> >     EGL_ANDROID_swap_rectangle  0x45
> > 30. I/Gecko   (   77): Extensions length: 113
> > 31. D/EventHub(   77): No input device configuration file found for
> >     device 'compass'.
> > 32. D/EventHub(   77): No input device configuration file found for
> >     device 'cypress-touchkey'.
> > 33. E/Keyboard(   77): Could not determine key map for device
> >     'cypress-touchkey' and no default key maps were found!
> > 34. I/EventHub(   77): New device: id=2, fd=45, path='/dev/input/event5',
> >     name='cypress-touchkey', classes=0x1, configuration='',
> >     keyLayout='/system/usr/keylayout/cypress-touchkey.kl',
> >     keyCharacterMap='', builtinKeyboard=false
> > 35. D/EventHub(   77): No input device configuration file found for
> >     device 'lightsensor-level'.
> > 36. D/EventHub(   77): No input device configuration file found for
> >     device 'proximity'.
> > 37. D/EventHub(   77): No input device configuration file found for
> >     device 'herring-keypad'.
> > 38. E/Keyboard(   77): Could not determine key map for device
> >     'herring-keypad' and no default key maps were found!
> > 39. I/EventHub(   77): New device: id=5, fd=46, path='/dev/input/event2',
> >     name='herring-keypad', classes=0x1, configuration='',
> >     keyLayout='/system/usr/keylayout/herring-keypad.kl',
> >     keyCharacterMap='', builtinKeyboard=false
> > 40. D/EventHub(   77): No input device configuration file found for
> >     device 'gyro'.
> > 41. I/EventHub(

Re: Verification Culture

2012-08-13 Thread anthony . s . hughes
On Friday, August 10, 2012 1:41:30 PM UTC-7, Anthony Hughes wrote:
> Sorry, this should have gone to dev-platform...
> 
> - Original Message -
> From: "Anthony Hughes" 
> To: "dev-planning" 
> Cc: dev-qual...@lists.mozilla.org
> Sent: Friday, August 10, 2012 1:40:15 PM
> Subject: Fwd: Verification Culture
> 
> I started this discussion on dev-quality[1] but there has been some 
> suggestion that the dev-planning list is more appropriate, so I'm moving the 
> discussion here. There have been a couple of great responses to the 
> dev-quality thread so far, but I won't repost them here verbatim. The general 
> consensus in the feedback was that QA spending a lot of time simply verifying 
> the immediate test conditions (or test case) is not a valuable practice. It 
> was suggested that it would be a far more valuable use of QA's time, and of 
> greater benefit to the quality of our product, if we pulled out a subset of 
> "critical" issues and ran deep-diving sprints around those issues to touch on 
> edge cases.
> 
> I, for one, support this idea in the hypothetical form. I'd like to get 
> various peoples' perspectives on this issue (not just QA).
> 
> Thank you to David Baron, Ehsan Akhgari, Jason Smith, and Boris Zbarsky for 
> the feedback that was the catalyst for me starting this discussion. For 
> reference, it might help to have a look at my dev-planning post[2] which 
> spawned the dev-quality post, which in turn has spawned the post you are now 
> reading.
> 
> Anthony Hughes
> Mozilla Quality Engineer
> 
> 1. https://groups.google.com/forum/#!topic/mozilla.dev.quality/zpK52mDE2Jg
> 2. https://groups.google.com/forum/#!topic/mozilla.dev.planning/15TSrCbakEc
> 
> - Forwarded Message -
> From: "Anthony Hughes" 
> To: dev-qual...@lists.mozilla.org
> Sent: Thursday, August 9, 2012 5:14:02 PM
> Subject: Verification Culture
> 
> Today, David Baron brought to my attention an old bugzilla comment[1] about 
> whether putting as much emphasis on bug fix verification was a useful 
> practice. Having read the comment for the first time, it really got me 
> wondering whether our cultural desire to verify so many bug fixes before 
> releasing Firefox to the public was a prudent one.
> 
> Does verifying as many fixes as we do really raise the quality bar for 
> Firefox?
> Could the time we spend be better used elsewhere?
> 
> If I were to ballpark it, I'd guess that nearly half of the testing we do 
> during Beta is for bug fix verifications. Now sure, we'll always want to have 
> some level of verification (making sure security fixes and critical 
> regressions are *truly* fixed is important); but maybe, just maybe, we're 
> being a little too purist in our approach.
> 
> What do you think?
> 
> Anthony Hughes
> Quality Engineer
> Mozilla Corporation
> 
> 1. https://bugzilla.mozilla.org/show_bug.cgi?id=172191#c16
> 
> ___
> dev-quality mailing list
> dev-qual...@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-quality

I'm seeing a lot of good ideas and perspectives here. Thank you everyone who 
has commented so far. Keep it up.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Verification Culture

2012-08-13 Thread anthony . s . hughes
On Sunday, August 12, 2012 5:51:43 AM UTC-7, Robert Kaiser wrote:
> Jason Smith schrieb:
> > Note - I still think it's useful for a QA driver to look through a set
> > of bugs fixed for a certain Firefox release, it's just the process would
> > be re-purposed for flagging a bug for needing more extensive testing for
> > X purpose (e.g. web compatibility).
> 
> I think QA should do some exploratory testing of major new features as
> time allows, but just verifying existing test cases that often are
> running automatically anyhow isn't a good use of time, I guess.
> 
> Robert Kaiser

We do exploratory as well as test-case-based testing for all features scoped 
for a given release. The real gap in our existing strategy lies in mostly 
doing blind verification of fixed bugs with the status flag set.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Verification Culture

2012-08-13 Thread Geo Mealer

On 2012-08-10 20:41:30 +0000, Anthony Hughes said:

I, for one, support this idea in the hypothetical form. I'd like to get 
various peoples' perspectives on this issue (not just QA).



Like Robert says elsewhere, manually running a testcase that's already 
in automation doesn't make a huge amount of sense.


I think running a manual verification that isn't in automation does 
make some amount of sense. It's also probably the first thing you want 
to do before doing additional planning around the verification.


So my take is this:

Hypothetically, we have 300 bugs. Right now, we pick out 200 we think 
are testable in the time allotted, spend all of our allotted time on them, 
and get maybe 150 done.


Instead I'd define (formally or otherwise) three tiers:

1) Critical fixes. These need verification + additional testing.
2) Untested uncritical fixes. These have no automated tests. These 
should get verification if time allows.
3) Tested critical fixes: These have automated tests and do not need 
verification.


(There's an invisible fourth tier: bugs that we can't really test 
around because they're too internal. But those are the 100 we triaged 
out above.)


In our hypothetical case, what that means is that of the 200 we decided 
were testable, maybe 20 become tier 1. Give them whatever time is 
needed to do a short but comprehensive test plan around them.


Then give the balance of the time to tier 2. But don't block time 
within release for tier 2. If tier 1 takes everything, so be it.


Tier 3 should be ignored. They're already being tested to the point we 
care about for that release.


Re: necessity of verification workflow,

Verifications are important. I've seen way too many fixes go in across 
my career that didn't really fix the bug to think that we should take 
the workflow out completely, and I would never call them "blind" if 
they're against a valid testcase. They might be naive, they might be 
shallow, but they aren't blind. That's a misnomer.


The mistake is in prioritizing them above primary testing, and in 
binding them to a time deadline such that we prioritize them that way. 
Closing bugs is part of good bug maintenance. It's nice to know for 
sure that you don't have to look at it ever again and, unfortunately, 
"resolved fixed" doesn't mean that.


But it's not important that you know that -immediately- for all bugs. 
It's more of an ongoing task to make sure our focus is in the right 
place. We should not feel the pressure to do verification by a release 
deadline, not for the average bug.


However, we should, if we can find resources to do so, slowly chip away 
at the entire "resolved" base to eventually verify that they're 
resolved, either by a manual rerun or, better, by checking an automated 
result of the test that went in. First pass == verified, bug closed.


To that end, bug verifications are something we should be throwing at 
community members who want to help but don't have other special skills, 
and at people who are onboarding. Bug verifications are a great way to 
ramp up on a section of the product and to give back value to the 
project at the same time.


In the tiered plan described up above, I'd have community and newbies 
helping out at tier 2 in parallel with experienced QA doing tier 1.


Re: QA should be expanding automation against spec instead (per Henri's reply),

We're getting there, in terms of getting more involved with this. I'm 
leading a project to get QA more involved with WebAPI testing, 
particularly at the automated level. But the assumption that everyone 
in the QA community has or will have that skillset is a tall and 
potentially exclusionary one.


Further, there's value in both activities; manual verification covers 
things that can't be easily automated and, for critical bugs, gives you 
results much sooner than automation typically does. Automation has the 
greater long-term value, of course. But there's a balance.


Geo

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Verification Culture

2012-08-13 Thread Geo Mealer

On 2012-08-13 21:08:04 +0000, Geo Mealer said:


Instead I'd define (formally or otherwise) three tiers:

1) Critical fixes. These need verification + additional testing.
2) Untested uncritical fixes. These have no automated tests. These 
should get verification if time allows.
3) Tested critical fixes: These have automated tests and do not need 
verification.



Correcting my typo,

3) Tested uncritical fixes: …

Geo

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Verification Culture

2012-08-13 Thread Justin Dolske

On 8/12/12 5:51 AM, Robert Kaiser wrote:


I think QA should do some exploratory testing of major new features as
time allows, but just verifying existing test cases that often are
running automatically anyhow isn't a good use of time, I guess.


This is something that I think could be very helpful.

New features -- and invasive or broad changes to existing features -- 
are prime targets for focused testing. I would posit that regressions 
are more frequent in related/interdependent code that's at least a 
slight distance from the specific bug being fixed.


Justin
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Verification Culture

2012-08-13 Thread Jason Smith
Reply on:

How are we planning to test this?  We have seen bugs in obscure web
sites which use the name of a new DOM property for example, but it seems
to me like there is no easy way for somebody to verify that adding such
a property doesn't break any popular website, even, since sometimes the
bug needs special interactions with the website to be triggered. 

Response:

You'd first crawl thousands of sites to generate statistics on where the 
property is currently used in a prefixed form on the web (the a-team, btw, is 
working on a tool for this). Next, you would prioritize the resulting list of 
sites using that prefix based on various factors such as site popularity 
(using Alexa data), frequency of prefix use, etc. Then, you would select a 
small subset of sites and do an exploratory test of those sites. If you notice 
immediately that there are general problems with a site, then you have likely 
found a web compatibility problem. Knowing about the problem proactively gives 
you advantages such as knowing to double-check the implementation and, more 
importantly, knowing when and what level of outreach is needed.
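
Sketched in code, the prioritization step might look something like this 
(the field names and scoring formula are illustrative assumptions on my 
part, not the a-team tool's actual design):

interface SiteUsage {
  url: string;
  alexaRank: number;   // lower rank = more popular site
  prefixHits: number;  // occurrences of the prefixed property found by the crawler
}

// Weight frequency of prefix use against site popularity, then keep the
// top N candidates for exploratory testing.
function prioritize(sites: SiteUsage[], topN: number): SiteUsage[] {
  const score = (s: SiteUsage) => s.prefixHits / Math.log2(s.alexaRank + 2);
  return [...sites].sort((a, b) => score(b) - score(a)).slice(0, topN);
}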

Reply on:

I'm not quite sure why that would be useful.  If we believe that doing
blind verification is not helpful, why is doing that on a subset of bugs
fixed in a given release better? 

Response:

Probably because there are bugs that don't get flagged that should get flagged 
for testing. It can be useful a component owner to track to know what has 
landed to know what testing they need to follow up with, if any. The difference 
is that I'm not implying in generally going with "go verify that this works," 
but instead "go test these scenarios that would likely be useful to investigate 
as a result of this change."

Reply on:

I think QA should do some exploratory testing of major new features as
time allows, but just verifying existing test cases that often are
running automatically anyhow isn't a good use of time, I guess. 

Response:

Right, we should focus effort on areas not covered by automation primarily.

Reply on:

We (mostly) send Gecko developers to participate in Web
standardization. Opera (mostly) sends QA people. This results in Opera
QA having a very deep knowledge and understanding of Web standards.
(I'm not suggesting that we should stop sending Gecko developers to
participate. I think increasing QA attention on spec development could
be beneficial to us.) It seems (I'm making inferences from outside
Opera; I don't really know what's going on inside Opera) that when a
new Web platform feature is being added to Presto, Opera assigns the
QA person who has paid close attention to the standardization of the
feature to write test cases for the feature. This way, the cases that
get tested aren't limited by the imagination of the person who writes
the implementation.

So instead of verifying that patches no longer make bugs reproduce
with the steps to reproduce provided by the bug reporter, I think QA
time would be better used by getting to know a spec, writing
Mochitest-independent cross-browser test cases suitable for
contribution to an official test suite for the spec, running not only
Firefox but also other browsers against the tests and filing spec bugs
or Firefox bugs as appropriate (with the test case imported from the
official test suite to our test suite). (It's important to
sanity-check the spec by seeing what other browsers do. It would be
harmful for Firefox to change to match the spec if the spec is
fictional and Firefox already matches the other browsers.) 

Response:

I'd generally agree these are all good ideas. I've recently been exploring 
some of the ideas you propose by getting involved early with the specification 
and development work for getUserMedia and other WebRTC-related parts. 
Providing test results and general feedback immediately in the early phases of 
development and the spec process already seems to be useful - it provides 
context on early problems, especially in the unknown areas not immediately 
identified when building the spec in the first place. I'll keep these ideas in 
mind as I continue to work with the WebRTC folks.

Reply on:

Verifications are important. I've seen way too many fixes go in across
my career that didn't really fix the bug to think that we should take
the workflow out completely, and I would never call them "blind" if
they're against a valid testcase. They might be naive, they might be
shallow, but they aren't blind. That's a misnomer. 

Response:

Right, we shouldn't take the workflow out entirely. I think the general 
suggestion is to focus our efforts on the "right" bugs, the ones we're likely 
to dig into and find problems in. The reality is that we can't verify every 
single bug in a deep dive (there simply isn't enough time to do so). The point 
about blind verifications being made above was more that I don't think it's a 
good idea to do a large number of verifications with a simple point-and-click 
operation on every si