On 2014-05-22, 5:18 AM, Bas Schouten wrote:
Hi Gijs,

None of those things are true in my opinion. For what it's worth, the expected 
regression in CART was closer to 20% than to 40%. The number surprises me a 
little, and we'll look into what makes CART specifically so much worse off (on 
our test servers). We haven't looked closely at CART yet, as we mostly looked 
at TART and Tsvgr while investigating.

I'll address each point individually:

a) Extremely hacky work for a 1% gain seems like a bad idea. Depending on how 
hacky and maintainable it was, that may well have been the wrong decision 
:-). But 1% gains can still accumulate, so I certainly wouldn't call them 
useless.
b) You have to look at what the tests are meant to do. Avi knows more about the 
tests than I do and has already supplied some information, but there's a 
difference between tweaking the UI or a specific portion of rendering, and 
radically changing the way we draw things. The tests might not be a good 
reflection of the average user experience, but they help us catch situations 
where we unwittingly harm performance, or where we want proof that a change in 
some core algorithm does indeed make things run faster. That makes them very 
useful, just not for radical architectural changes. For what it's worth, I'm 
not claiming this will be a net performance improvement in what CART or TART 
is testing; there are, however, other interactions that improve which are 
inherently linked to this one and cannot be decoupled (because it is an 
architectural change). Similarly, we only run our tests on one hardware 
configuration, which in my mind again stresses their purpose as a relative 
regression test rather than as a representation of perceived UX.
c) A couple of things here. First of all, we consulted with people outside of 
the gfx team about this, and we were in agreement with the people we talked to. 
When it comes to moving forward architecturally, we should always be ready to 
accept something regressing; that has nothing to do with which team is causing 
the regression, and everything to do with what we're trying to accomplish. I 
can guarantee you, for example, that e10s will cause at least some significant 
regressions in some situations, yet we may very well have to accept those 
regressions to offer our users big improvements in other areas, as well as to 
pave the way for future improvements. With OMTA, for example, several aspects 
of our (UI) performance can be improved in a way that no amount of TART-driven 
optimization in the old architecture ever could.

Now I don't want to repeat too much of what I've already said, but I'd like to 
reiterate that if there are 'real' regressions in the overall user experience, 
we will of course attempt to address them. We are also only a pref change away 
from going back to on-main-thread compositing.
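
Concretely, that fallback would be a single pref flip, roughly the following in 
a user.js or via about:config (this is just a sketch and assumes the OMTC pref 
is still named layers.offmainthreadcomposition.enabled; double-check the name 
against the current tree before relying on it):

  // Sketch only: assumes the pref name above is still current.
  // Setting it to false falls back to on-main-thread compositing.
  user_pref("layers.offmainthreadcomposition.enabled", false);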

The focus should, in my opinion, be on what has actually been affected by this 
in a way that has a strong negative impact on user experience. Considering the 
scope of this change, I am certain such things exist and they will need to be 
fixed. Perhaps the CART regression -is- a sign of some real, unacceptable 
perceived performance problem; I'm not ruling that out, but we have not 
identified an interaction which significantly regressed. This is all extremely 
hardware dependent though (e.g. on most of my machines TART and CART both 
improve with OMTC), so if someone -has- seen this cause a significant 
performance regression in some sort of interaction, do let us know.

Have you tried to test those interactions on the same Talos machines that the CART regressions happened on? I think that would definitively answer the question of whether these regressions affect an interaction we care about.

(Also, out of curiosity, what is the list of those interactions?)

Cheers,
Ehsan
