On Wednesday, July 11, 2018 at 4:19:15 AM UTC+10, Kris Maglione wrote:
> [...]
> Essentially what this means, though, is that if we identify an area of 
> overhead that's 50KB[3] or larger that can be eliminated, it *has* to be 
> eliminated. There just aren't that many large chunks to remove. They all need 
> to go. And if an area of code has a dozen 5KB chunks that can be eliminated, 
> maybe they don't all have to go, but at least half of them do. The more the 
> better.

Some questions (sorry if some of this is already common knowledge or has been 
discussed):

Are there tools available that could easily track the memory usage of specific 
things?
E.g., could I instrument one class so that every allocation is tracked 
automatically and I get nice stats at the end, including the space wasted when 
the allocator rounds requests up to larger blocks?
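
To make that concrete, here's a minimal sketch of the kind of thing I'm 
imagining: class-scope operator new/delete overloads, using glibc's/jemalloc's 
malloc_usable_size() to measure the rounding slop. The class Foo and the 
AllocStats struct are made up for illustration; null checks are omitted for 
brevity.

#include <atomic>
#include <cstdio>
#include <cstdlib>
#include <malloc.h>  // malloc_usable_size (glibc; jemalloc provides it too)

// Made-up per-class tracker: live instances, requested bytes, and the
// slop lost to the allocator rounding requests up to its size classes.
struct AllocStats {
  std::atomic<size_t> count{0};
  std::atomic<size_t> requested{0};
  std::atomic<size_t> slop{0};
};

static AllocStats gFooStats;

class Foo {
 public:
  void* operator new(size_t aSize) {
    void* p = std::malloc(aSize);
    gFooStats.count += 1;
    gFooStats.requested += aSize;
    gFooStats.slop += malloc_usable_size(p) - aSize;
    return p;
  }
  void operator delete(void* aPtr, size_t aSize) {
    gFooStats.count -= 1;
    gFooStats.requested -= aSize;
    gFooStats.slop -= malloc_usable_size(aPtr) - aSize;
    std::free(aPtr);
  }
  // ... actual members ...
};

void ReportFooStats() {
  std::printf("Foo: %zu live, %zu bytes requested, %zu bytes slop\n",
              gFooStats.count.load(), gFooStats.requested.load(),
              gFooStats.slop.load());
}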

Could I even run what-if scenarios, where I instrument a class to record its 
current size but also provide an alternate size (based on how small I think I 
could make it), so that at the end I'd know how much I could save overall?
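
Building on the sketch above, the what-if part could be as simple as 
multiplying the live count by the hypothetical per-instance saving. 
ReportWhatIf is made up, and it ignores allocator size classes (shrinking 
across a bucket boundary saves more than within one), so it's only a 
first-order estimate:

// Hypothetical what-if report: aAlternateSize is the size I believe
// the class could shrink to.
void ReportWhatIf(const char* aName, const AllocStats& aStats,
                  size_t aCurrentSize, size_t aAlternateSize) {
  size_t saved = aStats.count.load() * (aCurrentSize - aAlternateSize);
  std::printf("%s: %zu -> %zu bytes would save ~%zu bytes overall\n",
              aName, aCurrentSize, aAlternateSize, saved);
}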

Do we have Try tests that simulate real-world usage, so we could collect 
memory-usage data that is both relevant to our users and reproducible?

Should there be some kind of Talos-like CI test that focuses on memory usage, 
so we'd get a warning if a particular patch suddenly eats too much memory?