On 04/22/2013 11:07 AM, Justin Lebar wrote:
>> I can't really agree or disagree without knowing why they use "too much"
>> memory.
>
> At the risk of sounding like a broken record, it's all in the memory
> reports. You probably understand this data better than I do. Extract
> and load in about:memory (button is at the bottom).
>
> http://people.mozilla.org/~jlebar/downloads/merged.json.xz
> http://people.mozilla.org/~jlebar/downloads/unmerged.json.xz
>
> As I said earlier, if the JS team wants to own B2G memory usage and
> commit to getting chrome JS memory usage down to C++ levels within a
> timeframe that's acceptable to the B2G team, that's fantastic.
>
> If on the other hand the JS team is not ready to commit to getting
> this work done on B2G's schedule, then by construction "wait for JS to
> get better" is not a solution that works for us.

I agree: I was not suggesting that as a general solution in any way.

> Given how long some of this prerequisite work (e.g. generational
> garbage collection) has been ongoing, I'm highly dubious of the claim
> that our JS engine will improve at the requisite rate.

Generational GC is an extremely ambitious undertaking. We have set
realistic milestones for completion and we are meeting our goal dates
more often than not: the project is on schedule. Whether this means it
will be done soon enough for B2G, I have no idea. What does B2G's
schedule look like?

> Where we've had success reducing our JS memory in the past (e.g. bug
> 798491), it's been by working within the current reality of the JS
> engine, instead of twiddling our thumbs waiting for the Right Fix
> (e.g. bug 759585, which did not come in time to be useful for B2G 1.x).

Agreed. Last week we finally got an actual physical Unagi posting
numbers to AWFY. Nicolas is now looking into our GC tuning parameters
with the goal of improving our numbers there; the JSAPI knobs involved
are sketched at the end of this message.

> Please don't take this as a suggestion that I think you guys are doing
> a bad job -- I continue to characterize the JS team's work as heroic.
> I just think that there's a limit to how much we ought to expect from
> the JS folks, particularly given how many other high-priority projects
> you have.

I did not want to suggest that rewriting some of your modules in C++ is
the wrong solution, given your requirements. Sorry if my response was a
bit harsh; it is extremely frustrating from our side to be told now that
what we did 9 months ago was not good enough when you needed it 3 months
ago. Please keep in mind that we are also attacking the same problem
from the other direction, and we'd very much like it if we could make
our work more helpful to you.

-Terrence

> On Mon, Apr 22, 2013 at 1:36 PM, Terrence Cole <tc...@mozilla.com> wrote:
>> On 04/21/2013 04:51 PM, Justin Lebar wrote:
>>> I think we should consider using much less JS in the parts of Gecko
>>> that are used in B2G. I'd like us to consider writing new modules in
>>> C++ where possible, and I'd like us to consider rewriting existing
>>> modules in C++.
>>>
>>> I'm only proposing a change for modules which are enabled for B2G.
>>> For modules which aren't enabled on B2G, I'm not proposing any
>>> change.
>>>
>>> What I'd like to come out of this thread is a consensus one way or
>>> another as to whether we continue along our current path of writing
>>> many features that are enabled on B2G in JS, or whether we change
>>> course.
>>>
>>> Since most of these features implemented in JS seem to be DOM
>>> features, I'm particularly interested in the opinions of the DOM
>>> folks. I'm also interested in the opinions of JS folks, particularly
>>> those who know about the memory usage of our new JITs.
>>>
>>> In the remainder of this e-mail I'll first explain where our JS
>>> memory is going. Then I'll address two arguments that might be made
>>> against my proposal to use more C++. Finally, I'll conclude by
>>> suggesting a plan of action.
>>>
>>> === Data ===
>>>
>>> Right now about 50% (16 MB) of the memory used by the B2G main
>>> process immediately after rebooting is JS. It is my hypothesis that
>>> we could greatly reduce this by converting modules to C++.
>>>
>>> On our 256 MB devices, we have about 120 MB available to Gecko, so
>>> this 16 MB represents 13% of all memory available to B2G.
>>>
>>> To break down the 16 MB of JS memory, 8 MB is from four workers:
>>> ril_worker, net_worker, and wifi_worker (x2).
>>> 5 MB of the 8 MB is under "unused-arenas"; this is fragmentation in
>>> the JS heap. Based on my experience tackling fragmentation in the
>>> jemalloc heap, I suspect reducing this would be difficult. But even
>>> if we eliminated all of the fragmentation, we'd still be spending
>>> 3 MB on these four workers, which I think is likely far more than we
>>> need.

>> Once exact rooting of the browser is complete we can implement heap
>> defragmentation easily. Generational GC should help here as well.
>>
>>> The other 8 MB is everything else in the system compartment (all our
>>> JSMs, XPCOM components, etc.). In a default B2G build you don't get a
>>> lot of insight into this, because most of the system compartments are
>>> squished together to save memory (bug 798491). If I set
>>> jsloader.reuseGlobal to false, the amount of memory used increases
>>> from 8 MB to 15 MB, but now we can see where it's going.
>>>
>>> The list of worst offenders follows, but because this data was
>>> collected with reuseGlobal turned off, apply generous salt.
>>>
>>> 0.74 MB modules/Webapps.jsm
>>> 0.59 MB anonymous sandbox from devtools/dbg-server.jsm:41
>>> 0.53 MB components/SettingsManager.js
>>> 0.53 MB chrome://browser/content/shell.xul
>>> 0.49 MB components/WifiWorker.js
>>> 0.43 MB modules/DOMRequestHelper.jsm
>>> 0.38 MB modules/XPCOMUtils.jsm
>>> 0.34 MB RadioInterfaceLayer.js
>>> 0.31 MB AppsUtils.jsm
>>> 0.27 MB Webapps.js
>>> 0.22 MB BrowserElementParent.jsm
>>> 0.21 MB app://system.gaiamobile.org/index.html
>>>
>>> Many (but certainly not all) of these modules could be rewritten in
>>> C++.
>>>
>>> Beyond this list, it's death by a thousand cuts; there are 100
>>> compartments in there, and they each cost a small amount.
>>>
>>> I've attached two about:memory dumps collected on my hamachi device
>>> soon after reboot, so you can examine the situation more closely, if
>>> you like. merged.json was collected with the default config, and
>>> unmerged.json was collected with jsloader.reuseGlobal set to false.
>>>
>>> Download and extract these files and then open them with the button
>>> at the bottom of about:memory in Nightly.
>>>
>>> (Before you ask: most of the heap-unclassified in these dumps is
>>> graphics memory, allocated in drivers.)
>>>
>>> === Should we use JS because it's nicer than C++? ===
>>>
>>> I recognize that in many ways JS is a more convenient language than
>>> C++. But that's beside the point here. The point is that in the
>>> environment we're targeting, our implementation of JS is too
>>> heavyweight. We can either fix our implementation or use less JS, but
>>> we can't continue using as much JS as we like without doing one of
>>> these two things.
>>>
>>> === Why not just make JS slimmer? ===
>>>
>>> It's been suggested to me that instead of converting existing and
>>> future JS code to C++, we should focus on making our JS engine
>>> slimmer. Such changes would of course have the advantage of improving
>>> our handling of web content on B2G.
>>>
>>> I'm absolutely in favor of reducing JS memory usage, but I see this
>>> effort as orthogonal to the question of rewriting our current code
>>> and writing our future code in C++, for a few reasons.
>>>
>>> 1. Content JS does not run in the B2G main process, where the impact
>>> of high memory usage is strongest. We can probably tolerate higher
>>> memory usage for content JS than we can for main-process code.
>>> I think it makes sense for our JS team to focus their effort on
>>> optimizing for content JS, since that's far more widespread.
>>>
>>> 2. We have a large team of B2G engineers, some of whom could work
>>> exclusively on converting components from JS to C++. In contrast, we
>>> have a relatively small team of JS engineers, few of whom can work
>>> exclusively on optimizing the JS engine for B2G's use cases.

>> Our exact rooting work is at a spot right now where we could easily
>> use more hands to accelerate the process. The main problem is that the
>> work is easy and tedious: a hard sell for pretty much any hacker at
>> Mozilla. Once this work is complete, however, we can get some level of
>> defragmentation working in a matter of weeks, and generational GC on a
>> slightly longer time frame.
>>
>>> 3. I know people get in trouble at Mozilla for suggesting that it's
>>> impossible to do anything in JS, so I won't do that, but it seems to
>>> me that the dynamic semantics of JS make it very difficult to achieve
>>> the same degree of memory density as we do with C++. (We're talking
>>> about density of program data as well as code here.)

>> Are you saying JS is not perfect in every way? Release the hounds! :-)
>>
>> More seriously, JS does have all the tools you need to get very close
>> to C++ in memory usage or execution speed. The question of which of
>> those tools is easiest to use in any specific situation is more
>> interesting.
>>
>>> At the very least, I'm pretty sure it's straightforward to
>>> significantly reduce our memory usage by rewriting code in C++, while
>>> it would probably take engineering heroics to approach the same level
>>> of memory usage by modifying the JS engine. I don't think it's wise
>>> to bet the product on heroics, given an alternative.

>> Well, I have to disagree a bit. I know Monkey Island is a weird and
>> special place to work, but the impossible does seem to happen here on
>> a strangely regular basis. Usually it even happens without any obvious
>> heroics.
>>
>>> === Conclusion ===
>>>
>>> If we think that 256 MB is a fad, then our current trajectory is
>>> probably sustainable. But everything I have heard from management
>>> suggests that we are serious about 256 MB for the foreseeable future.
>>>
>>> If we anticipate shipping on 256 MB devices for some time, I think
>>> our rate of adding features written in JS is unsustainable. I think
>>> we should shift the default language for implementing DOM APIs from
>>> JS to C++, and we should rewrite the parts of the platform that run
>>> on B2G in C++, where possible.
>>>
>>> I'd start by converting these four workers. Do we agree this is a
>>> place to start?

>> I can't really agree or disagree without knowing why they use "too
>> much" memory. What I can say is that doing a complete rewrite is
>> almost never the right answer, particularly if we don't understand
>> what we are trying to solve in more detail.
>>
>> -Terrence
>>
>>> -Justin
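
To make the "easy and tedious" exact-rooting work described above a bit
more concrete, here is a minimal, hypothetical sketch of what one such
conversion looks like. JS::Rooted and JS::Handle are the real SpiderMonkey
rooting types (js/public/RootingAPI.h); AllocateSomething, DoSomethingWith,
and GetThing are names invented for this example, not code from the thread.

    // Hypothetical sketch of an exact-rooting conversion.
    #include "jsapi.h"
    #include "js/RootingAPI.h"

    // Stand-ins for any JSAPI calls that allocate GC things (and can
    // therefore trigger a collection) and then use them.
    static JSObject* AllocateSomething(JSContext* cx);
    static bool DoSomethingWith(JSContext* cx, JS::Handle<JSObject*> obj);

    // Before (rooting hazard): a raw JSObject* on the C++ stack is
    // invisible to an exact, moving collector, so the pointer goes stale
    // if the object is moved:
    //
    //   JSObject* obj = AllocateSomething(cx);  // may GC; obj may move
    //
    // After: locals become JS::Rooted and pointer arguments become
    // JS::Handle, so the collector can find and update every stack
    // reference.
    static bool
    GetThing(JSContext* cx)
    {
        JS::Rooted<JSObject*> obj(cx, AllocateSomething(cx));
        if (!obj)
            return false;
        // obj stays valid across anything below that can trigger a GC.
        return DoSomethingWith(cx, obj);  // Rooted converts to Handle
    }

The change is mechanical, which is why it parallelizes well across extra
hands even though it is hard to make exciting.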
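And for the GC tuning mentioned earlier in the message, this is a minimal
sketch of the JSAPI surface that tuning goes through. JS_SetGCParameter and
the JSGC_* keys are real JSAPI of this era; the runtime handle, the function
name, and the numeric values are placeholders for illustration, not the
settings B2G actually ships.

    // Illustrative GC tuning via JS_SetGCParameter; values are placeholders.
    #include "jsapi.h"

    static void
    ApplySmallDeviceGCSettings(JSRuntime* rt)
    {
        // Collect incrementally with short slices to keep pauses down on
        // slow hardware.
        JS_SetGCParameter(rt, JSGC_MODE, JSGC_MODE_INCREMENTAL);
        JS_SetGCParameter(rt, JSGC_SLICE_TIME_BUDGET, 10);    // ms

        // Trigger a GC after less fresh allocation than the default,
        // trading some throughput for a smaller steady-state heap.
        JS_SetGCParameter(rt, JSGC_ALLOCATION_THRESHOLD, 2);  // MB

        // Cap the GC heap well below the desktop default.
        JS_SetGCParameter(rt, JSGC_MAX_BYTES, 32u * 1024u * 1024u);
    }

In Gecko these knobs are normally driven from the javascript.options.mem.*
prefs rather than hard-coded as above.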