I filed bug 876173[1] about this a long time ago. Recently, I talked to
Gabor, who's started looking into enabling multiple content processes.

One other thing we should be able to do is to share the self-hosting
compartment across content processes, as we already do between runtimes
within a process. It's not that big, but it's not nothing, either.

till


[1] https://bugzilla.mozilla.org/show_bug.cgi?id=876173

On Tue, Mar 15, 2016 at 4:34 AM, Nicholas Nethercote
<n.netherc...@gmail.com> wrote:

> Greetings,
>
> erahm recently wrote a nice blog post with measurements showing the
> overhead of enabling multiple content processes:
>
> http://www.erahm.org/2016/02/11/memory-usage-of-firefox-with-e10s-enabled/
>
> The overhead is high -- 8 content processes *doubles* our physical memory
> usage -- which limits the possibility of increasing the number of content
> processes beyond a small number. Now I've done some follow-up
> measurements to find out what is causing the per-content-process overhead.
>
> I did this by measuring memory usage with four trivial web pages open,
> first with a single content process, then with four content processes, and
> then diffing the content-process memory reports of the two runs.
> (about:memory's diff algorithm normalizes PIDs in memory reports as "NNN",
> so multiple content processes naturally get collapsed together, which in
> this case is exactly what we want.) I call this the "small processes"
> measurement.
>
> If we divide the memory usage increase by 3 (the increase in the number of
> content processes), we get a rough measure of the minimum per-content-process
> overhead.
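
(A minimal sketch of that diffing step, for anyone who wants to reproduce
it -- this assumes memory reports saved as plain JSON from about:memory; the
file names and helper names below are illustrative, not the actual
about:memory diff code:)

    import json
    import re
    from collections import defaultdict

    # about:memory process names contain the pid, e.g. "Web Content (pid 1234)".
    PID_RE = re.compile(r"\(pid \d+\)")

    def load_reports(path):
        """Sum report amounts per (process, path), with pids normalized to
        "(pid NNN)" so all content processes collapse into one bucket."""
        totals = defaultdict(int)
        with open(path) as f:
            for report in json.load(f)["reports"]:
                key = PID_RE.sub("(pid NNN)", report["process"]) + "|" + report["path"]
                totals[key] += report["amount"]
        return totals

    def report_diff(before, after):
        """Per-path change going from the 1-process run to the 4-process run."""
        return {k: after.get(k, 0) - before.get(k, 0)
                for k in set(before) | set(after)}

    one = load_reports("memory-1-content-process.json")
    four = load_reports("memory-4-content-processes.json")
    increase = sum(v for v in report_diff(one, four).values() if v > 0)

    # Divide by 3 (the increase in the number of content processes) for a
    # rough minimum per-content-process overhead.
    print("~%.1f MiB per content process" % (increase / 3.0 / 2 ** 20))
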
>
> I then did a similar thing but with four more complex web pages (gmail,
> Google Docs, TreeHerder, Bugzilla). I call this the "large processes"
> measurement.
>
>
[ lots of analysis omitted to not get caught in the 40kb+ moderation queue ]

> -----------------------------------------------------------------------------
> Conclusion
> -----------------------------------------------------------------------------
>
> The overhead per content process is significant. I can see scope for
> moderate improvements, but I'm having trouble seeing how big improvements
> can be made. Without big improvements, scaling the number of content
> processes beyond 4 (*maybe* 8) won't be possible.
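
(To put rough numbers on that scaling limit, a back-of-the-envelope model --
the figures below are hypothetical placeholders, not the measured values from
the omitted analysis:)

    def total_mib(base_mib, per_process_mib, n_content_processes):
        # Fixed parent/shared cost plus a cost for each content process.
        return base_mib + per_process_mib * n_content_processes

    BASE = 400         # hypothetical parent process + shared memory, in MiB
    PER_PROCESS = 100  # hypothetical minimum overhead per content process, in MiB

    for n in (1, 2, 4, 8, 16):
        print("%2d content processes -> ~%d MiB" % (n, total_mib(BASE, PER_PROCESS, n)))

    # With figures in this ballpark, total usage already more than doubles by
    # 8 content processes, so going further needs the per-process cost to shrink.
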
>
> - JS overhead is the biggest factor. We execute a lot of JS code just
>   starting up for each content process -- can that be reduced? We should
>   also consider a smaller nursery size limit for content processes.
>
> - Heap overhead is significant. Reducing the page-cache size could save a
>   couple of MiBs. Improvements beyond that are hard. Turning on jemalloc4
>   *might* help a bit, but I wouldn't bank on it, and there are other
>   complications with that.
>
> - Static data is a big chunk. It's hard to make much of a dent there
>   because it has a *very* long tail.
>
> - The remaining buckets are a lot smaller.
>
> I'm happy to give copies of the raw data files to anyone who wants to look
> at them in more detail.
>
> Nick
>
_______________________________________________
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform
