Background: As I understand it, at some point we picked the value 319M for the Flame to emulate the memory conditions of apps on shipped 256M devices with smaller screens (and thus smaller memory needs). Several people pointed out that various built-in heuristic band-aids that alter memory behaviour are tricked by 319 > 256, and I'm not sure those concerns were ever addressed/corrected.

Problem: On trunk/v3.0 we've been getting more bugs about the email app being OOM-killed (usually in flows where the email app is aggressively killed after the user returns to the app that triggered a "compose" activity). These do not occur on v2.2, where the email app is essentially identical, and sometimes the bugs resolve themselves.

Have we revisited the realism of 319M for v3.0?  Can we?

Relatedly, can we revisit the GC settings? One of these bugs that magically fixed itself was related to email's production of garbage during the attaching phase and subsequent sending phase of an email. Although email's memory needs are fixed over the given time-interval in the sense that size(reachable) = C, we inherently need to churn somewhat, since TCPSocket doesn't take transferable ownership of our Uint8Arrays and I don't think we can free them ourselves. This worked out fine for Tarako, but at least now the GC seems to mainly fire on a timer and doesn't seem to feel any pressure to aggressively keep our heap size down. There are of course things the email app can and will do to further reduce its memory usage[1], but it would be nice for the platform to favor slowing the app down via GC versus leaving the Linux kernel to kill the app. (NB: I realize there are power-usage trade-offs to too much GC.)
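To make the churn concrete, here's a minimal sketch (hypothetical names and a mock socket, not the actual email app code) of the pattern I mean: when send() copies the buffer rather than taking ownership, every chunk we hand it becomes garbage the moment the call returns, even though the reachable set stays constant:

```javascript
// Sketch of copy-on-send churn. Assumes a TCPSocket-like API whose send()
// copies the passed Uint8Array internally, so the caller can neither
// transfer the buffer nor free it early.

function sendInChunks(socket, data, chunkSize) {
  let allocated = 0;
  for (let offset = 0; offset < data.length; offset += chunkSize) {
    // subarray() is just a view, but encoding (base64/MIME framing) forces
    // a fresh copy per chunk; since send() does not take ownership, each
    // copy is transient garbage as soon as send() returns.
    const chunk = new Uint8Array(data.subarray(offset, offset + chunkSize));
    socket.send(chunk);
    allocated += chunk.byteLength; // track allocation churn for illustration
  }
  return allocated; // total transient bytes allocated beyond the payload
}

// Mock socket modelling the copy-on-send semantics.
const sent = [];
const mockSocket = { send(buf) { sent.push(buf.slice()); } };

const total = sendInChunks(mockSocket, new Uint8Array(1024), 256);
// total is 1024: even though size(reachable) stayed at C the whole time,
// we re-allocated the entire payload as garbage the GC must eventually sweep.
```

The point isn't that this loop is wrong; it's that with a timer-driven GC, all those dead chunks sit on the heap until the timer fires, and on a memory-constrained device the kernel may kill us first.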

Andrew

1: Noting that the biggest wins will come from platform enhancements like being able to use TCPSocket in a worker (rather than manually proxying all the data to the worker), or being able to hand TCPSocket Blobs directly, etc.
_______________________________________________
dev-b2g mailing list
[email protected]
https://lists.mozilla.org/listinfo/dev-b2g