In https://bugzilla.mozilla.org/show_bug.cgi?id=894834 the e-mail app likes to die when downloading large files. I looked at how Bluetooth avoids this problem, and it seems to cheat: it is efficiently implemented in C++ and uses the native DeviceStorage classes directly.

From Jan Jongboom's earlier question in the thread "DeviceStorageAPI getEditable not implemented?" (https://groups.google.com/d/msg/mozilla.dev.b2g/gNiOaZV0J6s/g9Jw_GvQab8J) and from checking the current code, it seems that directly streaming the writes to disk is not a viable option. An alternate approach, using IndexedDB's exposure of mozCreateFileHandle (and potentially migrating the Blob after writing completes), also does not look viable: IndexedDB's defined behaviour in mozilla-central, when called off the main process, is to throw (per dom/indexedDB/test/test_get_filehandle.html). This is consistent with Jan Varga's comment in that thread that the FileHandle API does not work in child processes.

Is our best option at this time to create a series of smaller Blobs (say, 1 megabyte each), write them to DeviceStorage in sequence, later create a super-Blob out of those blobs, and then delete the originals? It seems workable on paper, although I would expect it to cause an I/O storm that could hurt the device's responsiveness. It would also introduce potentially orphaned partial downloads, which would require the e-mail app to grow some kind of download-manager logic (not necessarily a bad thing, since it may be hard for us to keep a stable connection while downloading large files).
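To make the proposal concrete, here is a rough sketch of the chunked-write idea. The `writeChunk` callback is hypothetical, standing in for whatever DeviceStorage write we end up using; the point is that the writes happen serially (to avoid the I/O storm) and that the super-Blob at the end is just a Blob composed of the chunk Blobs:

```javascript
// Sketch only: split the payload into fixed-size chunk Blobs, persist them
// one at a time via a caller-supplied async writer, then assemble a
// super-Blob referencing the chunks. `writeChunk(blob, index)` is a
// hypothetical stand-in for the actual DeviceStorage write.
async function writeInChunks(data, chunkSize, writeChunk) {
  const chunks = [];
  for (let offset = 0; offset < data.size; offset += chunkSize) {
    const chunk = data.slice(offset, offset + chunkSize);
    await writeChunk(chunk, chunks.length); // sequential, not parallel
    chunks.push(chunk);
  }
  // The super-Blob is built from the chunk Blobs; the original per-chunk
  // files could then be deleted from storage.
  return new Blob(chunks, { type: data.type });
}
```

The sequential `await` is deliberate: issuing all the writes at once is exactly what would trigger the responsiveness problem described above.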

Note that for the bug in question, I expect we might just resort to a series of stop-gap measures, such as not offering download of files above a certain size and doing more to minimize our maximum live memory footprint. Currently we will definitely end up with the entire base64-encoded string in memory (built incrementally using +=, so there could be rope-related overhead) at the same time as the decoded string. We could reduce that by chunking the decode further. And depending on whether the Blob reuses our string's internal representation (probably not? if not, then presumably the Blob's payload gets structured-cloned when we send it from our worker to the main page thread), we could probably get the decoded string into Blob form a lot faster, and possibly onto the main thread in smaller chunks faster too, depending on the cost profile there.
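For illustration, chunking the decode might look something like the following sketch. The function name and chunk size are made up; the idea is just that each base64 slice (kept to a multiple of 4 characters so it decodes independently) is converted to bytes and released, so the full encoded and decoded strings never coexist:

```javascript
// Sketch: decode a base64 string in fixed-size slices so the whole decoded
// binary string never has to coexist with the whole encoded string.
// chunkChars must be a multiple of 4 so each slice is valid base64.
function decodeBase64InChunks(b64, chunkChars = 1024 * 1024) {
  const parts = [];
  for (let i = 0; i < b64.length; i += chunkChars) {
    const binary = atob(b64.slice(i, i + chunkChars));
    const bytes = new Uint8Array(binary.length);
    for (let j = 0; j < binary.length; j++) {
      bytes[j] = binary.charCodeAt(j);
    }
    parts.push(bytes); // the decoded JS string for this slice can be GC'd now
  }
  // Blob accepts an array of typed-array parts directly.
  return new Blob(parts, { type: 'application/octet-stream' });
}
```

Whether this actually helps depends on the cost profile mentioned above (and on what the structured clone to the main thread does with the typed arrays), so it would need measuring on-device.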

Andrew
_______________________________________________
dev-b2g mailing list
[email protected]
https://lists.mozilla.org/listinfo/dev-b2g
