On Jun 4, 2016, at 14:10 , Markus Spoettl <ms_li...@shiftoption.com> wrote:
> 
> you have complete control over what individual file you read and when

That’s fine for documents where it’s helpful to avoid reading files only to
write them out unchanged on save.

It doesn’t really help when you need to read large amounts of data. The problem 
is that NSFileWrapper doesn’t AFAIK have any API to get already-read files out 
of memory. In terms of eventual memory usage, it’s the same whether you read 1 
large file or 100 small ones.

Compounding this problem is that it’s typical for the file data to be an
archive, which means that when you unarchive it you’re using twice the memory:
once for the archive (an NSData object retained by the file wrapper) and once
for the re-created objects.
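To illustrate the doubling, here is a minimal sketch. The member-file name "Contents.archive" and the dictionary payload are assumptions for the example; the point is that while unarchiving runs, both the wrapper’s cached NSData and the decoded object graph are alive at once.

```swift
import Foundation

// Sketch only: "Contents.archive" is a hypothetical member file name.
// Peak memory during this call is roughly twice the payload size:
// the archive bytes plus the re-created objects.
func loadModel(from documentWrapper: FileWrapper) throws -> [String: String]? {
    guard let child = documentWrapper.fileWrappers?["Contents.archive"],
          let archiveData = child.regularFileContents else {
        // regularFileContents reads the file and the wrapper retains the NSData
        return nil
    }
    // Second copy: the object graph decoded from the archive.
    let dict = try NSKeyedUnarchiver.unarchivedObject(ofClass: NSDictionary.self,
                                                      from: archiveData)
    return dict as? [String: String]
}
```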

This is mitigated in common use, because the files can often be mapped rather 
than read via I/O APIs, so they use address space, but not necessarily much 
memory.
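Sketch of that mitigation, assuming a package document at a made-up path: creating the wrapper without the .immediate option defers reading child contents, and mapping a large member file directly means its bytes occupy address space but are only paged in when touched.

```swift
import Foundation

// Assumed path for illustration.
let url = URL(fileURLWithPath: "/path/to/MyDocument.package")

// Empty options = lazy reading; omitting .withoutMapping also permits
// Foundation to memory-map member files rather than copy them into RAM.
let wrapper = try FileWrapper(url: url, options: [])

// Mapping one large member file explicitly: address space, not resident memory.
let bigFileURL = url.appendingPathComponent("big.data")
let mapped = try Data(contentsOf: bigFileURL, options: .mappedIfSafe)
```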

I find the documentation lacking in this regard, because it fails to
distinguish between large documents that usually only need to be read
selectively (a few member files at a time) and large documents that need to be
read partially. NSDocument has API for the latter, but it’s buried pretty deep.
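One way the selective case can play out in practice, as a sketch (the PackageDocument class and data(forMember:) accessor are hypothetical): keep the lazily created wrapper that NSDocument hands you, and only pull individual member files into memory when they’re actually needed.

```swift
import Cocoa

// Hypothetical document class for a package-format document.
class PackageDocument: NSDocument {
    private var documentWrapper: FileWrapper?

    override func read(from fileWrapper: FileWrapper, ofType typeName: String) throws {
        // Cheap when the wrapper is lazy: no member file contents read yet.
        documentWrapper = fileWrapper
    }

    // Hypothetical accessor: reads one member file on first use, leaving
    // the rest of the package untouched on disk.
    func data(forMember name: String) -> Data? {
        return documentWrapper?.fileWrappers?[name]?.regularFileContents
    }
}
```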

_______________________________________________

Cocoa-dev mailing list (Cocoa-dev@lists.apple.com)
