Jay Reynolds Freeman wrote:
I have an app whose initialization includes writing a huge file to disk -- think gigabytes, or even tens of gigabytes. I am doing this in the context of setting up a large area of shared memory with mmap, so the big write has to happen at initialization, and it is agonizingly slow.
Simply seeking to a large offset in a new or truncated file and writing there makes the file that size, and every intervening location reads back as zero. Unix-type OSes generally guarantee that unwritten space reads as zeros, so nothing ever has to write them.
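As a minimal sketch of that approach (assuming POSIX open/ftruncate/mmap and a size known up front; the function name and the trimmed error handling are mine, not from the original post):

#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

/* Extend the backing file to its final size without writing any data,
   then map it shared.  The unwritten range reads back as zeros. */
void *map_big_file(const char *path, size_t size)
{
    int fd = open(path, O_RDWR | O_CREAT, 0644);
    if (fd < 0)
        return NULL;

    if (ftruncate(fd, (off_t)size) != 0) {
        close(fd);
        return NULL;
    }

    void *p = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);   /* the mapping keeps its own reference to the file */
    return (p == MAP_FAILED) ? NULL : p;
}

Whether the filesystem stores the hole sparsely or allocates the blocks lazily is up to the filesystem, but either way you avoid pushing gigabytes of zeros through write() at startup.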
The optimal thing is to never write zeros at all; the fastest work is the work you don't do. The next best thing is to write only the exact number of zeros you need, and only when you need them.
I think you're doing this backwards. You should be looking for ways to eliminate writing zeros, especially tens of gigabytes of them, not ways of writing them faster. No matter what else you do, you will still be limited by the slowest link in the chain of OS, HD controller, HD, SATA, memory controller, etc. That could be in the range of 20-30 MB/sec, or even worse on older machines or USB-connected HDs. Do the math.
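To make the arithmetic concrete (25 MB/sec and 25 GB are just illustrative figures in that range): 25,000 MB / 25 MB/sec = 1,000 seconds, roughly 17 minutes of pure disk writing before the app has done anything useful.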
Exactly what problem is solved by initially writing multiple gigabytes of zeros to disk? Yes, you've zeroed multiple gigabytes of a shared file on disk, but exactly why is that necessary? What does it accomplish, specifically, and why is it gigabytes in size?
--
GG