On Sun, Apr 20, 2008 at 03:35:13PM -0400, Chris Zakelj wrote:
> Matthew Weigel wrote:
>> Chris Zakelj wrote:
>>
>>> ... I'm wondering if thought is being given on how to make the physical
>>> size (not filesystem... I totally understand why those should be kept
>>> small) limitation of http://www.openbsd.org/faq/faq14.html#LargeDrive
>>
>> http://www.openbsd.org/43.html
>>
>> "New Functionality:
>> ...
>> o The ffs layer is now 64-bit disk block address clean. This means that
>>   disks, partitions and filesystems larger than 2TB are now supported,
>>   with the exception of statfs(2) and quotas."
>>
>> So, yes, thought is being given...
>
> Sweet... I missed that when I did my quick reading of the new features.
> Is it safe to assume the guideline of 1M RAM per 1G of file system to
> do a reasonable fsck is still valid?
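(As an aside, that old 2TB ceiling is presumably just what 32-bit disk
block addresses buy you: 2^32 blocks * 512 bytes/block = 2TB. Making
the block addresses 64-bit clean removes that limit.)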
It's a bit of an overestimate for the default block and fragment sizes.
The main factor for fsck memory needs is the number of inodes in the
filesystem being checked.

I have a 4TB test filesystem here using the maximum block and fragment
sizes of 64k each. The filesystem has about 17M inodes and needs about
75M of memory to fsck. Another filesystem (size 48G, using 16k blocks
and 2k fragments) has about 6.5M inodes and needs about 30M of memory.

The inode -> memory usage factor is linear: double the inodes and you
need twice the memory. As filesystems keep growing, you will eventually
hit the maximum data size a process can have.

	-Otto

>>> a non-issue on 64-bit platforms
>>
>> Whether a system is 64-bit or not isn't very relevant to this - that
>> mostly establishes what the memory address space is, *not* the size of
>> integers that can be used by the system.
>
> Ok... insufficient understanding on my part there :)
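P.S. If you want to ballpark this: the two filesystems above work out
to roughly 4.5 bytes of fsck memory per inode (75M / 17M and 30M /
6.5M). Here's a quick sketch using that factor -- the number is pure
extrapolation from those two data points, not something taken from the
fsck_ffs sources:

	#include <sys/resource.h>
	#include <stdio.h>
	#include <stdlib.h>

	/* assumed factor, derived from the two filesystems above */
	#define BYTES_PER_INODE	4.5

	int
	main(int argc, char *argv[])
	{
		struct rlimit rl;
		double inodes;

		if (argc != 2) {
			fprintf(stderr, "usage: %s inode-count\n", argv[0]);
			return 1;
		}
		inodes = strtod(argv[1], NULL);
		printf("estimated fsck memory: %.0f MB\n",
		    inodes * BYTES_PER_INODE / (1024 * 1024));

		/* compare against the process data size limit */
		if (getrlimit(RLIMIT_DATA, &rl) == 0 &&
		    rl.rlim_cur != RLIM_INFINITY)
			printf("datasize limit: %.0f MB\n",
			    (double)rl.rlim_cur / (1024 * 1024));
		return 0;
	}

If fsck does bump into that limit, the knobs are datasize-cur and
datasize-max in login.conf(5), or ulimit -d in the shell.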