hmm, on Sun, Jun 03, 2012 at 01:39:18PM +0200, Tobias Ulmer said that
> > these must be some really nice disks :]
> >
> > for example only a 200G slice (also 64k/8k) of music/film/picture
> > collection (not even full yet) on a notebook disk (5400 RPM) takes ages:
> >
> > Filesystem     Size    Used   Avail Capacity  iused    ifree  %iused  Mounted on
> > /dev/sd0d      217G    153G   63.5G    71%    44815  7197423     1%   /data
> >
> > $ time sudo fsck -f /dev/sd0d
> > ** /dev/rsd0d
> > ** File system is already clean
> > ** Last Mounted on /data
> > ** Phase 1 - Check Blocks and Sizes
> > ** Phase 2 - Check Pathnames
> > ** Phase 3 - Check Connectivity
> > ** Phase 4 - Check Reference Counts
> > ** Phase 5 - Check Cyl groups
> > 44815 files, 20076091 used, 8329340 free (13748 frags, 1039449 blocks, 0.0% fragmentation)
> >     4m58.26s real     0m22.50s user     0m7.28s system
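aside: dumpfs(8) should show which format that slice actually uses; the top
of its output prints the superblock magic together with FFS1 or FFS2,
roughly like this (exact layout varies by release):

  $ dumpfs /dev/sd0d | head -1
  magic   11954 (FFS1)    time    ...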
at 71% disk usage but only 1% inode usage, would it be a logical idea to
radically slash the number of inodes, perhaps by 50% or even more? if i had
50% of the current total inodes, would the fsck time be halved? for some
reason it seemed logical that checking free inodes would be much faster than
checking used ones...

> This comes down to the FFS1 vs FFS2 difference. Newfs will select FFS2
> for bigger filesystems, reducing fsck times significantly at the expense
> of more efficient disk space allocation in FFS1.

by "more efficient disk space allocation" do you mean fragmentation? are
there any numbers comparing FFS1 to FFS2 in this regard? would there be a
perceptible (negative) effect of using FFS2 on slices smaller than 1TB?

-f

--
experience is nothing but a lot of mistakes.
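p.s. for the archives, a rough sketch of what recreating such a slice as FFS2
with a lower inode density could look like (this wipes the filesystem, so
dump the data first, and double-check the flags against newfs(8) on your
release):

  # -O 2 selects FFS2; -i 65536 allocates roughly one inode per 64KB of
  # space, so fsck has far less inode metadata to walk
  $ sudo newfs -O 2 -i 65536 /dev/rsd0d

with the 8k fragment size above, the default should be about one inode per
32KB, so -i 65536 would roughly halve the inode count; the value here is
only an example, sized for a media filesystem holding few, large files.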