On 01 Feb 2007 12:26:09 +0100, Artur Grabowski wrote:
>[EMAIL PROTECTED] writes:
>
>> I just moved a 200GB hard drive from a 3.7 box to a 4.0 box, and since
>> my data was all backed up, I decided to run disklabel, create a fresh
>> partition that spanned the whole disk, and then run newfs on that
>> partition. I expect to not have all 200GB, between the whole issue of
>> poorly labeled disk sizes and the 5% reserved by default. What I don't
>> expect, however, is to see ** 22% ** of my disk already in use:
>>
>> -bash-3.1$ df -h
>> Filesystem     Size    Used   Avail Capacity  Mounted on
>> /dev/wd0a      7.3G   78.9M    6.9G     1%    /
>> /dev/wd0d     22.0G    512M   20.4G     2%    /usr
>> /dev/wd0e      7.2G    6.7M    6.8G     0%    /var
>> /dev/wd1a      183G   38.0G    136G    22%    /mnt
>>
>> Can anyone explain this? Have I done something wrong here? More
>> importantly, is there a simple way to remedy this and get my 38GB back?
>
>$ bc
>200000000000/(1024*1024*1024)
>186
>
>Talk to the marketing department of your disk manufacturer.
Uh, I think he wasn't worried about the 183G; he was worried about the
38G shown as used, which left him with only 136G available. At least
that is his question.

$ bc
136*1024*1024*1024
146028888064

and that's quite a bit short of where you started.

From the land "down under": Australia. Do we look <umop apisdn> from up over?
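For what it's worth, here is a sketch of the arithmetic both posts are
doing, using plain shell instead of bc. It only accounts for the
decimal-vs-binary gigabyte gap and newfs(8)'s default 5% minfree
reserve; it does not explain the 38G "Used", and the remaining gap down
to the 183G that df reports would be label/metadata overhead:

$ cat size.sh
#!/bin/sh
# "200 GB" as the vendor counts it: 200 * 10^9 bytes.
bytes=200000000000
# Convert to binary gigabytes, which is how df(1) counts.
gib=$((bytes / 1024 / 1024 / 1024))
echo "raw capacity:   ${gib}G"
# newfs reserves 5% by default (the minfree value).
reserve=$((gib * 5 / 100))
echo "after reserve:  $((gib - reserve))G"
$ sh size.sh
raw capacity:   186G
after reserve:  177G

So even before any real data, you'd only ever expect something in the
high 170s, not 200G. The 38G of "Used" on a freshly newfs'd partition
is a separate problem.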