:Hmmm... I was just having a little fun, and I think that someone's
:using the wrong type of integer somewhere:
:
:[1:23:323]root@news:~> vnconfig -e -s labels -S 1t vn0
:[1:24:324]root@news:~> disklabel -r -w vn0 auto
:[1:25:325]root@news:~> newfs /dev/vn0c
:preposterous size -2147483648
:
:Dave.

    Heh heh.  Yes, newfs has some overflows inside it when
    you get that big.  Also, you'll probably run out of swap just
    newfs'ing the metadata; you need to use a larger block size, a
    large -c value, and a large bytes/inode (-i) value.  But then,
    of course, you are likely to run out of swap trying to write out
    a large file even if you do manage to newfs it.
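
    For reference, the "preposterous size" in Dave's transcript is exactly
    what signed 32-bit truncation produces: a 1TB device is 2^31 512-byte
    sectors, and 2^31 doesn't fit in a 32-bit int.  A minimal sketch of the
    failure mode (the variable names are illustrative, not the ones newfs
    actually uses):

        #include <stdio.h>
        #include <stdint.h>

        int
        main(void)
        {
                int64_t bytes = 1099511627776LL; /* 1t, per vnconfig -S */
                int64_t sectors = bytes / 512;   /* 2147483648 == 2^31 */

                /*
                 * Storing the sector count in a plain (signed 32-bit) int
                 * wraps it to INT32_MIN on two's-complement machines.
                 */
                int32_t truncated = (int32_t)sectors;

                printf("sectors (64-bit): %lld\n", (long long)sectors);
                printf("sectors (32-bit): %d\n", truncated);
                return (0);
        }

    Run it and the second printf reports -2147483648, matching the newfs
    complaint above.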

    I had a set of patches for newfs a year or two ago but never
    incorporated them.  We'd have to do a run-through on newfs
    to get it to newfs a swap-backed (i.e. 4K/sector) 1TB filesystem.

    Actually, this brings up a good point.  Drive storage is beginning
    to reach the limitations of FFS and our internal (512 bytes/block)
    block numbering scheme.  IBM is almost certain to come out with their
    500GB hard drive sometime this year.  We should probably do a bit
    of cleanup work to make sure that we can at least handle FFS's
    theoretical limitations for real.
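
    To put numbers on those theoretical limitations: a signed 32-bit block
    number counting 512-byte blocks tops out at 2^31 * 512 bytes = 1TB,
    which is precisely the wall Dave hit.  Spending the same 31 bits on
    4K sectors, as the swap-backed vn device does, stretches that to 8TB.
    A quick back-of-the-envelope check (made-up names, not code from the
    tree):

        #include <stdio.h>
        #include <stdint.h>

        int
        main(void)
        {
                /* 2^31 distinct signed 32-bit block numbers: 0..2^31-1 */
                int64_t nblocks = 1LL << 31;

                printf("512B blocks: %lld bytes\n",
                    (long long)(nblocks * 512));        /* 1TB */
                printf("4K blocks:   %lld bytes\n",
                    (long long)(nblocks * 4096));       /* 8TB */
                return (0);
        }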

    vnconfig -e -s labels -S 900g vn0
    newfs -i 1048576 -f 8192 -b 65536 -c 100 /dev/vn0c
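
    To spell out what those parameters buy (assuming the stock newfs
    defaults): -i 1048576 allocates one inode per megabyte of data instead
    of the default few KB per inode, shrinking the inode tables by a couple
    of orders of magnitude; -f 8192 and -b 65536 raise the fragment and
    block sizes; and -c 100 packs more cylinders into each cylinder group,
    so there are far fewer cylinder group structures to initialize.  That
    combination is what keeps the metadata pass from exhausting the swap
    backing the vn device.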

    mobile:/home/dillon> pstat -s
    Device          1K-blocks     Used    Avail Capacity  Type
    /dev/ad0s1b        524160   188976   335184    36%    Interleaved
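
    That is the state of swap after newfs'ing the 900g filesystem above:
    roughly 185MB of the 512MB swap partition in use, most of it backing
    the filesystem metadata.  With the default newfs parameters the
    metadata alone would have blown well past the available swap.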

                                                -Matt

