On 05/30/2010 03:10 PM, Mattias Pantzare wrote:

> On Sun, May 30, 2010 at 23:37, Sandon Van Ness <san...@van-ness.com> wrote:
>
>> I just wanted to make sure this is normal and expected. I fully
>> expected that as the file-system filled up I would see more disk space
>> being used than with other file-systems due to its features, but what I
>> didn't expect was ~500-600 GB to be missing from the total volume size
>> right at file-system creation.
>>
>> Comparing two systems, one ZFS on raidz2 and one JFS on raid6, here are
>> the differences I see:
>>
>> ZFS:
>> r...@opensolaris: 11:22 AM :/data# df -k /data
>> Filesystem            kbytes    used   avail capacity  Mounted on
>> data                 17024716800 258872352 16765843815     2%    /data
>>
>> JFS:
>> r...@sabayonx86-64: 11:22 AM :~# df -k /data2
>> Filesystem           1K-blocks      Used Available Use% Mounted on
>> /dev/sdd1            17577451416   2147912 17575303504   1% /data2
>>
>> zpool list shows the raw capacity, right?
>>
>> r...@opensolaris: 11:25 AM :/data# zpool list data
>> NAME    SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
>> data   18.1T   278G  17.9T     1%  1.00x  ONLINE  -
>>
>> OK, I would expect it to be rounded to 18.2, but that seems about right
>> for 20 trillion bytes (which is what 20 x 1 TB is):
>>
>> r...@sabayonx86-64: 11:23 AM :~# echo | awk '{print
>> 20000000000000/1024/1024/1024/1024}'
>> 18.1899
>>
>> Now minus two drives for parity:
>>
>> r...@sabayonx86-64: 11:23 AM :~# echo | awk '{print
>> 18000000000000/1024/1024/1024/1024}'
>> 16.3709
>>
>> Yet zfs list also shows the amount of storage as significantly smaller:
>>
>> r...@opensolaris: 11:23 AM :~# zfs list data
>> NAME   USED  AVAIL  REFER  MOUNTPOINT
>> data   164K  15.9T  56.0K  /data
>>
>> I would expect this to be 16.4T.
>>
>> Taking the df -k values, JFS gives me a total volume size of:
>>
>> r...@sabayonx86-64: 11:31 AM :~# echo | awk '{print
>> 17577451416/1024/1024/1024}'
>> 16.3703
>>
>> and ZFS gives:
>>
>> r...@sabayonx86-64: 11:31 AM :~# echo | awk '{print
>> 17024716800/1024/1024/1024}'
>> 15.8555
>>
>> So basically with JFS I see no decrease in total volume size, but a huge
>> difference with ZFS. Is this normal/expected? Can anything be disabled so
>> that I don't lose 500-600 GB of space?
>
> This may be the answer:
> http://www.cuddletech.com/blog/pivot/entry.php?id=1013
That is definitely interesting; however, I am seeing a discrepancy of more
than 1.6%.
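
If the article is talking about the 1/64 of pool capacity that ZFS holds
back, that reservation on its own would only account for about 1.6%:

echo | awk '{print 1/64*100}'
1.5625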

Using a newer df from GNU coreutils, I can pass -B to set the block size to
1 billion bytes, which is 1 GB on the hard-drive manufacturers' scale. On
the raid6/JFS system:
r...@sabayonx86-64: 03:14 PM :~# df -B 1000000000 /data2
Filesystem          1GB-blocks      Used Available Use% Mounted on
/dev/sdd1                18000         3     17998   1% /data2

On the ZFS system:

r...@opensolaris: 03:16 PM :/data# df -B 1000000000 /data
Filesystem          1GB-blocks      Used Available Use% Mounted on
data                     17434         1     17434   1% /data

Interestingly enough, I am seeing almost exactly double that: 3.14% by my
calculation. Maybe this was changed in newer versions to hold back a larger
reserve? I am running b134.
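
For reference, that percentage is just the gap between the two df totals
above:

echo | awk '{print (18000-17434)/18000*100}'
3.14444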
