Can you post zpool status?
Are your drives all the same size?
-r

On 30 May 2010, at 23:37, Sandon Van Ness wrote:

> I just wanted to make sure this is normal and expected. I fully
> expected that as the file-system filled up I would see more disk space
> being used than with other file-systems due to its features, but what
> I didn't expect was for ~500-600 GB to be missing from the total
> volume size right at file-system creation.
> 
> Comparing two systems, one running JFS on RAID6 and one running ZFS
> with raidz2, here are the differences I see:
> 
> ZFS:
> r...@opensolaris: 11:22 AM :/data# df -k /data
> Filesystem            kbytes    used   avail capacity  Mounted on
> data                 17024716800 258872352 16765843815     2%    /data
> 
> JFS:
> r...@sabayonx86-64: 11:22 AM :~# df -k /data2
> Filesystem           1K-blocks      Used Available Use% Mounted on
> /dev/sdd1            17577451416   2147912 17575303504   1% /data2
> 
> zpool list shows the raw capacity (before parity is subtracted), right?
> 
> r...@opensolaris: 11:25 AM :/data# zpool list data
> NAME    SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
> data   18.1T   278G  17.9T     1%  1.00x  ONLINE  -
> 
> OK, I would expect it to be rounded to 18.2, but that seems about
> right for 20 trillion bytes (which is what 20 x 1 TB comes to):
> 
> r...@sabayonx86-64: 11:23 AM :~# echo | awk '{print
> 20000000000000/1024/1024/1024/1024}'
> 18.1899
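> 
> (A side note on 18.1T vs. 18.2: this would be explained if zpool list
> truncates rather than rounds the displayed size, an assumption I have
> not verified. Truncating 18.1899 to one decimal place:
> 
> echo | awk '{printf "%.1f\n", int(18.1899*10)/10}'   # prints 18.1
> )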
> 
> Now minus two drives for parity:
> 
> r...@sabayonx86-64: 11:23 AM :~# echo | awk '{print
> 18000000000000/1024/1024/1024/1024}'
> 16.3709
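> 
> (Equivalently, scaling the raw size by the data-to-total-disk ratio
> (n - p)/n, with n = 20 disks and p = 2 parity, gives the same figure:
> 
> echo | awk '{print 18.1899*(20-2)/20}'   # prints 16.3709
> )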
> 
> Yet zfs list also reports the available storage as significantly
> smaller:
> 
> r...@opensolaris: 11:23 AM :~# zfs list data
> NAME   USED  AVAIL  REFER  MOUNTPOINT
> data   164K  15.9T  56.0K  /data
> 
> I would expect this to be 16.4T.
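> 
> (The gap between the expected 16.37 TiB and the reported 15.9 TiB
> works out to roughly 480 GiB:
> 
> echo | awk '{print (16.3709-15.9)*1024}'   # prints 482.202 (GiB)
> 
> which matches the ~500-600 GB discrepancy described above. A common
> explanation, offered here as an assumption rather than a verified
> fact, is that ZFS reserves a slice of the pool for its own metadata
> and copy-on-write headroom, which zfs list subtracts from AVAIL.)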
> 
> Taking the df -k values JFS gives me a total volume size of:
> 
> r...@sabayonx86-64: 11:31 AM :~# echo | awk '{print
> 17577451416/1024/1024/1024}'
> 16.3703
> 
> and the ZFS total is:
> 
> r...@sabayonx86-64: 11:31 AM :~# echo | awk '{print
> 17024716800/1024/1024/1024}'
> 15.8555
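> 
> (Taking the difference of the two df totals quantifies the loss
> directly:
> 
> echo | awk '{print (17577451416-17024716800)/1024/1024}'   # prints 527.129 (GiB)
> 
> i.e. about 527 GiB, squarely in the 500-600 GB range mentioned above.)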
> 
> So with JFS I see essentially no decrease in total volume size, but a
> huge difference with ZFS. Is this normal/expected? Can anything be
> disabled so that I don't lose 500-600 GB of space?

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
