On 29.07.09 16:59, Mark J Musante wrote:
On Tue, 28 Jul 2009, Glen Gunselman wrote:
# zpool list
NAME      SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
zpool1   40.8T   176K  40.8T   0%  ONLINE  -
# zfs list
NAME     USED  AVAIL  REFER  MOUNTPOINT
zpool1   364K  32.1T  28.8K  /zpool1
This is normal, and admittedly somewhat confusing (see CR 6308817).
Even if you had not created the additional zfs datasets, it still would
have listed 40T and 32T.
Here's an example using five 1G disks in a raidz:
-bash-3.2# zpool list
NAME    SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
tank   4.97G   132K  4.97G   0%  ONLINE  -
-bash-3.2# zfs list
NAME    USED  AVAIL  REFER  MOUNTPOINT
tank   98.3K  3.91G  28.8K  /tank
The AVAIL column in the zpool output shows 5G, whereas it shows 4G in
the zfs list. The difference is the 1G parity. If we use raidz2, we'd
expect 2G to be used for the parity, and this is borne out in a quick
test using the same disks:
-bash-3.2# zpool list
NAME    SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
tank   4.97G   189K  4.97G   0%  ONLINE  -
-bash-3.2# zfs list
NAME    USED  AVAIL  REFER  MOUNTPOINT
tank    105K  2.91G  32.2K  /tank
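To make the parity arithmetic explicit, here's a rough sketch of the expected usable space (plain Python, purely my own illustration, not anything ZFS itself ships):

def approx_usable_gib(disks, disk_gib, parity):
    # Usable space of a raidz vdev is roughly the total capacity minus
    # one disk per parity level; metadata and the internal reservation
    # mentioned below shave off a bit more.
    return (disks - parity) * disk_gib

print(approx_usable_gib(5, 1, 1))  # raidz1: 4, and zfs list shows ~3.91G
print(approx_usable_gib(5, 1, 2))  # raidz2: 3, and zfs list shows ~2.91G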
Contrast that with a five-way mirror:
-bash-3.2# zpool list
NAME    SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
tank   1016M  73.5K  1016M   0%  ONLINE  -
-bash-3.2# zfs list
NAME    USED  AVAIL  REFER  MOUNTPOINT
tank     69K   984M    18K  /tank
The mirror case shows one more thing worth mentioning: the difference between
the available space reported by zpool and by zfs is explained by a reservation
ZFS sets aside for internal purposes. It is 32MB or 1/64 of the pool capacity,
whichever is bigger (32MB in this example). The same reservation applies to
the RAID-Z cases as well, though it is harder to see there ;-)
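For what it's worth, that rule written out as a tiny sketch (plain Python, just illustrating the figures quoted above):

def reservation_mib(pool_mib):
    # 32MB or 1/64 of pool capacity, whichever is bigger.
    return max(32, pool_mib / 64)

# Mirror example: 1/64 of 1016M is under 32M, so 32M is set aside,
# leaving roughly the 984M that zfs list reports as AVAIL.
print(1016 - reservation_mib(1016))  # -> 984.0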
victor