On Tue, 28 Jul 2009, Glen Gunselman wrote:
> # zpool list
> NAME     SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
> zpool1  40.8T   176K  40.8T   0%  ONLINE  -
> # zfs list
> NAME    USED  AVAIL  REFER  MOUNTPOINT
> zpool1  364K  32.1T  28.8K  /zpool1
This is normal, and admittedly somewhat confusing (see CR 6308817). Even
if you had not created the additional zfs datasets, zpool list would still
have shown roughly 40T and zfs list roughly 32T; the difference is the
space set aside for parity, as the examples below show.
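If you want the same numbers in a script-friendly form, they are also
exposed as properties; a minimal sketch against your pool (these are the
standard zpool/zfs property names, nothing specific to your setup):

# zpool get size,capacity zpool1
# zfs get used,available zpool1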
Here's an example using five 1G disks in a raidz:
-bash-3.2# zpool list
NAME    SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
tank   4.97G   132K  4.97G   0%  ONLINE  -
-bash-3.2# zfs list
NAME    USED  AVAIL  REFER  MOUNTPOINT
tank   98.3K  3.91G  28.8K  /tank
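For reference, a throwaway pool like this can be built from any five ~1G
devices; here is a sketch using file-backed vdevs (the mkfile size and
/var/tmp paths are my choice, not necessarily what was used above):

-bash-3.2# mkfile 1g /var/tmp/d1 /var/tmp/d2 /var/tmp/d3 /var/tmp/d4 /var/tmp/d5
-bash-3.2# zpool create tank raidz /var/tmp/d1 /var/tmp/d2 /var/tmp/d3 /var/tmp/d4 /var/tmp/d5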
The AVAIL column in the zpool output shows 5G, whereas it shows 4G in the
zfs list. The difference is the 1G parity. If we use raidz2, we'd expect
2G to be used for the parity, and this is borne out in a quick test using
the same disks:
-bash-3.2# zpool list
NAME    SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
tank   4.97G   189K  4.97G   0%  ONLINE  -
-bash-3.2# zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
tank   105K  2.91G  32.2K  /tank
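Switching the same devices from raidz to raidz2 means destroying and
recreating the pool; roughly (again assuming the file-backed vdevs
sketched earlier):

-bash-3.2# zpool destroy tank
-bash-3.2# zpool create tank raidz2 /var/tmp/d1 /var/tmp/d2 /var/tmp/d3 /var/tmp/d4 /var/tmp/d5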
Contrast that with a five-way mirror:
-bash-3.2# zpool list
NAME    SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
tank   1016M  73.5K  1016M   0%  ONLINE  -
-bash-3.2# zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
tank    69K   984M    18K  /tank
Now they both show the pool capacity to be around 1G.
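For a mirror, zpool list already reports the deflated (single-copy) size
rather than the sum of all the devices, which is why the two commands
finally agree. The five-way mirror itself can be set up the same way
(same assumed file-backed devices):

-bash-3.2# zpool destroy tank
-bash-3.2# zpool create tank mirror /var/tmp/d1 /var/tmp/d2 /var/tmp/d3 /var/tmp/d4 /var/tmp/d5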
Regards,
markm