Ian Collins wrote:
> Ross wrote:
>> Is that accounting for ZFS overhead? I thought it was more than that
>> (but of course, it's great news if not) :-)
>
> A raidz2 pool with 8 500G drives showed 2.67TB free.
Same here. The ZFS overhead appears to be much smaller than that of
comparable UFS filesystems.
E.g. on 500GB Hitachi drives:
Total disk sectors available: 976743646 + 16384 (reserved sectors)
Part      Tag    Flag     First Sector        Size        Last Sector
  0        usr    wm              256      465.75GB        976743646
  1 unassigned    wm                0          0                  0
  2 unassigned    wm                0          0                  0
  3 unassigned    wm                0          0                  0
  4 unassigned    wm                0          0                  0
  5 unassigned    wm                0          0                  0
  6 unassigned    wm                0          0                  0
  8   reserved    wm        976743647        8.00MB        976760030
This is with an EFI label, which reports almost EXACTLY the amount
expected (500GB ~ 465.7GiB).
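
If you want to double-check that conversion yourself, it's a couple of
lines of Python (sector count taken from the label above; 512-byte
sectors assumed for these drives):

    sectors = 976743646                   # usable sectors from the EFI label
    nbytes  = sectors * 512               # 512B sectors -> 500,092,746,752 bytes
    print(f"{nbytes / 10**9:.2f} GB")     # ~500.09 GB  (decimal, as marketed)
    print(f"{nbytes / 2**30:.2f} GiB")    # ~465.76 GiB (binary, as the label shows)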
I'm using them in a 4-disk RAID-Z, so I lose 1 disk to parity.
The info is:
# zpool list
NAME    SIZE   USED   AVAIL   CAP   HEALTH   ALTROOT
data   1.81T  1.75T   64.0G   96%   ONLINE   -

# zfs list data
NAME    USED   AVAIL   REFER   MOUNTPOINT
data   1.31T   26.2G   41.2G   /data

# df -k /data
Filesystem   kbytes       used       avail      capacity   Mounted on
data         1433069568   43178074   27479559   62%        /data
Given the numbers, I would expect 3 x 465.75GiB = 3 x 488374272kB =
1465122816 kB.
So, 'df' reports my RAID-Z as being about 2.19% smaller than the
aggregate raw disk partition size.
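
For anyone who wants to reproduce that, here's the same division
spelled out (values copied from the label and df output above):

    data_disks = 3                        # 4-disk RAID-Z, 1 disk lost to parity
    slice_kb   = int(465.75 * 2**20)      # 465.75 GiB per slice = 488,374,272 kB
    expected   = data_disks * slice_kb    # 1,465,122,816 kB aggregate raw
    reported   = 1433069568               # 'kbytes' column from df -k
    print(f"overhead: {1 - reported / expected:.2%}")   # ~2.19%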
If the same numbers hold up for you, with 8 x 1.5TB in a RAID-Z:
1.5TB ~ 1.364TiB
7 x 1.364TiB ~ 9.55TiB (7 data disks after losing 1 to parity)
Lose 2.2% for ZFS overhead: 9.55TiB x 0.978 ~ 9.34TiB
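
Or as a reusable rule of thumb (a rough sketch; the 2.2% is just the
empirical figure from my pool above, not a guaranteed constant, and
'usable_tib' is a name I made up for this):

    def usable_tib(drives, drive_tb, parity=1, overhead=0.022):
        """Rough usable space of one RAID-Z vdev, in TiB."""
        data_tib = (drives - parity) * drive_tb * 10**12 / 2**40
        return data_tib * (1 - overhead)

    print(f"{usable_tib(8, 1.5):.2f} TiB")   # ~9.34 TiB for 8 x 1.5TB RAID-Z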
That's today's math lesson!
:-)
--
Erik Trimble
Java System Support
Mailstop: usca22-123
Phone: x17195
Santa Clara, CA