On 20/10/2010 14:48, Darren J Moffat wrote:
On 20/10/2010 14:03, Edward Ned Harvey wrote:
In a discussion a few weeks back, it was mentioned that the Best
Practices
Guide says something like "Don't put more than ___ disks into a single
vdev." At first, I challenged this idea, because I see no reason why a
21-disk raidz3 would be bad. It seems like a good thing.
If you have those 21 disks spread across 3 top-level vdevs, each a
raidz3 of 7 disks, then ZFS will stripe across 3 vdevs rather
than 1.
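The trade-off behind that advice can be sketched with some quick arithmetic. This is not from the original post; it is a minimal illustration assuming 21 equal-sized disks and ignoring metadata and slop overhead: each raidz3 vdev gives up 3 disks to parity, so narrower vdevs cost capacity but multiply the number of top-level vdevs ZFS stripes over (and hence random IOPS).

```python
# Hypothetical sketch: usable-capacity fraction for two layouts of
# the same 21 disks, one wide raidz3 vdev vs. three narrow ones.

DISKS = 21
PARITY = 3  # raidz3 dedicates 3 disks per vdev to parity

def usable_fraction(width, parity, vdevs):
    """Fraction of raw capacity left as data space (idealized)."""
    return vdevs * (width - parity) / (vdevs * width)

# One 21-disk raidz3 vdev: most space, but roughly one vdev's
# worth of random IOPS.
wide = usable_fraction(21, PARITY, 1)    # 18/21

# Three 7-disk raidz3 vdevs: less space, but writes stripe across
# 3 vdevs, so roughly 3x the random IOPS.
narrow = usable_fraction(7, PARITY, 3)   # 12/21

print(f"wide layout:   {wide:.3f} of raw capacity, 1 vdev")
print(f"narrow layout: {narrow:.3f} of raw capacity, 3 vdevs")
```

So the wide layout keeps about 86% of raw capacity against about 57% for the narrow one; the narrow layout buys back performance, which is exactly the tension the score table below tries to summarize.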
Here is an example from the Sun ZFS Storage Appliance GUI:
Each O is a score out of 5
------------------------------------------------------------------------
                                    AVAIL  PERF   CAPACITY
Double parity RAID                  OOOO_  OOO__  OOOO_    1.45T
Mirrored                            OOOO_  OOOOO  O____    808G
Single parity RAID, narrow stripes  OOO__  OOOO_  OO___    1.18T
Striped                             _____  OOOOO  OOOOO    1.84T
Triple mirrored                     OOOO_  OOOOO  _____    538G
Triple parity RAID, wide stripes    OOOO_  OO___  OOOOO    1.31T
------------------------------------------------------------------------
Yes, that's all rather simplistic, isn't it?!
Does it use a sinusoidal function when plotting the O's (e.g. 1.31T
scores more capacity O's than 1.45T)? ;)
Does the AVAIL score take into account the width of the stripe, the time
taken to resilver, controller topology, etc.?
The PERF score is utterly meaningless without reference to a workload
(e.g. read vs write, random vs sequential, big vs small, uniform vs
non-uniform, etc) and it's all without reference to SSDs.
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss