[amplification of Joe's point below...]
Moore, Joe wrote:
> Bob Friesenhahn wrote:
>> Your idea to stripe two disks per LUN should work. Make sure to use
>> raidz2 rather than plain raidz for the extra reliability. This
>> solution is optimized for high data throughput from one user.
> Striping two disks per LUN (RAID0 on 2 disks) and then adding a ZFS form
> of redundancy (either mirror or raidz[2]) would be an efficient use of
> space. There would be no additional space overhead caused by running
> that way.
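Right, since RAID0 adds no parity of its own, the usable fraction comes out
the same either way. A quick sketch with made-up numbers (12 hypothetical
500 GB disks, either as six 2-disk LUNs in one 5+1 raidz or as raw disks in
two 5+1 raidz vdevs):

S = 500  # hypothetical size of each disk, GB

# Option A: six 2-disk RAID0 LUNs presented to ZFS, in one 5+1 raidz.
# Five of the six LUNs' worth of space is usable; one LUN's worth goes
# to parity.
usable_luns = 5 * (2 * S)

# Option B: the same 12 disks given to ZFS directly as two 5+1 raidz vdevs.
usable_disks = 2 * (5 * S)

print(usable_luns, usable_disks)  # identical: 5000 GB usable either way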
It would also cut the MTBF of each leaf vdev in half, since a 2-disk
stripe fails when either of its disks fails. In general, better
reliability at the vdev level is a good thing. For example, consider the
case where we have 6 same-sized disks. We can configure them in two
different ways using 2+1 RAID-5 sets:
configuration    MTTDL[1]
--------------------------
RAID-5+0          188,297
RAID-0+5           94,149
The MTTDL[1] model does consider MTTR, which is a combination of
the logistical response time and reconstruction time. Unless you
have zero response time and reconstruction time, RAID-0+5 is not
as good as RAID-5+0.
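For anyone who wants to play with the model, here is a small Python sketch
of the MTTDL[1] arithmetic for the two layouts. The MTBF and MTTR values
below are invented (the inputs behind the table are not shown here), so the
absolute numbers will not match the table, but the roughly 2x gap between
RAID-5+0 and RAID-0+5 falls out regardless of the inputs.

HOURS_PER_YEAR = 24 * 365

def mttdl1(mtbf_hours, n, mttr_hours):
    # MTTDL[1] of one single-parity group of n devices, in hours: data
    # loss requires a second failure among the remaining n-1 devices
    # while the first failed device is being repaired.
    return mtbf_hours ** 2 / (n * (n - 1) * mttr_hours)

disk_mtbf = 1_000_000   # hypothetical per-disk MTBF, hours
mttr = 48               # hypothetical response + reconstruction, hours

# RAID-5+0: two 2+1 RAID-5 sets of plain disks, striped together;
# data loss in either set loses the pool, so divide by 2.
raid50 = mttdl1(disk_mtbf, 3, mttr) / 2

# RAID-0+5: one 2+1 RAID-5 set whose members are 2-disk RAID0 stripes;
# each member fails when either of its disks fails, so its MTBF is
# roughly half a single disk's.
raid05 = mttdl1(disk_mtbf / 2, 3, mttr)

print(f"RAID-5+0: {raid50 / HOURS_PER_YEAR:,.0f} years")
print(f"RAID-0+5: {raid05 / HOURS_PER_YEAR:,.0f} years")
print(f"ratio: {raid50 / raid05:.1f}x")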
> Note, however, that if you do this, ZFS must resilver the larger LUN in
> the event of a single disk failure on the backend. This means a longer
> time to rebuild, and a lot of "extra" work on the other (non-failed)
> half of the RAID0 stripe.
ZFS resilver tends to be I/O bound in one of two ways: write bandwidth
on the resilvering vdev, or IOPS on the surviving vdevs. You might
consider this when you build vdevs from "hardware" RAID LUNs.
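As a rough back-of-the-envelope (the sizes and throughput figures below are
made up, and a real resilver walks allocated data rather than raw capacity),
the extra rebuild work behind a 2-disk LUN looks like this:

def resilver_hours(data_gb, target_write_mbps, survivor_read_mbps):
    # Time to rewrite the replacement device, limited by the slower of
    # writing the target or reading back from the surviving devices.
    effective_mbps = min(target_write_mbps, survivor_read_mbps)
    return data_gb * 1024 / effective_mbps / 3600

# Replacing a single 500 GB disk vs. a 1 TB two-disk RAID0 LUN:
# twice the data has to be reconstructed behind one replacement LUN.
print(f"{resilver_hours(500, 80, 120):.1f} h")    # ~1.8 h
print(f"{resilver_hours(1000, 80, 120):.1f} h")   # ~3.6 h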
-- richard