Jim Mauro wrote:
(I'm probably not the best person to answer this, but that has never stopped me
before, and I need to give Richard Elling a little more time to get the Goats,
Cows and Horses fed, sip his morning coffee, and offer a proper response...)
chores are done, wading through the morning e-mail...
Would it benefit us to have the disks set up as a raidz along with
the hardware raid5 that is already set up too?
Way back when, we called such configurations "plaiding", which described a
host-based RAID configuration that criss-crossed hardware RAID LUNs. In doing
such things, we had potentially better data availability with a configuration
that could survive more failure modes. Alternatively, we used the hardware RAID
for the availability configuration (hardware RAID 5), and used host-based RAID
to stripe across hardware RAID5 LUNs for performance. Seemed to work pretty well.
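A minimal sketch of the two "plaiding" variants described above. The LUN device
names (c2t0d0, etc.) are hypothetical placeholders; substitute the LUNs your
storage frame actually presents:

```shell
# Variant 1: host-based raidz across hardware RAID5 LUNs
# (can survive the loss of an entire RAID5 LUN)
zpool create tank raidz c2t0d0 c3t0d0 c4t0d0

# Variant 2: plain stripe across hardware RAID5 LUNs
# (hardware provides the redundancy; the host-side stripe adds performance)
zpool create tank c2t0d0 c3t0d0 c4t0d0
```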
Yep, there are various ways to do this and, in general, the more copies
of the data you have, the better reliability you have. Space is also
fairly easy to calculate. Performance can be tricky, and you may need to
benchmark with your workload to see which is better, due to the difficulty
in modeling such systems.
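Space is indeed the easy part. A quick back-of-the-envelope sketch (the disk
counts and the 500 GB size are made-up examples; raid5 and raidz1 each lose
roughly one disk per group to parity, though raidz1's variable-width stripes
make its overhead approximate):

```shell
# six 500 GB disks in a single hardware raid5 LUN:
echo $(( (6-1)*500 ))             # usable GB: (disks - 1) * size

# plaid: raidz1 across three 4-disk raid5 LUNs (12 disks total):
# each LUN presents (4-1)*500 GB; raidz1 over 3 LUNs keeps 2 of them
echo $(( (3-1) * (4-1)*500 ))     # usable GB out of 6000 GB raw
```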
In theory, a raidz pool spread across some number of underlying hardware raid5
LUNs would offer protection against more failure modes, such as the loss of an
entire raid5 LUN. So from a failure protection/data availability point of view,
it offers some benefit. Now, whether or not you experience a real, measurable
benefit over time is hard to say. Each additional level of protection/redundancy
has diminishing returns, oftentimes at a dramatic incremental cost (e.g. getting
from "four nines" to "five nines").
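To put "four nines" versus "five nines" in concrete terms, each added nine cuts
the allowed downtime per year by a factor of ten (525960 is the number of
minutes in a 365.25-day year):

```shell
# allowed downtime per year, in minutes, at a given availability level
awk 'BEGIN { printf "%.1f\n", (1 - 0.9999)  * 525960 }'   # four nines
awk 'BEGIN { printf "%.1f\n", (1 - 0.99999) * 525960 }'   # five nines
```

So the jump from four to five nines buys you roughly 47 fewer minutes of
downtime a year, which is where the dramatic incremental cost comes in.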
If money was no issue, I'm sure we could come up with an awesome solution :-)
Or will this double raid slow our performance, with both a software and
hardware raid setup?
You will certainly pay a performance penalty: using raidz across the raid5 LUNs
will reduce deliverable IOPS from the raid5 LUNs. Whether or not the performance
trade-off is worth the RAS gain varies based on your RAS and data availability
requirements.
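As a rough illustration of where that IOPS penalty comes from (a simplified
model, not a benchmark; the usual rule of thumb is that a raidz vdev must read
a full stripe, so it delivers roughly the random-read IOPS of a single
underlying device, and the per-LUN IOPS figure here is invented):

```shell
# three raid5 LUNs, each assumed good for ~1000 random-read IOPS
echo $(( 3 * 1000 ))   # plain stripe: every LUN services independent reads
echo $(( 1 * 1000 ))   # one raidz vdev over the same LUNs: ~single-LUN IOPS
```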
Fast, inexpensive, reliable: pick two.
Or would raidz setup be better than the hardware raid5 setup?
Assuming a robust raid5 implementation with battery-backed NVRAM (to protect
against the "write hole" and partial stripe writes), I think a raidz zpool
covers more of the datapath than a hardware raid5 LUN, but I'll wait for
Richard to elaborate here (or tell me I'm wrong).
In general, you want the data protection in the application, or as close to
the application as you can get. Since programmers tend to be lazy (Gosling
said it, not me! :-) most rely on the file system and underlying constructs
to ensure data protection. So, having ZFS manage the data protection will
always be better than having some box at the other end of a wire managing
the protection.
Also, if we do set the disks up as a raidz, would it benefit us more if
we specified each disk in the raidz, or created them as LUNs and then
specified the setup in raidz?
Isn't this the same question as the first question? I'm not sure what
you're asking here...
The questions you're asking are good ones, and date back to the decades-old
struggle around configuration tradeoffs for performance / availability / cost.
My knee-jerk reaction is that one level of RAID, either hardware raid5 or ZFS
raidz, is sufficient for availability, and keeps things relatively simple (and
simple also improves RAS). The advantage host-based RAID has always had over
hardware RAID is the ability to create software LUNs (like a raidz1 or raidz2
zpool) across physical disk controllers, which may also cross SAN switches,
etc. So, were it me, I'd go with non-hardware-RAID5 devices from the storage
frame, and create raidz1 or raidz2 zpools across controllers.
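A sketch of that layout (controller and target names like c1t0d0 are
hypothetical; use whatever your frame presents):

```shell
# raidz2 across six disks spread over three controllers (c1, c2, c3).
# raidz2 tolerates two device failures, so the pool can also survive
# losing one controller path (two disks at once).
zpool create tank raidz2 c1t0d0 c1t1d0 c2t0d0 c2t1d0 c3t0d0 c3t1d0
```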
This is reasonable.
But, that's me...
:^)
/jim
The important thing is to protect your data. You have lots of options here,
so we'd need to know more precisely what the other requirements are before
we could give better advice.
-- richard
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss