On Jan 29, 2007, at 14:17, Jeffery Malloch wrote:
Hi Guys,
SO...
From what I can tell from this thread, ZFS is VERY fussy about
managing writes, reads and failures. It wants to be bit perfect.
So if you use the hardware that comes with a given solution (in my
case an Engenio 6994) to manage failures, you risk a) bad writes
that don't get caught due to corruption between the write cache and
disk, and b) failures due to data changes that the hardware makes
when it tries to fix itself and that ZFS is unaware of.
So now I have a $70K+ lump that's useless for what it was designed
for. I should have spent $20K on a JBOD. But since I didn't do
that, it sounds like a traditional model (i.e. UFS et al) works best
for the type of hardware I have. No sense paying for something and
not using it. And by using ZFS just as a method for ease of file
system growth and management, I risk much more corruption.
The other thing I haven't heard is why NOT to use ZFS, or from
people who don't like it for one reason or another.
Comments?
I put together this chart a while back .. I should probably update it
for RAID6 and RAIDZ2
#   ZFS  ARRAY HW     CAPACITY  COMMENTS
--  ---  --------     --------  --------
1   R0   R1           N/2       hw mirror - no zfs healing
2   R0   R5           N-1       hw R5 - no zfs healing
3   R1   2 x R0       N/2       flexible, redundant, good perf
4   R1   2 x R5       (N/2)-1   flexible, more redundant, decent perf
5   R1   1 x R5       (N-1)/2   parity and mirror on same drives (XXX)
6   RZ   R0           N-1       standard RAID-Z, no mirroring
7   RZ   R1 (tray)    (N/2)-1   RAIDZ+1
8   RZ   R1 (drives)  (N/2)-1   RAID1+Z (highest redundancy)
9   RZ   3 x R5       N-4       triple parity calculations (XXX)
10  RZ   1 x R5       N-2       double parity calculations (XXX)
(Note: I included both the cases where you have multiple arrays with
a single LUN per vdisk (say) and the case where you only have a
single array split into multiple LUNs.)
The way I see it, you're better off picking either controller parity
or ZFS parity .. there's no sense in computing parity multiple times
unless you have cycles to spare and don't mind the performance hit ..
so the questions you should really answer before you choose the
hardware are: what redundancy-to-capacity balance do you want? and do
you want to compute RAID in ZFS host memory or out on a dedicated
blackbox controller? I would say something about double caching too,
but I think that's moot since you'll always cache in the ARC if you
use ZFS the way it's currently written.
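One way to see the tradeoff in practice (again just a sketch, with
made-up device names, and you'd create one pool or the other, not
both): if ZFS owns the redundancy a scrub can detect and repair bad
blocks, while on a single hardware-RAID LUN it can only detect them:

  # ZFS computes the parity: scrub can self-heal
  zpool create tank raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0

  # controller computes the parity, ZFS sees one big LUN:
  # scrub reports checksum errors but can't repair them
  zpool create tank c2t0d0

  # either way, this is how you'd check
  zpool scrub tank
  zpool status -v tank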
Other feasible filesystem options for Solaris are UFS, QFS, or VxFS,
with SVM or VxVM for volume management if you're so inclined .. it
all depends on your budget and application. There are currently
tradeoffs in each one, and, contrary to some opinions, the death of
any of these has been grossly exaggerated.
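If you do go the traditional route, a minimal SVM + UFS mirror looks
something like this. The slice names and mount point are placeholders,
and you'd lay out the metadb slices to suit your own disks:

  # state database replicas on two disks
  metadb -a -f -c 3 c1t0d0s7 c1t1d0s7

  # two-way mirror d10 built from submirrors d11 and d12
  metainit d11 1 1 c1t0d0s0
  metainit d12 1 1 c1t1d0s0
  metainit d10 -m d11
  metattach d10 d12

  # put UFS on it and mount
  newfs /dev/md/rdsk/d10
  mount /dev/md/dsk/d10 /export/data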
---
.je
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss