> This is simply not true. ZFS would protect against the same type of
> errors seen on an individual drive as it would on a pool made of HW
> RAID LUN(s). It might be overkill to layer ZFS on top of a LUN that
> is already protected in some way by the device's internal RAID code,
> but it does not "make your data susceptible to HW errors caused by
> the storage subsystem's RAID algorithm, and slow down the I/O".

I disagree, and vehemently at that. I maintain that if HW RAID is used, the 
chance of data corruption is much higher, and ZFS would have a lot more 
repairing to do than it would if it were used directly on the disks. Problems 
with HW RAID algorithms have been plaguing us for at least 15 years. The 
venerable Sun StorEdge T3 comes to mind!

Further, while it is perfectly logical to me that doing RAID calculations twice 
is slower than doing them once, you maintain that this is not the case, perhaps 
because one of the calculations is implemented in FW/HW?
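
To put a number on "calculating twice", here is a rough, purely illustrative 
Python sketch (my own toy model, not ZFS's or any controller's actual code) of 
the parity work involved. With ZFS directly on disks the XOR-style parity is 
computed once, in software; with ZFS on top of a HW RAID-5 LUN the controller 
firmware repeats the same class of computation on every write:

# Toy model only: the stripe layout and names below are hypothetical.
from functools import reduce

def xor_parity(stripe_blocks):
    # XOR all blocks in a stripe, column by column (what RAID-5/raidz1 do).
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*stripe_blocks))

data = [bytes([i]) * 4096 for i in range(4)]   # one stripe of four 4 KiB blocks

# ZFS on raw disks: one parity computation, in software.
zfs_parity = xor_parity(data)

# ZFS on a HW RAID-5 LUN: ZFS computes its parity, and then the controller
# computes parity again over whatever blocks it receives -- the same class of
# work done twice for every write.
controller_parity = xor_parity(data + [zfs_parity])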

Well, why don't you simply try it out? Once with both HW RAID and ZFS, and once 
with just ZFS directly on the disks? The RAID HW is very likely to have a 
slower CPU (or CPUs) than any modern system that ZFS will be running on. But 
even if we assume that the HW RAID's CPU is as fast as or faster than the CPU 
in the server, you still have TWICE the amount of work that has to be performed 
for every write: once by the hardware and once by the software (ZFS). Caches 
might help some, but I fail to see how double the amount of work (and hidden, 
abstracted complexity) would be as fast as or faster than just using ZFS 
directly on the disks.
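
If it helps, here is the kind of quick-and-dirty test I have in mind; a minimal 
sketch, assuming two test pools already exist and are mounted at the 
hypothetical paths /hwraid_pool (zpool on a HW RAID LUN) and /rawdisk_pool 
(zpool on raw disks). It just measures sequential write throughput to each:

import os, time

def write_test(path, total_mb=512, block_kb=128):
    # Write total_mb of data in block_kb chunks and return MB/s.
    block = b"\0" * (block_kb * 1024)
    fd = os.open(os.path.join(path, "testfile"), os.O_WRONLY | os.O_CREAT, 0o600)
    start = time.time()
    for _ in range(total_mb * 1024 // block_kb):
        os.write(fd, block)
    os.fsync(fd)                      # make sure the data actually hit the pool
    os.close(fd)
    return total_mb / (time.time() - start)

print("ZFS on HW RAID LUN :", write_test("/hwraid_pool"), "MB/s")
print("ZFS on raw disks   :", write_test("/rawdisk_pool"), "MB/s")

Run it both ways and compare the numbers.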
 
 