Yes, agreed.
However, for enterprises with risk management built into their decision-making processes as a key factor -- what if an integrity risk shows up in Joe Tucci's personal network data? OMG, big impact to the SLA when the SLA is critical... [right, Tim?] ;-)

-z

----- Original Message -----
From: "Bob Friesenhahn" <bfrie...@simple.dallas.tx.us>
To: "JZ" <j...@excelsioritsolutions.com>
Cc: <zfs-discuss@opensolaris.org>
Sent: Friday, January 02, 2009 8:21 PM
Subject: Re: [zfs-discuss] ZFS vs HardWare raid - data integrity?

> On Fri, 2 Jan 2009, JZ wrote:
>>
>> I have not done a cost study on ZFS towards the 9999999s, but I guess we
>> can do better with more system- and I/O-based assurance than with RAID
>> checksums alone, so customers can get to more 99998888s with less
>> redundant hardware and fewer software feature enablement fees.
>
> Even with a fairly trivial ZFS setup using hot-swap drive bays, the
> primary factors impacting "availability" are non-disk factors such as the
> motherboard, interface cards, and operating system bugs. Unless you step
> up to an exotic fault-tolerant system ($$$), an entry-level server will
> offer as much availability as a mid-range server, and as many "enterprise"
> servers. In fact, the simple entry-level server may offer more
> availability because it is simpler. The charts on Richard Elling's blog
> make that pretty obvious.
>
> It is best not to confuse "data integrity" with "availability".
>
> Bob
> ======================================
> Bob Friesenhahn
> bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
> GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss