>The user DEFINITELY isn't expecting 500000000 bytes, or what you meant to say
>500000000000 bytes, they're expecting 500GB.  You know, 536,870,912,000 bytes.
>But even if the drive mfg's calculated it correctly, they wouldn't even be
>getting that due to filesystem overhead.

I doubt there are any users left in the world who would expect that -- the 
drive manufacturers have made it clear for the past 20 years that 500 GB means 
500*10^9 bytes, not 500*2^30.  Even the OS vendors have finally (for the most 
part) started displaying GB instead of GiB.
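
To put numbers on it, here's a quick sketch of the arithmetic (plain Python, 
nothing vendor-specific):

    # The two readings of "500 GB", in bytes.
    decimal_bytes = 500 * 10**9     # what the drive label means
    binary_bytes = 500 * 2**30      # the old 500 "GiB" reading

    print(decimal_bytes)                  # 500000000000
    print(binary_bytes)                   # 536870912000
    print(binary_bytes - decimal_bytes)   # 36870912000 -- the "missing" ~37 GB
    print(decimal_bytes / 2**30)          # ~465.66 -- what a GiB-reporting OS shows

So a drive sold as 500 GB shows up as roughly 465.7 GiB before any filesystem 
overhead, which is where most of the perceived shortfall comes from.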

>And again, the reason for [certified devices] is 99% about making money, not a 
>technical one.

Yes and no.  From my experience at three storage vendors, it *is* about making 
money (aren't all corporate decisions supposed to be?), but it's less about 
making money by selling overpriced drives than about not *losing* money trying 
to support hardware that doesn't quite work.  It's a dirty little secret of the 
drive/controller/array industry (and networking, for that matter) that two 
arbitrary pieces of hardware which are supposed to conform to a standard will 
usually, mostly, work together -- but not always, and when they fail, the 
problem is very difficult to track down (usually impossible in a customer 
environment).  By limiting which drives, controllers, firmware revisions, etc. 
are supported, we reduce the support burden immensely and can ensure that we 
actually test what a customer is using.

A few specific examples I've seen personally:

* SCSI drives with caches that would corrupt data if the mode pages were set 
wrong.
* SATA adapters which couldn't always complete commands simultaneously on 
multiple channels (leading to timeouts or I/O errors).
* SATA controllers which couldn't quite deal with timing at one edge of the 
spec ... and drives which pushed the timing to that edge under the right 
conditions.
* Drive firmware which silently dropped commands when the queue depth got too 
large.

All of these would 'mostly work', especially in desktop use (few outstanding 
commands, no changes to default parameters, no use of task control messages), 
but would fail in other environments in ways that were almost impossible to 
track down without specialized hardware.
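
As a purely illustrative sketch of the queue-depth case: the usual mitigation 
is for the software stack to cap the number of commands it keeps outstanding 
per drive.  This is toy Python, not any vendor's actual driver code, and 
MAX_SAFE_DEPTH is a hypothetical limit that a qualification process would 
establish:

    import collections

    MAX_SAFE_DEPTH = 32   # hypothetical per-drive limit found during qualification

    class ThrottledQueue:
        """Never let the drive see more outstanding commands than the safe depth."""
        def __init__(self, issue_fn, max_depth=MAX_SAFE_DEPTH):
            self.issue_fn = issue_fn           # callable that actually sends the I/O
            self.max_depth = max_depth
            self.in_flight = 0
            self.pending = collections.deque()

        def submit(self, cmd):
            if self.in_flight < self.max_depth:
                self.in_flight += 1
                self.issue_fn(cmd)
            else:
                self.pending.append(cmd)       # hold back rather than overfill the queue

        def complete(self):
            # Called when the drive acknowledges a command.
            self.in_flight -= 1
            if self.pending:
                self.submit(self.pending.popleft())

The point isn't the code; it's that a qualified hardware list is how you learn 
what that safe depth (or mode-page setting, or timing margin) actually is for a 
given drive/firmware combination.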

When I was at a software-only RAID company, we did support nearly arbitrary 
hardware -- but we had a "compatible" list of what we'd tested, and for 
everything else, users were pretty much on their own.  That's OK for home 
users, but for critical data, the greatly increased risk is not worth saving a 
few thousand (or even tens of thousands of) dollars.
-- 
This message posted from opensolaris.org