On Sun, Jan 18, 2009 at 3:39 PM, Richard Elling <richard.ell...@sun.com> wrote:

> Tim wrote:
> It is naive to think that different storage array vendors
> would care about people trying to use another array vendor's
> disks in their arrays. In fact, you should get a flat,
> impersonal, "not supported" response.
>

But we aren't talking about me trying to stick disks into Sun's arrays.
We're talking about how this open-source, supposedly all-in-one volume
manager and filesystem handles new disks.  You know, the one that was
supposed to make all of our lives infinitely easier and simplify managing
lots and lots of disks, whether they be inside an official Sun array or
just a server running Solaris.


> What vendors can do, is make sure that if you get a disk
> which is supported in a platform and replace it with another
> disk which is also supported, and the same size, then it will
> just work. In order for this method to succeed, a least
> common size is used.


The ONLY reason vendors put special labels or firmware on disks is to force
you to buy them direct.  Let's not pretend there's something magical about
an "HDS" 1TB drive or a "Sun" 1TB drive.  They're rolling off the same line
as everyone else's.  The way they ensure the disk works is by
short-stroking them from the start...

It's *naive* to claim it's any sort of technical limitation.



> Vendors can change the default label, which is how it is
> implemented.  For example, if we source XYZ-GByte disks
> from two different vendors intended for the same platform,
> then we will ensure that the number of available sectors
> is the same, otherwise the FRU costs would be very high.
> No conspiracy here... just good planning.
>

The number of blocks on the disks won't be the same, which is why they're
right-sized per above.  Do I really need to start pulling disks from my Sun
systems to prove this point?  Sun does not require exact block counts any
more than HDS, EMC, or NetApp does.  So for the life of the server, I can
call in and get the exact same part that broke in the box from Sun, because
they've got contracts with the drive mfg's.  What happens when I'm out of
the supported life of the system?  Oh, I just buy a new one?  Because having
my volume manager use a bit of intelligence and short-stroke the disk like I
would expect from the start is a *bad idea*.

The sad part about all of this is that the $15 Promise RAID controller in my
desktop short-strokes by default, and you're telling me ZFS can't, or won't.
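
To put a number on what I mean by "a bit of intelligence": the right-sizing
itself is trivial arithmetic.  A rough Python sketch, with made-up margins
and drive sizes, and not anything ZFS actually implements:

    SECTOR = 512                    # assumed bytes per sector
    MARGIN = 0.01                   # hold back ~1% of raw capacity (made-up figure)
    ROUND_TO = 1024 * 1024 * 1024   # then snap down to a 1 GiB boundary

    def right_size(raw_bytes):
        """Conservative usable size for a new disk, so a slightly smaller
        replacement from another vendor still fits later."""
        usable = int(raw_bytes * (1 - MARGIN))  # shave off the safety margin
        usable -= usable % ROUND_TO             # round down to a coarse boundary
        return usable - (usable % SECTOR)       # keep it sector-aligned

    # Two hypothetical "1 TB" drives with slightly different raw sizes:
    drive_a = 1000204886016
    drive_b = 1000171331584
    print(right_size(drive_a) == right_size(drive_b))   # True

Give up a fraction of a percent up front and any "1 TB" drive becomes a valid
replacement for any other.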


>
> There is no fuzzy math.  Disk vendors size by base 10.
> They explicitly state this in their product documentation,
> as business law would expect.
> http://en.wikipedia.org/wiki/Mebibyte
>  -- richard
>

If it weren't fuzzy math, drive mfg's wouldn't lose in court over false
advertising, would they?
http://apcmag.com/seagate_settles_class_action_cash_back_over_misleading_hard_drive_capacities.htm
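
The gap that suit was about is simple enough to put a number on (Python,
just to spell out the arithmetic):

    # What a "1 TB" (base-10) drive looks like in the base-2 units an OS reports
    marketed_bytes = 10**12                     # the vendor's 1 TB
    print(round(marketed_bytes / 2.0**30, 1))   # 931.3 -- what most tools label "GB"
    print(round(marketed_bytes / 2.0**40, 3))   # 0.909 -- ~9% less than the box says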



At the end of the day, this back and forth changes nothing though.  The
default behavior for ZFS importing a new disk should be right-sizing by a
fairly conservative amount if you're (you as in Sun, not you as in Richard)
going to continue to market it as you have in the past.  It most definitely
does not eliminate the same old pains of managing disks with Solaris if I
have to start messing with labels and slices again.  The whole point of
merging a volume manager/filesystem/etc. is to take away that pain; going
back to hand-managing labels and slices is not even remotely manageable over
the long term.
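
And for what it's worth, the check the volume manager would have to make at
replacement time is just as trivial.  Another sketch with hypothetical
numbers (the ~990 GB figure assumes the right-sizing sketch above), not
actual ZFS code:

    def replacement_fits(reserved_bytes, new_disk_raw_bytes):
        """Would this disk satisfy a vdev that was right-sized to
        reserved_bytes when the pool was built?"""
        return new_disk_raw_bytes >= reserved_bytes

    reserved = 989989961728                            # right-sized "1 TB" vdev
    print(replacement_fits(reserved, 1000171331584))   # True: another vendor's 1 TB fits
    print(replacement_fits(reserved, 985000000000))    # False: a genuinely smaller disk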

--Tim
