On Sun, Jan 18, 2009 at 12:19 PM, Bob Friesenhahn <bfrie...@simple.dallas.tx.us> wrote:

> On Sun, 18 Jan 2009, Will Murnane wrote:
> > That's easy to say, but what if there were no larger alternative?
> > Suppose I have a pool composed of those 1.5TB Seagate disks, and
> > Hitachi puts out drives of the "same" capacity that are actually
> > slightly smaller.  A drive fails in my array, I buy a Hitachi disk to
> > replace it, and it doesn't work.  If I can't get a large enough drive
> > to replace the missing disk with, it'd be a shame to have to destroy
> > and recreate the pool on smaller media.
>
> What do you propose that OpenSolaris should do about this?  Should
> OpenSolaris use some sort of table of "common size" drives, or an
> algorithm that determines certain discrete usage values based on
> declared drive sizes and a margin for error?  What should the
> OpenSolaris of today do with the 20TB disk drives of tomorrow?  What
> should the margin for error on a 30TB disk drive be?  Is it OK to
> arbitrarily ignore 3/4TB of storage space?
>
> If the "drive" is actually a huge 20TB LUN exported from a SAN RAID
> array, how should the margin for error be handled in that case?
>
> Bob
> ======================================
> Bob Friesenhahn
> bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
> GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
>
>
Take a look at the drives on the market, figure out a reasonable
percentage, and call it a day.  If a significant issue crops up with the
"20TB" drives of the future, file a bug report and a fix, just like for
every other issue that comes up.
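For what it's worth, the percentage approach is simple enough to sketch.
The C below is not ZFS code, and the 1% margin and every name in it are
made up purely to illustrate the idea: round the declared size down by a
fixed tolerance, and accept any replacement at least that large.

    /*
     * Illustrative sketch only -- not actual ZFS code.  The 1% margin
     * and all identifiers here are hypothetical.
     */
    #include <stdio.h>
    #include <stdint.h>
    #include <inttypes.h>

    #define SIZE_MARGIN_PCT 1ULL  /* hypothetical tolerance */

    static uint64_t
    usable_size(uint64_t declared_bytes)
    {
            /* Reserve SIZE_MARGIN_PCT percent of the declared size as slack. */
            return declared_bytes - (declared_bytes / 100) * SIZE_MARGIN_PCT;
    }

    static int
    replacement_fits(uint64_t old_disk, uint64_t new_disk)
    {
            /* A replacement only needs to cover the rounded-down size. */
            return new_disk >= usable_size(old_disk);
    }

    int
    main(void)
    {
            uint64_t seagate = 1500301910016ULL;  /* a nominal "1.5TB" drive */
            uint64_t hitachi = 1500000000000ULL;  /* a slightly smaller "1.5TB" */

            printf("usable size: %" PRIu64 " bytes\n", usable_size(seagate));
            printf("replacement fits: %s\n",
                replacement_fits(seagate, hitachi) ? "yes" : "no");
            return 0;
    }

The trade-off is exactly the one Bob raises: at 1%, a 30TB drive
silently gives up 300GB, so any margin would have to be a deliberate
choice rather than a magic number.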


--Tim
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
