On Sun, Jan 18, 2009 at 18:19, Bob Friesenhahn
<bfrie...@simple.dallas.tx.us> wrote:
> What do you propose that OpenSolaris should do about this?
Take the drive size, divide by 100, and round that down to two
significant digits.  Then floor the drive size to a multiple of that
value.  This method wastes no more than 1% of the disk space, and
gives a reasonable (I think) number.
For example: I have a machine with a "250GB" disk that is 251000193024
bytes long.
$ python
>>> n=str(251000193024//100)
>>> int(n[:2] + "0" * (len(n)-2)) * 100
250000000000L
So treat this volume as being 250 billion bytes long, exactly.
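Packaged as a function (a sketch; `right_size` is just my name for it,
nothing in ZFS), the same computation looks like:

```python
def right_size(nbytes):
    """Round a disk size down to two significant digits of its
    1%-chunk, wasting at most 1% of the capacity.  Assumes a
    disk-scale size (at least a few hundred bytes)."""
    n = str(nbytes // 100)           # 1% of the disk, as a string
    # Keep the first two digits, zero the rest, then scale back up.
    return int(n[:2] + "0" * (len(n) - 2)) * 100

print(right_size(251000193024))      # my "250GB" disk -> 250000000000
```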

Most drives are sold with two significant digits in the size: 320 GB,
400 GB, 640 GB, 1.0 TB, etc.  I don't see this changing any time
soon; unless someone starts selling a 1.25 TB drive or something, two
digits will suffice.  Even then, this formula would give you 96%
(1.2/1.25) of the disk's capacity.

Note that this method also works for disks that come in exactly at
the nominal size: suppose I have a disk that is exactly 250 billion
bytes long.  The formula produces exactly 250 billion bytes as the
size to treat it as.  Thus, replacing my 251-billion-byte disk with a
250-billion-byte one will not be a problem.

> Is it ok to arbitrarily ignore 3/4TB of storage
> space?
If it's less than 1% of the disk space, I don't see a problem doing so.

> If the "drive" is actually a huge 20TB LUN exported from a SAN RAID array,
> how should the margin for error be handled in that case?
So make it configurable if you must.  If no partition table exists
when "zpool create" is called, make it "right-size" the disks, but if
a pre-existing EFI label is there, use it instead.  Or make a flag
that tells zpool create not to "right-size".

Will
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss