On Sat, Apr 14, 2012 at 09:04:45AM -0400, Edward Ned Harvey wrote:
> Then, about 2 weeks later, the support rep emailed me to say they
> implemented a new feature, which could autoresize +/- some small
> percentage difference, like 1Mb difference or something like that. 

There are two elements to this:
 - the size of actual data on the disk
 - the logical block count, and the resulting LBAs of the labels
   positioned relative to the end of the disk.

The available size of the disk has always been rounded to a whole
number of metaslabs, once the front and back label space is trimmed
off. Combined with the fact that metaslab size is determined
dynamically at vdev creation time based on device size, there can
easily be some amount of unused space at the end, after the last
metaslab and before the end labels. 
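As a back-of-the-envelope sketch of that rounding (the label sizes and the ~200-metaslab target below are my assumptions from the classic vdev on-disk layout, not figures from this thread, and the sizing rule is an approximation of what the code does):

```python
# Sketch of how a vdev's usable space rounds down to a whole number
# of metaslabs. Assumed constants: 4 MiB front reserve (two 256 KiB
# labels plus boot block), 512 KiB back reserve (two 256 KiB labels),
# and a target of roughly 200 metaslabs per top-level vdev.

FRONT_RESERVE = 4 * 1024 * 1024   # assumed front labels + boot block
BACK_RESERVE = 512 * 1024         # assumed trailing labels
TARGET_METASLABS = 200            # assumed sizing target

def metaslab_layout(device_size):
    """Return (metaslab size, metaslab count, leftover slop) in bytes."""
    asize = device_size - FRONT_RESERVE - BACK_RESERVE
    # Approximation: take the largest power of two no bigger than
    # asize / TARGET_METASLABS as the metaslab size.
    ms_size = 1 << (max(asize // TARGET_METASLABS, 1).bit_length() - 1)
    count = asize // ms_size
    slop = asize - count * ms_size  # dead space before the end labels
    return ms_size, count, slop

# Example: a nominal 1 TB (10**12 byte) disk.
ms_size, count, slop = metaslab_layout(10**12)
print(f"metaslab {ms_size >> 20} MiB x {count}, slop {slop >> 20} MiB")
```

The slop printed at the end is the unused tail this paragraph describes; it can easily run to gigabytes on a large disk.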

It is slop in this space that allows for the small differences you
describe above, even for disks laid out in earlier zpool versions.  
A little poking with zdb and a few calculations will show you just how
much a given disk has. 

However, to make the replacement actually work, the zpool code needed
to stop insisting on an absolute >= comparison of block counts and
instead check the more appropriate condition: that the new device has
room for all the metaslabs. There was also testing to ensure that it
handled the end labels moving inwards in absolute position, for a
replacement onto a slightly smaller rather than a same-size or larger
disk. That was the change that happened at the time.
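A minimal sketch of that difference in the acceptance test (the function names and label constants are hypothetical, invented here for illustration, not the actual zpool code):

```python
# Hypothetical sketch of the change in the replacement-size check:
# from a strict >= comparison of raw device sizes to "does the new
# device have room for every existing metaslab plus label space?".

FRONT_RESERVE = 4 * 1024 * 1024   # assumed front labels + boot block
BACK_RESERVE = 512 * 1024         # assumed trailing labels

def old_check(old_size, new_size):
    # Original behaviour: replacement must be at least as large.
    return new_size >= old_size

def new_check(ms_size, ms_count, new_size):
    # Relaxed behaviour: replacement only needs to hold all metaslabs.
    usable = new_size - FRONT_RESERVE - BACK_RESERVE
    return usable >= ms_size * ms_count

# A disk 1 MiB smaller fails the strict check, yet passes the
# metaslab-based one when the original layout left enough slop
# (here: 232 metaslabs of 4 GiB carved from a 10**12 byte disk).
old_size = 10**12
new_size = old_size - (1 << 20)
print(old_check(old_size, new_size))        # prints: False
print(new_check(1 << 32, 232, new_size))    # prints: True
```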

(If you somehow had disks that fit exactly a whole number of
metaslabs, you might still have an issue, I suppose. That's most
likely if you carefully calculated LUN sizes to carve out of some
other storage, in which case you can do the same calculation for
replacements.)

--
Dan.


_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
