Ross wrote:
> The problem is they might publish these numbers, but we really have no way of
> controlling what number manufacturers will choose to use in the future.
>
> If for some reason future 500GB drives all turn out to be slightly smaller
> than the current ones you're going to be stuck. Reserving 1-2% of space in
> exchange for greater flexibility in replacing drives sounds like a good idea
> to me. As others have said, RAID controllers have been doing this for long
> enough that even the very basic models do it now, and I don't understand why
> such simple features like this would be left out of ZFS.
I have added the following text to the best practices guide:

* When a vdev is replaced, the size of the replacement vdev, measured by
  usable sectors, must be the same or greater than the vdev being replaced.
  This can be confusing when whole disks are used, because different models
  of disks may provide a different number of usable sectors. For example, if
  a pool was created with a "500 GByte" drive and you need to replace it with
  another "500 GByte" drive, you may not be able to do so if the drives are
  not of the same make, model, and firmware revision. Consider planning ahead
  and reserving some space by creating a slice which is smaller than the
  whole disk instead of using the whole disk.

(A short sketch of the slice approach follows at the end of this message.)

> Fair enough, for high end enterprise kit where you want to squeeze every byte
> out of the system (and know you'll be buying Sun drives), you might not want
> this, but it would have been trivial to turn this off for kit like that.
> It's certainly a lot easier to expand a pool than shrink it!

Actually, enterprise customers do not ever want to squeeze every byte; they
would rather have enough margin to avoid such issues entirely. This is what I
was referring to earlier in this thread wrt planning.
 -- richard
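To illustrate the "reserve some space via a slice" advice above, here is a
minimal sketch. The device names (c1t0d0 and so on) are hypothetical, the
amount of space reserved is only illustrative, and the slices are assumed to
have already been laid out with format(1M):

  # Compare the usable size of the outgoing and incoming disks before a
  # replace; the replacement must provide the same number of sectors or more.
  prtvtoc /dev/rdsk/c1t0d0s2
  prtvtoc /dev/rdsk/c2t0d0s2

  # Instead of handing ZFS the whole disk, label each disk and create a
  # slice (s0 here) a percent or two smaller than the raw capacity, then
  # build the pool on the slices:
  zpool create tank mirror c1t0d0s0 c1t1d0s0

  # Later, a slightly smaller "500 GByte" drive can still stand in for a
  # failed one, because its s0 slice can be made the same size:
  zpool replace tank c1t0d0s0 c2t0d0s0

The trade-off is exactly the one discussed above: you give up a small amount
of capacity up front in exchange for not being locked into one particular
make, model, and firmware revision at replacement time.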