MC wrote:

> The situation: a three 500gb disk raidz array. One disk breaks and you
> replace it with a new one. But the new 500gb disk is slightly smaller
> than the smallest disk in the array.
This is quite a problem for RAID arrays, too. It is why vendors use custom
labels for disks. When you have multiple disk vendors, or the disk vendors
change designs, you can end up with slightly different sized disks. So you
tend to use a least-common-denominator size for your custom label.

> I presume the disk would not be accepted into the array because the zpool
> replace entry on the zpool man page says "The size of new_device must be
> greater than or equal to the minimum size of all the devices in a mirror
> or raidz configuration."[1]

Yes.

> I had expected (hoped) that a raidz array with sufficient free space would
> downsize itself to accommodate the smaller replaced disk. But I've never
> seen that function mentioned anywhere :o)

This is the infamous "shrink vdev" RFE.

> So I figure the only way to build smaller-than-max-disk-size functionality
> into a raidz array is to make a slice on each disk that is slightly smaller
> than the max disk size, and then build the array out of those slices. Am I
> correct here?

Yes. This is the technique vendors use for RAID arrays.

> If so, is there a downside to using slice(s) instead of whole disks? The
> zpool manual says "ZFS can use individual slices or partitions, though the
> recommended mode of operation is to use whole disks." ["Virtual Devices
> (vdevs)", 1]

The recommendation to use whole disks applies to drives with volatile write
caches: ZFS will enable the drive's write cache if it owns the whole disk.

There may be an RFE lurking here, but it might be tricky to implement
correctly while still protecting against future data corruption by non-ZFS
use.
 -- richard

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
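The slice-based workaround discussed above might be sketched like this. All
numbers and device names here are hypothetical: the byte count is one
commonly reported size for a "500 GB" drive, and the 100 MB margin is an
arbitrary choice of headroom, not a vendor-specified value.

```shell
#!/bin/sh
# Sketch: pick a slice size safely below the smallest disk in the pool,
# so a slightly smaller replacement drive can still hold an equal slice.

DISK_BYTES=500107862016   # hypothetical reported size of a "500 GB" drive
MARGIN_MB=100             # hypothetical headroom for vendor-to-vendor variation

# Round down to whole MB, then subtract the margin.
SLICE_MB=$(( DISK_BYTES / 1024 / 1024 - MARGIN_MB ))
echo "slice size: ${SLICE_MB} MB"

# With a slice s0 of that size created on each disk (via format(1M) or
# fmthard(1M)), the pool is then built from the slices, not whole disks:
#   zpool create tank raidz c0t0d0s0 c0t1d0s0 c0t2d0s0
# A later, slightly smaller replacement disk gets an identically sized
# slice before:
#   zpool replace tank c0t1d0s0
```

Note that with slices ZFS will not enable the drive's write cache, per the
whole-disk recommendation quoted above, so there is a trade-off.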