Jef Pearlman wrote:
Perhaps I'm not asking my question clearly. I've already experimented a fair amount with zfs, including creating and destroying a number of pools with and without redundancy, replacing vdevs, etc. Maybe asking by example will clarify what I'm looking for or where I've missed the boat. The key is that I want a grow-as-you-go heterogeneous set of disks in my pool:

The short answer:
        zpool add -- add a top-level vdev as a dynamic stripe column
                + available space is increased

        zpool attach -- attach a device to an existing vdev, creating or
                extending a mirror
                + only works when the new device is the same size or larger
                  than the existing vdev
                + available space is unchanged
                + redundancy (RAS) is increased

        zpool detach -- detach a device from a mirrored vdev
                + available space increases if the detached device was smaller
                  than the remaining device(s)
                + redundancy (RAS) is decreased

        zpool replace -- functionally equivalent to attach followed by detach

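As a rough sketch of how those commands fit together, assuming a pool named tank and made-up Solaris-style device names (c0t1d0 and friends are placeholders, not real devices):

        zpool create tank c0t1d0 c0t2d0    # non-redundant pool, dynamic stripe
        zpool add tank c0t3d0              # new top-level vdev; available space grows
        zpool attach tank c0t1d0 c0t4d0    # mirror c0t1d0 with c0t4d0; redundancy grows
        zpool detach tank c0t4d0           # break that mirror; redundancy shrinks
        zpool replace tank c0t1d0 c0t4d0   # attach c0t4d0, resilver, then detach c0t1d0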

Let's say I start with a 40g drive and a 60g drive. I create a non-redundant pool (which will be 100g). At some later point, I run across an unused 30g drive, which I add to the pool. Now my pool is 130g. At some point after that, the 40g drive fails, either by producing read errors or by failing to spin up at all. What happens to my pool? Can I mount and access it at all (for the data not on or striped across the 40g drive)? Can I "zpool replace" the 40g drive with another drive and have it attempt to copy as much data over as it can? Or am I just out of luck? zfs seems like a great way to use old or underutilized drives to expand capacity, but sooner or later one of those drives will fail, and if it takes out the whole pool (which it might reasonably do), then it doesn't work out in the end.

For non-redundant zpools, a device failure *may* cause the zpool to be 
unavailable.
The actual availability depends on the nature of the failure.
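
If you do hit that situation, a reasonable first step is to ask ZFS how it sees the failure; whether a subsequent replace can recover anything depends on whether the old device is still readable at all (device names below are hypothetical):

        zpool status -x                    # show pools with problems and the affected devices
        zpool replace tank c0t1d0 c0t5d0   # attempt the replace; with no redundancy, only
                                           # data still readable from c0t1d0 can be copied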

A more common scenario might be to add a 400 GByte drive, which you can use to
replace the older drives, or keep online for redundancy.
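
For instance, if the old 40g disk were c0t1d0 and the new 400 GByte disk were c0t6d0 (again, hypothetical names), either of these would fit that scenario:

        zpool replace tank c0t1d0 c0t6d0   # migrate the data and retire the old disk
        zpool attach tank c0t1d0 c0t6d0    # or keep both; that vdev becomes a mirror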

The zfs copies feature is a little bit harder to grok.  It is difficult to
predict how the system will be affected if you have copies=2 in your above
scenario, because it depends on how the space is allocated.  For more info,
see my notes at:
        http://blogs.sun.com/relling/entry/zfs_copies_and_data_protection
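
If you want to experiment with copies, note that it is a per-dataset property and only affects data written after it is set; tank/home below is just an example dataset:

        zfs set copies=2 tank/home   # store two copies of each newly written block
        zfs get copies tank/home     # verify the setting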

 -- richard
