Re: [zfs-discuss] Status of zpool remove in raidz and non-redundant stripes

2008-12-05 Thread Mike Brancato
Well, I knew it wasn't available. I meant to ask: what is the status of the feature's development? Not started, I presume. Is there no timeline?

Re: [zfs-discuss] redundancy in non-redundant stripes

2008-12-05 Thread Mike Brancato
In theory, with two 80GB drives, you would always have a copy somewhere else. With a single drive, no. I guess I'm thinking of the optimal situation. With multiple drives, copies are spread across the vdevs. I guess it would work better if we could define that, with copies=2 or more, at least one copy must reside on a different vdev.
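
A sketch of the setup being described, with hypothetical device names; ZFS's ditto-block placement already tries to put the second copy on a different vdev when one exists, but it is best-effort, not the guarantee being asked for here:

    # non-redundant two-disk stripe (device names are illustrative)
    zpool create tank c0t0d0 c0t1d0
    # request two copies of every block in the pool's root dataset
    zfs set copies=2 tank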

[zfs-discuss] redundancy in non-redundant stripes

2008-12-05 Thread Mike Brancato
With ZFS, we can set copies=[1,2,3] to configure how many copies of the data are kept. With copies of 2 or more, in theory, an entire disk can have read errors and the ZFS volume still works. The unfortunate part here is that the redundancy lies in the volume, not in the pool vdev like with raidz.
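
For reference, a minimal example of the property in question (the dataset name tank/data is hypothetical); note that copies only affects data written after the property is set:

    # keep two copies of each block; existing blocks are not rewritten
    zfs set copies=2 tank/data
    zfs get copies tank/data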

[zfs-discuss] Status of zpool remove in raidz and non-redundant stripes

2008-12-05 Thread Mike Brancato
I've seen discussions as far back as 2006 saying development was underway to allow the addition and removal of disks in a raidz vdev to grow/shrink the group. Meaning, if a 4x100GB raidz only used 150GB of space, one could do 'zpool remove tank c0t3d0' and data residing on c0t3d0 would be migrated to the remaining disks.
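
As a sketch only: the workflow the post envisions would look something like the following, but 'zpool remove' of a raidz member disk is not supported, so these commands are hypothetical:

    # hypothetical: evacuate c0t3d0's data to the remaining disks,
    # then shrink the raidz group from four drives to three
    zpool remove tank c0t3d0
    zpool status tank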