Thanks for your idea. That would be useful if we had the extra space. After 
playing with a test zpool for a bit, I found that this also works, but you have 
to make the pool unavailable for a while. I'm just posting it in case it helps 
someone else.

- destroy the raidz pool
- reuse one of the disks in a new pool
- recover the raidz pool in degraded mode
- copy the files to the new pool
- destroy the original pool again
- attach the final disk to the new pool as a mirror
 
e.g.
 
# mkfile 64m a
# mkfile 64m b
# zpool create temp raidz `pwd`/a `pwd`/b
# echo test >/temp/test
# cat /temp/test
test
# zpool destroy temp
# zpool create -f temp2 `pwd`/b   # -f forces reuse of a disk from the destroyed pool
# zpool import -Df -d `pwd` temp   # -D to import a destroyed pool, -f since a disk is gone
# zpool list
NAME                    SIZE    USED   AVAIL    CAP  HEALTH     ALTROOT
temp                    119M    453K    119M     0%  DEGRADED   -
temp2                  59.5M   83.5K   59.4M     0%  ONLINE     -

# cat /temp/test
test
# mv /temp/* /temp2
# zpool destroy temp
# zpool attach temp2 `pwd`/b `pwd`/a
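After the attach, ZFS resilvers `pwd`/a into the mirror. The session above
doesn't capture the end state, but (assuming the copy to temp2 succeeded) it
can be checked along these lines:

# zpool status temp2   (both files should appear under a mirror vdev once resilvered)
# cat /temp2/test
# zpool scrub temp2    (optional: verify checksums on both sides of the mirror)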


That said, I would recommend Patrick's suggestion, or backing up and restoring, 
if those options are available. This way is riskier: there are windows where 
your data exists only in a destroyed pool and you are depending on a recovery 
to bring it back.
 
 
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss