2011/2/3 Basil Kurian <basilkur...@gmail.com>:
> [root@beastie /etc]# zpool create nas da0 da1
> [root@beastie /etc]# zpool list
> NAME   SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
> nas   23.9G  73.5K  23.9G     0%  ONLINE  -
> [root@beastie /etc]# zpool add nas da2
> [root@beastie /etc]# zpool list
> NAME   SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
> nas   35.8G   134K  35.8G     0%  ONLINE  -
>
> *Then I stored one big file on /nas. After that, I tried to remove the
> newly attached disk.*
>
> [root@beastie /etc]# du -sh /nas/huge_file
> 464M    /nas/huge_file
> [root@beastie ~]# zpool remove nas da2
> cannot remove da2: only inactive hot spares or cache devices can be removed
> [root@beastie ~]# zpool offline nas da2
> cannot offline da2: no valid replicas
> [root@beastie ~]# zpool detach nas da2
> cannot detach da2: only applicable to mirror and replacing vdevs
>
> *Though the data stored in the pool is much less than the size of the
> individual disks, I'm unable to remove any of the members from the pool.
> How can I do that without losing data?*

You can't, unless it's a mirror. What you created is essentially a RAID 0
setup.

> *I have one more doubt*
>
> [root@beastie ~]# zpool create nas mirror ad4 ad6 mirror da0 da1
> [root@beastie ~]# zpool status
>   pool: nas
>  state: ONLINE
>  scrub: none requested
> config:
>
>         NAME        STATE     READ WRITE CKSUM
>         nas         ONLINE       0     0     0
>           mirror    ONLINE       0     0     0
>             ad4     ONLINE       0     0     0
>             ad6     ONLINE       0     0     0
>           mirror    ONLINE       0     0     0
>             da0     ONLINE       0     0     0
>             da1     ONLINE       0     0     0
>
> [root@beastie ~]# zpool detach nas da0
> [root@beastie ~]# zpool status
>   pool: nas
>  state: ONLINE
>  scrub: none requested
> config:
>
>         NAME        STATE     READ WRITE CKSUM
>         nas         ONLINE       0     0     0
>           mirror    ONLINE       0     0     0
>             ad4     ONLINE       0     0     0
>             ad6     ONLINE       0     0     0
>           da1       ONLINE       0     0     0
>
> errors: No known data errors
>
> [root@beastie ~]# zpool attach nas da0
> missing <new_device> specification
> [root@beastie ~]# zpool attach nas da0 da1
> invalid vdev specification
> use '-f' to override the following errors:
> /dev/da1 is part of active pool 'nas'
>
> *How can I reattach it to the pool?*

Each drive/partition which has been in a ZFS pool gets its last pool name,
pool GUID, etc. written to the drive; this is then checked when you want to
use the drive again. The warning you get is to make sure you won't overwrite
data on the wrong drive. When you are sure you are trying to add the correct
drive, simply add the '-f' option, as it tells you to, and the drive will be
added to the pool.
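For reference, the syntax is 'zpool attach [-f] <pool> <device already in
the pool> <new device>', that is, the existing pool member comes first and
the disk you want to add comes second. So putting da0 back next to da1
should look roughly like this (untested here):

[root@beastie ~]# zpool attach -f nas da1 da0

ZFS will then resilver da0 from da1, and that vdev becomes a mirror again.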
> *Finally one more doubt too*
>
> [root@beastie ~]# zpool create nas mirror ad4 ad6 mirror da0 da1
>
> *can we do this in two steps. something like*
>
> [root@beastie ~]# zpool create nas1 mirror ad4 ad6
> [root@beastie ~]# zpool create nas2 mirror da0 da1
> [root@beastie ~]# zpool create nas nas1 nas2
> cannot open 'nas1': no such GEOM provider
> must be a full path or shorthand device name

Sure, but you have to use the 'add' command to add the extra mirror then:

root@Urraco:/# mkfile 100m disk1 disk2 disk3 disk4
root@Urraco:/# zpool create testpool mirror /disk1 /disk2
root@Urraco:/# zpool status testpool
  pool: testpool
 state: ONLINE
 scrub: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        testpool      ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            /disk1    ONLINE       0     0     0
            /disk2    ONLINE       0     0     0

errors: No known data errors

root@Urraco:/# zpool add testpool mirror /disk3 /disk4
root@Urraco:/# zpool status testpool
  pool: testpool
 state: ONLINE
 scrub: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        testpool      ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            /disk1    ONLINE       0     0     0
            /disk2    ONLINE       0     0     0
          mirror-1    ONLINE       0     0     0
            /disk3    ONLINE       0     0     0
            /disk4    ONLINE       0     0     0

errors: No known data errors

--
Venlig hilsen / Kind regards
Jeppe Toustrup (aka. Tenzer)