I won't comment on the SVM bits because I haven't used SVM in many years.
For the ZFS bits you just need to "detach" the slice from the zpool, then
"attach" it again after you replace the drive.
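
A rough sketch of that, assuming the "hrlpool" pool and the c0t1d0s4 slice
from your step 8 (adjust the names to match your layout):

    # drop the failing slice out of the mirror
    zpool detach hrlpool c0t1d0s4
    # ... physically swap the drive and re-label it ...
    # re-mirror the new slice against the surviving half; this starts a resilver
    zpool attach hrlpool c0t0d0s4 c0t1d0s4
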
 -- richard

Matt Cohen wrote:
> Hi.  We have a hard drive failing in one of our production servers.
>
> The server has two drives, mirrored.  Each drive is split between UFS (under 
> SVM) and ZFS.
>
> Both drives are set up as follows.  The drives are c0t0d0 and c0t1d0; c0t1d0 
> is the failing drive.
>
> slice 0 - 3.00GB UFS  (root partition)
> slice 1 - 1.00GB swap
> slice 3 - 4.00GB UFS  (var partition)
> slice 4 - 60GB ZFS  (mirrored slice in our zfs pool)
> slice 6 - 54MB metadb
> slice 7 - 54MB metadb
>
> I think I have a plan to replace the hard drive without interrupting either 
> the SVM mirrors on slices 0, 1, and 3 or the ZFS pool, which is mirrored on 
> slice 4.  I am hoping someone can take a quick look and let me know if I 
> missed anything:
>
> 1)  Detach and clear the SVM submirrors on the failing drive
> ===========================================
>     metadetach -f d0 d20
>     metaclear d20
>     metadetach -f d1 d21
>     metaclear d21
>     metadetach -f d3 d23
>     metaclear d23
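>
> To double-check, I'd run something like this before and after this step to 
> see which submirrors sit on the failing drive:
>
>     # d20/d21/d23 should no longer appear once they are cleared
>     metastat -p | grep c0t1d0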
>
> 2)  Remove the metadb replicas from the failing drive:
> ===========================================
>     metadb -f -d c0t1d0s6
>     metadb -f -d c0t1d0s7
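>
> Then a quick check that only the replicas on the good drive remain:
>
>     # only the c0t0d0s6 and c0t0d0s7 replicas should be listed now
>     metadb -i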
>
> 3)  Offline the ZFS mirror slice
> ===========================================
>     zpool offline hrlpool c0t1d0s4
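>
> At this point I'd expect the slice to show as OFFLINE and the pool as 
> DEGRADED but still serving data:
>
>     # same pool name as in step 8
>     zpool status hrlpool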
>
> 4)  At this point it should be safe to remove the drive.  All SVM mirrors are 
> detached, the metadb replicas on the failing drive are deleted, and the ZFS 
> slice is offline.
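>
> (Depending on the hardware I may also need to unconfigure the disk before 
> physically pulling it; the attachment point below is only an example and 
> would need to be confirmed first:)
>
>     # list attachment points, then unconfigure the failing disk
>     cfgadm -al
>     cfgadm -c unconfigure c0::dsk/c0t1d0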
>
> 5)  Insert and partition the new drive so its partitions match those of the 
> working drive.
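>
> I assume the easiest way to do that, provided the replacement disk is the 
> same size as the original, is to copy the VTOC from the surviving disk:
>
>     # copy the partition table from the good disk to the replacement
>     prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t1d0s2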
>
> 6)  Create the SVM submirrors and attach them
> ===========================================
>     metainit d20 1 1 c0t1d0s0
>     metattach d0 d20
>     metainit d21 1 1 c0t1d0s1
>     metattach d1 d21
>     metainit d23 1 1 c0t1d0s3
>     metattach d3 d23
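>
> The attaches kick off resyncs, which I can watch with something like:
>
>     # each submirror should go from "Resyncing" to "Okay"
>     metastat d0 d1 d3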
>
> 7)  Add the metadb replicas back to the new drive
> ===========================================
>     metadb -a -f -c2 c0t1d0s6 c0t1d0s7
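>
> And verify that both drives hold state database replicas again:
>
>     # replicas should now be listed on both c0t0d0 and c0t1d0
>     metadb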
>
> 8)  Add the ZFS slice back to the pool as part of the mirror
> ===========================================
>     zpool replace hrlpool c0t1d0s4
>     zpool online hrlpool c0t1d0s4
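>
> Then wait for the resilver to finish and confirm the pool is healthy:
>
>     # should eventually report "all pools are healthy"
>     zpool status -x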
>
> DONE
>
> The drive should be functioning at this point.
>
> Does this look correct?  Have I missed anything obvious?
>
> I know this isn't totally ZFS related, but I wasn't sure where to put it 
> since it has both SVM and ZFS mirrored slices.
>
> Thanks in advance for any input.

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
