So I had an E450 running Solaris 8 with a VxVM-encapsulated root disk.  I 
upgraded it to Solaris 10 with a ZFS root using this method (rough commands 
sketched below the list):

- Unencapsulate the root disk
- Remove VxVM components from the second disk
- Live Upgrade from 8 to 10 on the now-unused second disk
- Boot to the new Solaris 10 install
- Create a ZFS pool on the now-unused first disk
- Use Live Upgrade to migrate root filesystems to the ZFS pool
- Attach the now-unused second disk to the ZFS pool as a mirror
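
For the record, the commands for that dance were roughly as follows.  Disk 
and boot environment names (c0t0d0s0, c0t1d0s0, sol10, sol10-zfs) are just 
placeholders, so adjust to taste:

    # Solaris 8 side: install the Solaris 10 Live Upgrade packages from the
    # new media first, then copy root to the freed-up second disk and upgrade
    lucreate -c sol8 -n sol10 -m /:/dev/dsk/c0t1d0s0:ufs
    luupgrade -u -n sol10 -s /cdrom/cdrom0
    luactivate sol10
    init 6

    # Solaris 10 side: build a root pool on the now-unused first disk and
    # migrate the boot environment onto it
    zpool create rpool c0t0d0s0              # a slice with an SMI label
    lucreate -n sol10-zfs -p rpool
    luactivate sol10-zfs
    init 6

    # mirror onto the second disk and make that half bootable too
    zpool attach rpool c0t0d0s0 c0t1d0s0
    installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk \
        /dev/rdsk/c0t1d0s0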

Now my E450 is running Solaris 10 5/09 with a ZFS root, and all the same users, 
software, and configuration it had before.  That is pretty slick in itself.  
But the server is dog slow, more than half of its disks are failing, and I'd 
like to clone it onto new(er) hardware.

With ZFS, this should be a lot simpler than it used to be, right?  A new server 
has new hardware, new disks with different names and different sizes.  But that 
doesn't matter anymore.  There's a procedure in the ZFS manual to recover a 
corrupted server by using zfs receive to reinstall a copy of the boot 
environment into a newly created pool on the same server.  But what if I used 
zfs send to save a recursive snapshot of my root pool on the old server, booted 
my new server (same architecture) from the DVD in single-user mode, created a 
ZFS pool on its local disks, and ran zfs receive to install the boot 
environments there?  The filesystems don't care about the underlying disks.  
The pool hides the disk specifics.  There's no vfstab to edit.  
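
Concretely, I'm imagining something along these lines, loosely following that 
recovery procedure.  Pool, host, disk, and boot environment names are all made 
up here, so don't take them literally:

    # on the old E450
    zfs snapshot -r rpool@migrate
    zfs send -R rpool@migrate > /net/somehost/scratch/rpool.migrate
        # or however you get the stream across: NFS, tape, ssh to a helper box

    # on the new box, booted from the Solaris 10 DVD in single-user mode
    zpool create -f -R /a -m legacy -o cachefile=/etc/zfs/zpool.cache \
        rpool c1t0d0s0                       # a slice with an SMI label
    zfs receive -Fdu rpool < /net/somehost/scratch/rpool.migrate
    zpool set bootfs=rpool/ROOT/sol10-zfs rpool   # the BE's actual dataset name
    installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk \
        /dev/rdsk/c1t0d0s0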

Off the top of my head, the only thing I can think of that would need to change 
is the network interface configuration.  And that change is as simple as 
"cd /etc ; mv hostname.hme0 hostname.qfe0" or whatever the new interface is 
called.  Is there anything else I'm not thinking of?