I hope this thread catches someone's attention. I've reviewed the root pool 
recovery guide as posted. It presupposes a level of network support for backup 
and restore that many OpenSolaris users may not have.

For an administrator working in the context of a data center or a laboratory, 
where there are multiple systems and some kind of network-attached storage, it 
works fine. However, my understanding is that OpenSolaris is attempting to 
broaden its user base by adopting capabilities from the many flavors of Linux 
and (dare I say it?) even Windows (the CIFS integration, for instance).

The desire to make inroads into the alternative workstation OS segment implies 
that Sun and OpenSolaris anticipate more individual desktop and laptop users, 
rather than Solaris's previous sole focus on the server segment. But the desktop 
and laptop crowd often do not have an NFS or other storage server on their 
network. Many are simply home users who want an alternative to Microsoft, or who 
hope to further their education in commercial UNIX by running it at home. While 
they likely won't have network storage available, most such users do have 
resources such as additional hard drives, USB drives, eSATA drives, or USB 
solid-state devices.

For their use I threw together a root pool recovery procedure that uses an 
additional disk attached to the machine requiring backup/recovery. In developing 
this procedure the additional disk was attached to a free IDE/SATA port on the 
motherboard, but I believe it will work equally well with an attached USB disk 
or solid-state memory device.
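
If you are unsure which cXdYsZ device name a newly attached disk has been given, 
the format utility will list every disk the system can see (press Ctrl-C to exit 
without selecting one), and rmformat will list removable and USB devices:

        # format
        # rmformat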

PROCEDURE FOLLOWS:
__________________________________________________________

This instruction assumes that the system BIOS is capable of selecting a 
specific boot device and that the system is x86/x64 rather than SPARC.
It also assumes that there are at least three disk devices available in the 
system: 
1) The disk holding the rootpool from which to take a backup. In this 
instruction this disk is c0d1s0.
2) An alternate disk which will become the new rootpool. In this instruction 
this disk is c0d0s0. Choosing this method rather than mirroring the original 
rootpool means that the new rootpool disk may be smaller than the old one, as 
long as the data fits.
3) A 'storage' zfs pool that is neither the old rootpool nor the new one, but 
can serve as a storage cache for the system backup in place of a shared network 
drive, NFS or otherwise.
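
A quick way to confirm which disks already belong to which pools, and that the 
data in the old rootpool will fit on the (possibly smaller) new disk, is 
something like the following; zpool status shows the disks backing each imported 
pool, and the USED column reported by zfs list is roughly the amount of data the 
new rootpool disk must be able to hold:

        # zpool status
        # zfs list rootpool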

These tasks were all accomplished from the first CD of a Solaris 10 
installation set, because every attempt to send the recursive snapshot of the 
rootpool while the system was in operation resulted in the system hanging.

Your mileage may vary. However, I believe these instructions to be a safe 
backup method for average users who work primarily with a single system, and 
who may not have extensive network storage or other system support.

These instructions were developed on Solaris 10 U6, October 2008, for an 
x86/x64 system using an AMD64 processor.
_____________________________________________________________________________________
1. While booted interactively on your system, as root, perform the following:

        -Create a zfs filesystem in your 'storage' pool to hold your snapshots 
(if you don't already have one).
        EXAMPLE:
        # zfs create storage/snaps
        
        -Create a recursive zfs snapshot of your rootpool using the command 
"zfs snapshot -r <rootpoolname>@<today's date:DDMMYYYY>".
        EXAMPLE:
        # zfs snapshot -r rootpool@16012009
        
        -Shutdown the system
        EXAMPLE:
        # init 0

2. Insert or connect the new rootpool disk.
3. Insert the Solaris bootable CD or DVD.
4. boot cdrom
5. Press 3 when prompted for the interactive install.
6. Press F2 when prompted.
7. Press 'Enter' when prompted.
8. When the interactive shell has started, place the mouse cursor in the window 
as indicated and press 'Enter'.
9. When prompted, place the mouse cursor in the window per the on-screen 
instructions, press '0', then press 'Enter'.
10. When the interactive install console window appears, minimize it.
11. Right-click on the desktop and select 'Programs', then the sub-menu 
'Terminal...'.

All further instructions should be completed in the terminal.
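
One optional check that is not part of the captured session below: once the 
rootpool has been imported (zpool import -f rootpool), you can confirm that the 
recursive snapshot taken in step 1 is present before sending it:

        # zfs list -t snapshot -r rootpool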
_____________________________________________________________________________________

# zpool import
  pool: storage
    id: 2698595696121940384
 state: ONLINE
status: The pool was last accessed by another system.
action: The pool can be imported using its name or numeric identifier and
        the '-f' flag.
   see: http://www.sun.com/msg/ZFS-8000-EY
config:

        storage     ONLINE
          mirror    ONLINE
            c2d0    ONLINE
            c3d0    ONLINE

  pool: rootpool
    id: 8060131098876360047
 state: ONLINE
status: The pool was last accessed by another system.
action: The pool can be imported using its name or numeric identifier and
        the '-f' flag.
   see: http://www.sun.com/msg/ZFS-8000-EY
config:

        rootpool    ONLINE
          c0d1s0    ONLINE

# zpool import -f rootpool
cannot mount '/rootpool': failed to create mountpoint
# zpool import -f storage
cannot mount '/storage': failed to create mountpoint
cannot mount '/storage/snaps': failed to create mountpoint
# zfs set mountpoint=/mnt storage/snaps
# zfs mount storage/snaps
# ls /mnt
# ls -la /mnt
total 516
drwxr-xr-x   2 root     root           4 Jan 16  2009 .
drwxr-xr-x  19 root     root         512 Oct 27 10:04 ..
# zfs send -v rootpool@16012009 > /mnt/rootpool.16012009
# zfs send -Rv rootpool/ROOT/s10x_u6wos_07b@16012009 > /mnt/s10x_u6wos_07b.16012009
sending from @ to rootpool/ROOT/s10x_u6wos_07b@16012009
# zpool export rootpool
# zpool create -f -o failmode=continue -R /a -m legacy -o cachefile=/etc/zfs/zpool.cache rootpool c0d0s0
# ls /mnt
rootpool.16012009        s10x_u6wos_07b.16012009
# cat /mnt/rootpool.16012009 | zfs receive -Fd rootpool
# cat /mnt/s10x_u6wos_07b.16012009 | zfs receive -Fd rootpool
# zfs list
NAME                                     USED  AVAIL  REFER  MOUNTPOINT
rootpool                                4.92G  31.5G  37.5K  legacy
rootpool@16012009                           0      -  37.5K  -
rootpool/ROOT                           4.92G  31.5G    18K  legacy
rootpool/ROOT/s10x_u6wos_07b            4.92G  31.5G  4.92G  /a
rootpool/ROOT/s10x_u6wos_07b@16012009   61.5K      -  4.92G  -
storage                                 84.0G   832G    19K  /storage
storage/snaps                           5.09G   832G  5.09G  /mnt
# zpool set bootfs=rootpool/ROOT/s10x_u6wos_07b rootpool
# zfs create -V 2G rootpool/dump
# zfs create -V 2G -b 4k rootpool/swap
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0d0s0
# zpool export rootpool
# zfs unmount storage/snaps
# zfs set mountpoint=/storage/snaps storage/snaps
# zpool export storage
# init 6
_____________________________________________________________________
END PROCEDURE
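
After the reboot, a few sanity checks I would suggest (not captured in the 
session above) are to confirm that the new pool is healthy and that swap and 
dump came back as expected:

        # zpool status rootpool
        # zfs list -r rootpool
        # swap -l
        # dumpadm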

I hope someone finds this useful.

V/R

Gordon Johnson