So this will actually work?

On Sat, Mar 22, 2008 at 5:58 PM, Vahid Moghaddasi <[EMAIL PROTECTED]> wrote:
> Hi All,
>
>  This is a step-by-step procedure to upgrade Solaris 10 6/06 (u2) to
> Solaris 10 8/07 (u4).
>  Before attempting the upgrade you will need to install at least the
> Recommended patch cluster dated March 2008.
>  - The kernel patch level of the u2 system must be 120011-14 (a quick
> pre-flight check is sketched after this list).
>  - The zones will have to be moved to UFS (Live Upgrade does this).
>  - You may need to halt the zones to apply the above patches, so expect
> downtime.
>  - In this document we use 'current' for the current BE and 'stage' for the
> new BE.
>  - We assume that the zones are in the /zones directory on a ZFS file
> system.
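>
>  A minimal pre-flight sketch (zone names and output will differ on your
> system):
>      # uname -v                    # should report Generic_120011-14 or later
>      # showrev -p | grep 120011    # confirm the kernel patch is installed
>      # zoneadm list -cv            # inventory the zones to be upgraded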
>
>  1- Create a UFS file system large enough to hold the critical file systems
> plus the zones; in this example it is on c1t0d0s5.
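>      For example (assuming the slice was already sized with format(1M)):
>      # newfs /dev/rdsk/c1t0d0s5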
>  2- Create the new BE (stage):
>      # lucreate -c current -m /:/dev/dsk/c1t0d0s5:ufs -n stage
>         NOTE: if '-c' is omitted, the default BE name will be the device name.
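>      You can verify the result with lustatus; 'current' should be the
>      active BE and 'stage' should show as complete:
>      # lustatus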
>  3- Upgrade the new BE (stage):
>       # luupgrade -u -n stage -s /net/depot/sol10sparc
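>      Here /net/depot/sol10sparc is assumed to be an NFS path holding the
>      Solaris 10 8/07 install image. A lofi-mounted DVD ISO works as well,
>      for example (the ISO path is hypothetical):
>       # lofiadm -a /export/iso/sol-10-u4-ga-sparc-dvd.iso  # prints the lofi device, typically /dev/lofi/1
>       # mount -F hsfs -o ro /dev/lofi/1 /mnt
>       # luupgrade -u -n stage -s /mnt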
>  4- Activate the new BE to test:
>       # luactivate stage
>       # init 6
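>      After the reboot, confirm that you booted into the upgraded BE:
>       # cat /etc/release   # should now report Solaris 10 8/07
>       # lustatus           # 'stage' should now be the active BE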
>  NOTE:
>  The ZFS file system holding the zonepath might not mount; the
> svc:/system/filesystem/local service then goes into maintenance mode,
> complaining that the zonepath directory is not empty. In that case, move
> the parent of the zonepath aside, e.g. mv /zones /zones.u4, then run zfs
> mount -a and your original /zones file system will be back. There will be
> other directories related to the zone upgrade in /zones (e.g.
> zone1-current); you can ignore those.
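>  A sketch of that recovery (names follow this example):
>      # mv /zones /zones.u4
>      # zfs mount -a
>      # svcadm clear svc:/system/filesystem/local  # clear the maintenance state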
>
>  5- Move the zones to ZFS on the 'stage' BE.
>         # zoneadm -z zone1 halt
>         # zoneadm -z zone1 move /zones/zone1
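>      Afterwards, boot the zone and verify it runs from its ZFS zonepath:
>         # zoneadm -z zone1 boot
>         # zoneadm list -v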
>
>  6- Copy the new BE (stage) back to the original boot device (where the
> current BE was).
>  NOTE: This is an additional step to put the 'stage' BE back on the
> 'current' BE slices. Do it only after the upgrade has been verified, as it
> destroys the 'current' BE.
>  We do this step manually so that Live Upgrade will not attempt to move the
> zones back to the 'current' BE; we want to keep the zones where they are,
> on ZFS.
>         # newfs -m 1 /dev/dsk/c1t0d0s0 # this is root
>         # newfs -m 1 /dev/dsk/c1t0d0s3 # this is var
>         # mkdir /a
>         # mount /dev/dsk/c1t0d0s0 /a       # the new root
>         # mkdir /a/var
>         # mount /dev/dsk/c1t0d0s3 /a/var   # the new var
>         # ufsdump 0f - /dev/rdsk/c1t0d0s5 | (cd /a; ufsrestore xf -) # copy the stage BE
>         # installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk 
> /dev/rdsk/c1t0d0s0
>         # vi /a/etc/vfstab # change the boot and target devices: replace s5
> with s0 for /, and add s3 for /var.
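>         The relevant vfstab entries should end up looking like this (device
>         names taken from this example):
>         /dev/dsk/c1t0d0s0  /dev/rdsk/c1t0d0s0  /     ufs  1  no  -
>         /dev/dsk/c1t0d0s3  /dev/rdsk/c1t0d0s3  /var  ufs  1  no  -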
>         # eeprom boot-device='disk0 /pci@.../disk@0,0:f' # the second entry
> should be the full OBP device path of the boot disk, used in case the disk0
> alias does not boot.
>         # init 6
>
>  You should be done at this point.
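>
>  A final sanity check after the last reboot (a sketch; output depends on
> your configuration):
>      # cat /etc/release   # should report Solaris 10 8/07
>      # zoneadm list -cv   # zones should be running from /zones on ZFS
>      # df -h / /var       # / and /var should be on c1t0d0s0 and s3 again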
>
>



-- 
Asif Iqbal
PGP Key: 0xE62693C5 KeyServer: pgp.mit.edu
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
