Hi Joe,

Is it possible that your c0t1d0s0 disk has an existing EFI label instead
of a VTOC label?

(You can tell by using format --> disk --> partition --> print and
seeing whether cylinder information is displayed. If there is no
cylinder info, the disk has an EFI label.)
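
For reference, the partition printout looks different under each label
type. From memory (exact column headers may vary slightly by release),
a VTOC (SMI) label shows cylinder ranges, roughly like this:

partition> print
Part      Tag    Flag     Cylinders        Size            Blocks
...

An EFI label shows "First Sector" / "Last Sector" columns instead, with
no cylinder info at all.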

If so, relabel the disk with a VTOC (SMI) label, like this:

# format -e
select disk
format> label
[0] SMI Label
[1] EFI Label
Specify Label type[1]: 0
Warning: This disk has an EFI label. Changing to SMI label will
erase all current partitions.
Continue? yes

The entire process would look like this:

1. Destroy the existing pool.
2. Relabel the disk.
3. Recreate the pool.
4. Restart the LU migration.
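
Roughly, the whole sequence would look like this (assuming the pool is
named "pool" and the target disk is c0t1d0s0, as in your steps):

# zpool destroy pool
# format -e c0t1d0s0          (label --> 0 for SMI, as shown above,
                               then re-create slice 0 as in your step 1)
# zpool create pool c0t1d0s0
# lucreate -c c0t0d0s0 -n sol11 -p pool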

Please let me know if this is the case. I've run so many LU
migrations recently that I can't remember whether this is the EFI
scenario or not.

Cindy

Joe Stone wrote:
> Hello,
> 
> I recently installed SunOS 5.11 snv_91 onto an Ultra 60 UPA/PCI with OpenBoot 
> 3.31 and two 300GB SCSI disks. The root file system is UFS on c0t0d0s0. 
> Following the steps in ZFS Admin I have attempted to convert root to ZFS 
> utilizing c0t1d0s0. However, upon "init 6" I am always presented with:
> 
> Bad magic number in disk label
> can't open disk label package
> 
> My Steps:
> 
> 1) format 1 -> Partition -> Modify -> Free hog -> All available to slice 0 -> 
> label -> quit
> 2) zpool create pool c0t1d0s0
> 3) lucreate -c c0t0d0s0 -n sol11 -p pool
> 
> Result:
>   Analyzing system configuration.
>   Comparing source boot environment <c0t0d0s0> file systems with the file 
>   system(s) you specified for the new boot environment. Determining which 
>   file systems should be in the new boot environment.
>   Updating boot environment description database on all BEs.
>   Updating system configuration files.
>   The device </dev/dsk/c0t1d0s0> is not a root device for any boot 
> environment; cannot get BE ID.
>   Creating configuration for boot environment <sol11>.
>   Source boot environment is <c0t0d0s0>.
>   Creating boot environment <sol11>.
>   Creating file systems on boot environment <sol11>.
>   Creating <zfs> file system for </> in zone <global> on <pool/ROOT/sol11>.
>   Populating file systems on boot environment <sol11>.
>   Checking selection integrity.
>   Integrity check OK.
>   Populating contents of mount point </>.
>   Copying.
>   Creating shared file system mount points.
>   Creating compare databases for boot environment <sol11>.
>   Creating compare database for file system </>.
>   Updating compare databases on boot environment <sol11>.
>   Making boot environment <sol11> bootable.
>   Creating boot_archive for /.alt.tmp.b-b2.mnt
>   updating /.alt.tmp.b-b2.mnt/platform/sun4u/boot_archive
>   Population of boot environment <sol11> successful.
>   Creation of boot environment <sol11> successful.
> 
> 4) luactivate sol11
> 5) lustatus
>   
> Result:
>   Boot Environment           Is       Active Active    Can    Copy      
>   Name                       Complete Now    On Reboot Delete Status    
>   -------------------------- -------- ------ --------- ------ ----------
>   c0t0d0s0                   yes      yes    no        no     -         
>   sol11                      yes      no     yes       no     -         
> 
> 6) eeprom | grep boot-device
> 
> Result:
>   boot-device=/[EMAIL PROTECTED],4000/[EMAIL PROTECTED]/[EMAIL PROTECTED],0:a 
> /[EMAIL PROTECTED],4000/[EMAIL PROTECTED]/[EMAIL PROTECTED],0:a
> 
> 7) init 6
> 
> Any advice in this matter would be appreciated. Thank you.
>  
>  
> This message posted from opensolaris.org
> _______________________________________________
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss