I've installed OpenSolaris 2009.06 on a machine with five identical 1 TB WD 
Green drives to build a ZFS NAS.  The intended layout is one drive dedicated 
to the OS and the remaining four drives in a raidz1 configuration.  The install 
itself works fine, but creating the raidz1 pool and rebooting causes the 
machine to report "Cannot find active partition" on the next boot.  Below is 
command-line output from the live CD during the install and from the installed 
system afterwards.  Note that I've tried this a few times, so you'll see tank 
faulted in the post-install output from previous attempts.
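
For reference, the end state I'm trying to reach is the existing rpool on the 
dedicated OS disk plus a single raidz1 data pool across the other four drives, 
roughly like this (the disk names and tank/share are just placeholders, not the 
real device names):

root@nas:~# zpool create tank raidz1 <disk1> <disk2> <disk3> <disk4>   # four data disks in one raidz1 vdev
root@nas:~# zfs create tank/share                                      # placeholder dataset for the shared data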


>> Live CD Install


AVAILABLE DISK SELECTIONS:
      0. c7d0 <WDC WD10-  WD-WMAV5012875-0001-931.51GB>
         /p...@0,0/pci-...@1f,2/i...@0/c...@0,0
      1. c7d1 <WDC WD10-  WD-WMAV5011699-0001-931.51GB>
         /p...@0,0/pci-...@1f,2/i...@0/c...@1,0
      2. c8d0 <WDC WD10-  WD-WMAV5011404-0001-931.51GB>
         /p...@0,0/pci-...@1f,2/i...@1/c...@0,0
      3. c8d1 <DEFAULT cyl 60797 alt 2 hd 255 sec 126>
         /p...@0,0/pci-...@1f,2/i...@1/c...@1,0
      4. c9d0 <DEFAULT cyl 60797 alt 2 hd 255 sec 126>
         /p...@0,0/pci-...@1f,5/i...@0/c...@0,0


** I install on c8d1 because there was a previous install on this machine and 
it is currently the bootable drive.




>> Post Install

root@nas:~# format
Searching for disks...done


AVAILABLE DISK SELECTIONS:
      0. c3d0 <WDC WD10-  WD-WMAV5011404-0001-931.51GB>
         /p...@0,0/pci-...@1f,2/i...@1/c...@0,0
      1. c3d1 <DEFAULT cyl 60797 alt 2 hd 255 sec 126>
         /p...@0,0/pci-...@1f,2/i...@1/c...@1,0
      2. c4d0 <DEFAULT cyl 60797 alt 2 hd 255 sec 126>
         /p...@0,0/pci-...@1f,5/i...@0/c...@0,0
      3. c6d0 <WDC WD10-  WD-WMAV5012875-0001-931.51GB>
         /p...@0,0/pci-...@1f,2/i...@0/c...@0,0
      4. c6d1 <WDC WD10-  WD-WMAV5011699-0001-931.51GB>
         /p...@0,0/pci-...@1f,2/i...@0/c...@1,0
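
** The controller numbering has clearly changed between the live CD and the 
installed system (the same WD serial numbers now show up under different cXdY 
names).  If a cleaner serial-to-device mapping would help, I can also gather 
one with something like:

root@nas:~# iostat -En           # per-device report including vendor, model and serial number
root@nas:~# format < /dev/null   # non-interactive listing of the same disk names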

root@nas:~# zpool status
 pool: rpool
 state: ONLINE
 scrub: none requested
config:

       NAME        STATE     READ WRITE CKSUM
       rpool       ONLINE       0     0     0
         c4d0s0    ONLINE       0     0     0

errors: No known data errors

 pool: tank
 state: UNAVAIL
 scrub: none requested
config:

       NAME        STATE     READ WRITE CKSUM
       tank        UNAVAIL      0     0     0  insufficient replicas
         raidz1    UNAVAIL      0     0     0  corrupted data
           c3d0    ONLINE       0     0     0
           c3d1    ONLINE       0     0     0
           c6d0    ONLINE       0     0     0
           c6d1    ONLINE       0     0     0

root@nas:~# zpool destroy tank

root@nas:~# zpool status
 pool: rpool
 state: ONLINE
 scrub: none requested
config:

       NAME        STATE     READ WRITE CKSUM
       rpool       ONLINE       0     0     0
         c4d0s0    ONLINE       0     0     0


** I run these commands to show the status of each drive from ZFS's perspective.

root@nas:~# zpool create test c3d0
invalid vdev specification
use '-f' to override the following errors:
/dev/dsk/c3d0s0 is part of exported or potentially active ZFS pool
tank. Please see zpool(1M).

root@nas:~# zpool create test c3d1
invalid vdev specification
use '-f' to override the following errors:
/dev/dsk/c3d1s0 is part of exported or potentially active ZFS pool
rpool. Please see zpool(1M).

root@nas:~# zpool create test c6d0
invalid vdev specification
use '-f' to override the following errors:
/dev/dsk/c6d0s0 is part of exported or potentially active ZFS pool
tank. Please see zpool(1M).


root@nas:~# zpool create test c6d1
invalid vdev specification
use '-f' to override the following errors:
/dev/dsk/c6d1s0 is part of exported or potentially active ZFS pool
tank. Please see zpool(1M).


root@nas:~# zpool create test c4d0
invalid vdev specification
use '-f' to override the following errors:
/dev/dsk/c4d0s0 is part of active ZFS pool rpool. Please see zpool(1M).
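
** If it would help, I can also dump the on-disk ZFS labels to show which pool 
each disk actually claims membership in; I'd use something along these lines 
(per zdb(1M), output omitted here):

root@nas:~# zdb -l /dev/dsk/c3d1s0   # labels on the slice that claims membership in rpool
root@nas:~# zdb -l /dev/dsk/c4d0s0   # labels on the slice the running rpool reports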




If I run `zpool create -f tank raidz1 c3d0 c3d1 c6d0 c6d1`, the OS no longer 
boots and the machine reports "Cannot find active partition".  If I leave c3d1 
out, i.e. `zpool create -f tank raidz1 c3d0 c6d0 c6d1`, and reboot, everything 
is fine.  This makes no sense to me, since zpool status shows c4d0 as the OS 
drive.  Am I missing something?  I've read through the documentation but can't 
find the piece that sheds light on what I'm overlooking.
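
In case the partition layout matters, I can post the fdisk and VTOC tables for 
the disks involved as well; I'd collect them with something like (command forms 
per fdisk(1M) and prtvtoc(1M), output not captured yet):

root@nas:~# fdisk -W - /dev/rdsk/c3d1p0   # dump the fdisk partition table, including the active flag
root@nas:~# prtvtoc /dev/rdsk/c3d1s2      # dump the Solaris VTOC for the same disk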


Thanks in advance.