Another issue that might be happening in this case is that the ZFS
device names changed starting in build 125. This change impacts
luactivate, and most likely beadm activate, if you have a mirrored
root pool, because the root pool mirror device becomes mirror-0 (as in
Bernd's root pool) and neither luactivate nor beadm activate
recognizes this device name. The workaround is:
1. Detach the secondary mirrored root pool device(s)
2. Run the activate operation
3. Re-attach the secondary root pool device(s)
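The three steps above can be sketched as a small script. The pool, BE,
and device names below are taken from Bernd's output and are only
placeholders; adjust them for your system. By default each command is
just echoed (a dry run); set RUN=pfexec to execute for real.

```shell
#!/bin/sh
# Dry-run sketch of the mirror-detach workaround.
# Names below are assumptions from Bernd's pool; change them to match yours.
RUN="${RUN:-echo}"   # RUN=pfexec to actually run the commands

POOL=rpool
BE=OpenSolaris06.2009-6
PRIMARY=c9t0d0s0     # device that stays in the pool
SPARE=c8d0s0         # secondary mirror device to detach

# 1. Detach the secondary mirrored root pool device
$RUN zpool detach "$POOL" "$SPARE"
# 2. Run the activate operation
$RUN beadm activate "$BE"
# 3. Re-attach the secondary device; ZFS resilvers the mirror
$RUN zpool attach "$POOL" "$PRIMARY" "$SPARE"
```

If more than one secondary device is attached (as in Bernd's case),
repeat the detach and attach steps for each device.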
I have attempted to describe this problem here:
http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide#Live_Upgrade_and_beadm_Problem_.28Starting_in_Nevada.2C_build_125.29
I haven't been able to reproduce this scenario on my OpenSolaris laptop
because I only have one disk. If someone else can confirm that CR
6894189 impacts beadm activate, then I will update this section with a
better OpenSolaris error description and workaround.
Thanks,
Cindy
On 01/04/10 13:53, Dave Miner wrote:
On 12/25/09 12:30 PM, Bernd Schemmer wrote:
Hi
another issue with the upgrade to snv_130:
The installation worked but the new BE could not be activated:
...
Reading Existing Index ... Done
Indexing Packages ... Done
pkg: unable to activate OpenSolaris06.2009-6
A manual "beadm activate" for the new BE also did not work, but beadm
activate for one of the older BEs worked without problems.
I could boot into the new BE by manually selecting it in the GRUB menu
without problems (except the ones documented in the message above).
Running beadm activate manually with the environment variable
BE_PRINT_ERR set (after booting into the new BE), I got a much better
error message:
xtrn...@t61p:~$ BE_PRINT_ERR=true pfexec beadm activate OpenSolaris06.2009-6
be_do_installgrub: installgrub failed for device c8d0s0.
Unable to activate OpenSolaris06.2009-6.
Unknown external error.
And that's a correct error message:
xtrn...@t61p:~$ zpool status
  pool: rpool
 state: DEGRADED
status: One or more devices could not be opened. Sufficient replicas
        exist for the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-2Q
 scrub: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        rpool         DEGRADED     0     0     0
          mirror-0    DEGRADED     0     0     0
            c9t0d0s0  ONLINE       0     0     0
            c8d0s0    OFFLINE      0     0     0
            c1t0d0s0  UNAVAIL      0     0     0  cannot open

errors: No known data errors
xtrn...@t61p:~$
(c8d0s0 and c1t0d0s0 are my backup disks, which I connect only once a
week to create a copy of the rpool.)
After detaching c8d0s0 and c1t0d0s0 from the pool the beadm activate
worked fine:
r...@t61p:~# beadm list
BE                        Active Mountpoint Space   Policy Created
--                        ------ ---------- -----   ------ -------
OpenSolaris06-2009-b121   -      -          3.37G   static 2009-09-02 21:55
OpenSolaris06.2009-1      -      -          4.88M   static 2009-04-22 21:46
OpenSolaris06.2009-2      -      -          93.0K   static 2009-04-30 23:50
OpenSolaris06.2009-3      -      -          26.03M  static 2009-05-23 14:15
OpenSolaris06.2009-4      -      /a         95.83M  static 2009-06-05 23:52
OpenSolaris06.2009-5      -      -          2.82G   static 2009-06-27 16:42
OpenSolaris06.2009-6      NR     /          28.15G  static 2009-12-25 13:42
OpenSolaris06.2009-6-b118 -      -          2.93G   static 2009-07-18 13:08
opensolaris               -      -          168.32M static 2009-04-22 21:27
*Conclusion*
IMHO:
1. the error messages from beadm should be more detailed in the default
configuration
Absolutely. They'll be getting better in the coming months.
2. In this case I think a warning about the missing disk is enough -- I
don't think this should be an error.
I disagree. Your use case is an exceptional one, and having a mirrored
installation that can't boot from all sides of the mirror could be
fairly damaging.
Dave
_______________________________________________
indiana-discuss mailing list
[email protected]
http://mail.opensolaris.org/mailman/listinfo/indiana-discuss