Hi Bill,
I can't comment on why your USB device names are changing, but I have
seen BIOS upgrades do similar things to device names.
If you must run a root pool on USB sticks, then I think you would have
to boot from the LiveCD before running the BIOS upgrade. Maybe someone
can comment. On Sun systems, we recommend exporting pools before
running firmware upgrades, for example.
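For a data pool like your pool1, the idea would be something along these
lines (just a sketch; a root pool can't be exported while you're booted
from it, which is why the LiveCD comes into play):

   zpool export pool1    # export before the BIOS/firmware upgrade
   (run the firmware upgrade and reboot)
   zpool import pool1    # ZFS finds the devices by their labels, even if the names changed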
The root cause of your beadm activate failure is that the device naming
conventions changed in build 125, and beadm does not handle a mirrored
root pool correctly after that change.
This is a known problem, described on this list and also in the ZFS
troubleshooting wiki, although I see that the wiki entry refers to Live
Upgrade only; I will fix that. It might be a good idea to review the
wiki before attempting an upgrade.
The beadm workaround is to detach the second disk in the mirrored root
pool, run the beadm operation, then re-attach the second disk.
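Using the device names as they appear in your zpool status below (adjust
them if the disks really have moved to c8t0d0/c11t0d0), and assuming
c1t0d0s0 is the disk you detach, the sequence would look roughly like this:

   zpool detach rpool c1t0d0s0              # drop the second half of the mirror
   beadm activate opensolaris-snv127        # activate against the single-disk root pool
   zpool attach rpool c2t0d0s0 c1t0d0s0     # re-attach the second disk

Check zpool status rpool and let the resilver finish before rebooting.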
Cindy
On 12/04/09 14:57, Bill Hutchison wrote:
Hello,
I had snv_111b running for a while on an HP DL160G5, with two 16GB USB sticks
comprising the mirrored rpool for boot and four 1TB drives comprising another
pool, pool1, for data.
That had been working just fine for a few months. Yesterday I got it into my
head to upgrade the OS to the latest build, which at the time was snv_127. That
worked, and all was well. I also upgraded the DL160G5's BIOS firmware. Everything
was cool and running snv_127 just fine. I upgraded ZFS from version 13 to 19.
See pool status post-upgrade:
r...@arc:/# zpool status
  pool: pool1
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        pool1       ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            c7t1d0  ONLINE       0     0     0
            c7t2d0  ONLINE       0     0     0
            c7t3d0  ONLINE       0     0     0
            c7t4d0  ONLINE       0     0     0

errors: No known data errors

  pool: rpool
 state: ONLINE
 scrub: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            c2t0d0s0  ONLINE       0     0     0
            c1t0d0s0  ONLINE       0     0     0

errors: No known data errors
Today I went to activate the BE for the new snv_127 install that I've been manually
booting into, but "beadm activate..." always fails, as seen here:
r...@arc:~# export BE_PRINT_ERR=true
r...@arc:~# beadm activate opensolaris-snv127
be_do_installgrub: installgrub failed for device c2t0d0s0.
Unable to activate opensolaris-snv127.
Unknown external error.
So I tried the installgrub manually and get this:
r...@arc:~# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c2t0d0s0
cannot open/stat device /dev/rdsk/c2t0d0s2
OK, wtf? The rpool status shows both of my USB sticks alive and well at
c2t0d0s0 and c1t0d0s0...
But when I run "format -e" I see this:
r...@arc:/# format -e
Searching for disks...done
AVAILABLE DISK SELECTIONS:
       0. c7t1d0 <ATA-GB1000EAFJL-HPG1-931.51GB>
          /p...@0,0/pci8086,4...@1/pci103c,3...@0/s...@1,0
       1. c7t2d0 <ATA-GB1000EAFJL-HPG1-931.51GB>
          /p...@0,0/pci8086,4...@1/pci103c,3...@0/s...@2,0
       2. c7t3d0 <ATA-GB1000EAFJL-HPG1-931.51GB>
          /p...@0,0/pci8086,4...@1/pci103c,3...@0/s...@3,0
       3. c7t4d0 <ATA-GB1000EAFJL-HPG1-931.51GB>
          /p...@0,0/pci8086,4...@1/pci103c,3...@0/s...@4,0
       4. c8t0d0 <DEFAULT cyl 1958 alt 2 hd 255 sec 63>
          /p...@0,0/pci103c,3...@1d,7/stor...@8/d...@0,0
       5. c11t0d0 <Kingston-DataTraveler2.0-1.00 cyl 1958 alt 2 hd 255 sec 63>
          /p...@0,0/pci103c,3...@1d,7/stor...@6/d...@0,0
Specify disk (enter its number): 4
selecting c8t0d0
[disk formatted]
/dev/dsk/c8t0d0s0 is part of active ZFS pool rpool. Please see zpool(1M).
It shows my rpool's two USB sticks sitting at c8t0d0 and c11t0d0...!
How is this system even working? What do I need to do to clear this up...?
Thanks for your time,
-Bill