Hi Frank,
The reason ZFS lets you create "rpool" with an EFI label is that at this
point it doesn't know this is a root pool; it's just a pool named
"rpool." The best solution is for us to provide a bootable EFI label
(see the workaround sketch at the end of this note).
I see an old bug that says if you already have a pool with the same name
imported, you see the zpool import error message that you provided. I'm
running build 114, not 103, and I can't reproduce it. In that scenario,
I see the correct error message, which is:
# zpool create rpool c1t4d0s0
# zpool export rpool
# zpool import rpool
cannot import 'rpool': more than one matching pool
import by numeric ID instead
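For reference, the numeric-ID import is just this ("zpool import" with no
arguments lists the pools available for import along with their numeric
IDs; <pool-id> and rpool2 below are placeholders):
# zpool import                   <- lists candidate pools and their numeric IDs
# zpool import <pool-id> rpool2  <- imports that specific pool, here under a new name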
If this pool becomes your active root pool, you obviously would not be
able to export it. While the root pool is active, any export attempt
should fail like this:
# zpool export rpool
cannot unmount '/': Device busy
#
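For what it's worth, until then the usual workaround is to give zpool a
slice on an SMI-labeled disk rather than the whole disk. Roughly (the
device name here is just an example):
# format -e                     (label the disk SMI and size slice 0)
# zpool create rpool c1t4d0s0   (example device; using a slice keeps the SMI label, so it boots)
Handing zpool the whole disk (c1t4d0) is what writes the EFI label that
the boot loader can't handle today.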
Cindy
Frank Middleton wrote:
On 06/03/09 09:10 PM, Aurélien Larcher wrote:
PS: for the record I roughly followed the steps of this blog entry =>
http://blogs.sun.com/edp/entry/moving_from_nevada_and_live
Thanks for posting this link! Building pkg with gdb was an
interesting exercise, but it worked, with the additional step of
making the packages and pkgadding them. Curious as to why pkg
isn't available as a pkgadd package. Is there any reason why
someone shouldn't make them available for download? It would
make it much less painful for those of us who are OBP version
deprived - but maybe that's the point :-)
During the install cycle, I ran into this annoyance (doubtless it's
documented somewhere):
# zpool create rpool c2t2d0
creates a good rpool that can be exported and imported. But it
seems to create an EFI label, and, as documented, attempting to boot
results in a bad magic number error. Why does zpool silently create
an apparently useless disk configuration for a root pool? Anyway,
it was a good opportunity to test zfs send/recv of a root pool (it
worked like a charm).
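For the record, the send/recv was nothing fancy; roughly the following,
with the snapshot and target pool names being whatever you choose (and the
boot blocks and bootfs on the target still needing to be set up afterwards):
# zfs snapshot -r rpool@migrate                   <- names here are just examples
# zfs send -R rpool@migrate | zfs recv -Fd newpool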
Using format -e to relabel the disk so that slice 0 and slice 2
both cover the whole disk resulted in this odd problem:
# zpool create -f rpool c2t2d0s0
# zpool list
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
rpool 18.6G 73.5K 18.6G 0% ONLINE -
space 1.36T 294G 1.07T 21% ONLINE -
# zpool export rpool
# zpool import rpool
cannot import 'rpool': no such pool available
# zpool list
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
space 1.36T 294G 1.07T 21% ONLINE -
# zdb -l /dev/dsk/c2t2d0s0
lists 3 perfectly good looking labels.
Format says:
...
selecting c2t2d0
[disk formatted]
/dev/dsk/c2t2d0s0 is part of active ZFS pool rpool. Please see zpool(1M).
/dev/dsk/c2t2d0s2 is part of active ZFS pool rpool. Please see zpool(1M).
However, this disk boots ZFS OpenSolaris just fine, so the inability to
import an exported pool isn't really a problem; I'm just wondering if any
ZFS guru has a comment about it. (This is with snv_103 on SPARC.) FWIW,
this is an old IDE drive connected to a SAS controller via a SATA/PATA
adapter...
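If the overlapping s0/s2 slices are what's confusing the device scan,
perhaps pointing the import at a directory that contains only the s0
device node would sidestep it, e.g.:
# mkdir /tmp/rpooldev                             <- any scratch directory will do
# ln -s /dev/dsk/c2t2d0s0 /tmp/rpooldev/c2t2d0s0
# zpool import -d /tmp/rpooldev rpool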
Cheers -- Frank
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss