FWIW, I saw the messages that Nelson posted at the start of this discussion about 
systems that booted. However, they very likely had relic ZFS labels. I've had 
mysterious "corrupted pools" appear that I was only able to fix by using dd(1) 
to wipe out the old label.

I've come to the conclusion that zfs is saving information in places I don't 
know about and which may or may not get cleared by "zpool labelclear".
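
For reference, ZFS writes four copies of its label on each vdev: two in the 
first 512 KB of the device and two in the last 512 KB, so wiping only the 
front of a disk can leave a phantom pool behind. A rough sketch of what I mean 
by wiping (the slice name is just an example; SIZE is the slice length in 
512-byte sectors, as reported by prtvtoc(1M)):

    # zpool labelclear -f /dev/rdsk/c4t0d0s0
    # dd if=/dev/zero of=/dev/rdsk/c4t0d0s0 bs=1024k count=1
    # dd if=/dev/zero of=/dev/rdsk/c4t0d0s0 bs=512 seek=$((SIZE - 2048)) count=2048

The first dd clears the two labels at the front, the second the two at the tail.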

Once I recover from the ordeal of this past week, I'll go back and conduct some 
experiments, such as creating a slice, creating a pool, then modifying the 
slice and creating a new pool, to see if I can sort out what is happening.
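
The outline I have in mind, with purely illustrative device names (zdb -l just 
dumps whatever ZFS labels it can still read from a device):

    # zpool create -f testpool c4t1d0s0
    # zpool export testpool
    (re-run format and change the size or offset of slice 0)
    # zdb -l /dev/rdsk/c4t1d0s0
    # zpool create -f testpool2 c4t1d0s0
    # zpool import

If zdb -l still prints an old label, or zpool import still offers testpool, 
that would go a long way toward explaining the "corrupted pools" I have seen.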

I am very sorry that some of the developers took offense to my posts, but I am 
very pleased that there is more engagement by the users in testing. "It is 
meet, right and our bounden duty..."

Reg


     On Saturday, April 24, 2021, 05:16:59 PM CDT, Toomas Soome via 
openindiana-discuss <openindiana-discuss@openindiana.org> wrote:

> On 25. Apr 2021, at 00:53, Nelson H. F. Beebe <be...@math.utah.edu> wrote:
> 
> Toomas Soome <tso...@me.com <mailto:tso...@me.com>> responds today:
> 
>>> ...
>>>> On 24. Apr 2021, at 23:56, Nelson H. F. Beebe <be...@math.utah.edu> wrote:
>>>> 
>>>> Thanks for the additional suggestions to get the CentOS-7 based
>>>> OpenIndiana to boot.  Here is what I get:
>>>> 
>>>>      boot: status
>>>>      disk device:
>>>>          disk0:  BIOS driver C (167772160 X 512)
>>>>            disk0s1: Solaris 2            79GB
>>>>              disk0s1a: root              79GB
>>>>              disk0s1i: root            8032KB
>>> 
>>> Why are there two root slices? It should not disturb us, but it is still
>>> weird. Anyhow, can you mail me the full partition table, format -> verify
>>> or partition -> print, for the whole disk?
>>> 
>>> Since this is a VM with no dual-boot, I recommend doing a whole-disk setup
>>> (that is, with GPT prepared automatically). But for now, I wonder how your
>>> current slices are defined :)
>>> 
>>> ...
> 
> I booted the failing VM from the CD-ROM, ran "ssh-keygen -A", edited
> /etc/ssh/sshd_config to change PermitRootLogin from no to yes, then
> ran "/usr/lib/ssh/sshd &".  That let me log in remotely from a terminal
> window in which I can cut and paste, and I could then do
> 
>     # zpool import -R /mnt rpool
> 
>     # format
>     Searching for disks...done
> 
>     AVAILABLE DISK SELECTIONS:
>           0. c4t0d0 <QEMU-HARDDISK-1.5.3 cyl 10440 alt 2 hd 255 sec 63>
>           /pci@0,0/pci1af4,1100@6/disk@0,0
>     Specify disk (enter its number): 0
> 
>     selecting c4t0d0
>     [disk formatted]
>     /dev/dsk/c4t0d0s0 is part of active ZFS pool rpool. Please see zpool(1M).
> 
>     FORMAT MENU:
>         disk       - select a disk
>         type       - select (define) a disk type
>         partition  - select (define) a partition table
>         current    - describe the current disk
>         format     - format and analyze the disk
>         fdisk      - run the fdisk program
>         repair     - repair a defective sector
>         label      - write label to the disk
>         analyze    - surface analysis
>         defect     - defect list management
>         backup     - search for backup labels
>         verify     - read and display labels
>         save       - save new disk/partition definitions
>         inquiry    - show vendor, product and revision
>         volname    - set 8-character volume name
>         !<cmd>     - execute <cmd>, then return
>         quit
>     format> verify
>     Warning: Primary label on disk appears to be different from
>     current label.
> 
>     Warning: Check the current partitioning and 'label' the disk or use the
>         'backup' command.
> 
>     Primary label contents:
> 
>     Volume name = <        >
>     ascii name  = <DEFAULT cyl 10440 alt 2 hd 255 sec 63>
>     pcyl        = 10442
>     ncyl        = 10440
>     acyl        =    2
>     bcyl        =    0
>     nhead       =  255
>     nsect       =   63
>     Part      Tag    Flag     Cylinders         Size            Blocks
>       0       root    wm       1 - 10439       79.97GB    (10439/0/0) 167702535
>       1 unassigned    wm       0                0         (0/0/0)             0
>       2     backup    wu       0 - 10439       79.97GB    (10440/0/0) 167718600
>       3 unassigned    wm       0                0         (0/0/0)             0
>       4 unassigned    wm       0                0         (0/0/0)             0
>       5 unassigned    wm       0                0         (0/0/0)             0
>       6 unassigned    wm       0                0         (0/0/0)             0
>       7 unassigned    wm       0                0         (0/0/0)             0
>       8       boot    wu       0 -     0        7.84MB    (1/0/0)         16065
>       9 unassigned    wm       0                0         (0/0/0)             0

*this* label does make sense. That warning above, though: what is it about, and 
what does partition -> print show?
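
To be clear, I mean the output of format's partition submenu, something like:

    # format
    Specify disk (enter its number): 0
    format> partition
    partition> print

or, non-interactively, prtvtoc /dev/rdsk/c4t0d0s2.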

rgds,
toomas


> 
> I have to leave soon for the weekend, so likely cannot respond before Monday.
> 
> -------------------------------------------------------------------------------
> - Nelson H. F. Beebe                    Tel: +1 801 581 5254                  -
> - University of Utah                    FAX: +1 801 581 4148                  -
> - Department of Mathematics, 110 LCB    Internet e-mail: be...@math.utah.edu  -
> - 155 S 1400 E RM 233                       be...@acm.org  be...@computer.org -
> - Salt Lake City, UT 84112-0090, USA    URL: http://www.math.utah.edu/~beebe/ -
> -------------------------------------------------------------------------------

_______________________________________________
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss
  