Wait, the root disk's UUID is different??

On Mon, Sep 24, 2018, 15:39 Yuval Turgeman <[email protected]> wrote:

> Bootid is there, so that's not the issue. Can you run `imgbase --debug
> check`?
>
> On Mon, Sep 24, 2018, 15:22 KRUECKEL OLIVER <[email protected]>
> wrote:
>
>>
>> ------------------------------
>> *From:* Yuval Turgeman <[email protected]>
>> *Sent:* Monday, 24 September 2018 11:29:31
>> *To:* Sandro Bonazzola
>> *Cc:* KRUECKEL OLIVER; Ryan Barry; Chen Shao; Ying Cui; users
>> *Subject:* Re: [ovirt-users] Re: upgrade 4.2.6 to 4.2.6.1: node status
>> degraded
>>
>> Can you share the output from `cat /proc/cmdline` and perhaps the
>> grub.conf? Imgbased adds a bootid, and perhaps it's missing for some reason.
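A quick way to act on this suggestion: a minimal sketch, assuming imgbased's marker shows up as a `bootid` token on the kernel command line (the exact parameter name is an assumption to confirm against a real node's `cat /proc/cmdline`):

```shell
# Hedged helper: report whether a kernel command line carries a bootid
# token. The "bootid" substring is an assumption about what imgbased adds.
has_bootid() {
  case "$1" in
    *bootid*) echo yes ;;
    *)        echo no ;;
  esac
}

# Example against a sample (hypothetical) command line:
has_bootid "BOOT_IMAGE=/vmlinuz root=/dev/mapper/onn img.bootid=abc123"
# On a real node: has_bootid "$(cat /proc/cmdline)"
```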
>>
>> On Mon, Sep 24, 2018, 11:59 Sandro Bonazzola <[email protected]> wrote:
>>
>>> Adding some people who may help understanding what happened and work on
>>> a solution for this.
>>>
>>> Il giorno lun 24 set 2018 alle ore 10:30 <[email protected]>
>>> ha scritto:
>>>
>>>> I have seen this problem for some time (after about the 3rd or 4th
>>>> update I always run into it) and have always recovered with a fresh
>>>> installation. Now I've looked at it more closely (maybe this information
>>>> will help someone who knows the internals).
>>>>
>>>> Installation runs without a problem, reboot, system runs as expected,
>>>> repeated reboot => node status: DEGRADED
>>>>
>>>> What I found is: /dev/sda1 and /dev/sda2 are missing, so it cannot
>>>> mount /boot and /boot/efi!
>>>>
>>>> In dmesg all three partitions are listed, and parted shows them as well.
>>>> After partprobe, /dev/sda1 and /dev/sda2 are available under /dev/;
>>>> `mount /boot` and `mount /boot/efi` do not report an error, but the
>>>> partitions are not actually mounted (df -h does not show them, and
>>>> `umount /boot` / `umount /boot/efi` says so too).
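When `mount` appears to succeed but nothing shows up, it helps to check the kernel's mount table directly instead of trusting the exit status. A minimal sketch (the helper name is mine; on a live node `findmnt /boot` from util-linux does the same check):

```shell
# Hedged helper: confirm a mountpoint is really present in a mount table.
# $1 = mountpoint, $2 = mount table file (defaults to /proc/mounts).
is_mounted() {
  awk -v mp="$1" '$2 == mp { found = 1 } END { exit !found }' "${2:-/proc/mounts}"
}

# Usage on a node:
#   mount /boot; is_mounted /boot && echo mounted || echo "NOT mounted"
```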
>>>>
>>>> I have the same problem with
>>>> ovirt-node-ng-image-update-4.2.7-0.1.rc1.el7.noarch.rpm
>>>>
>>>> If I undo the installation (imgbase base
>>>> --remove=ovirt-node-ng-image-update-4.2 ..... and yum remove
>>>> ovirt-node-ng-image-update-4.2 .....) and repeat the installation, I can
>>>> reproduce the behavior (install, reboot, everything works with the new
>>>> version, reboot, node status: DEGRADED).
>>>>
>>>> I see this behavior on four test servers.
>>>>
>>>>
>>>> Here are df -h and ll /boot after the 1st reboot, plus the output of
>>>> `imgbase layout` and `imgbase w`:
>>>>
>>>> [root@ovirt-n1 ~]# df -h
>>>> Filesystem                                                          Size  Used Avail Use% Mounted on
>>>> /dev/mapper/onn_ovirt--n1-ovirt--node--ng--4.2.6.1--0.20180913.0+1  183G  3,3G  170G   2% /
>>>> devtmpfs                                                             95G     0   95G   0% /dev
>>>> tmpfs                                                                95G   16K   95G   1% /dev/shm
>>>> tmpfs                                                                95G   42M   95G   1% /run
>>>> tmpfs                                                                95G     0   95G   0% /sys/fs/cgroup
>>>> /dev/mapper/onn_ovirt--n1-var                                        15G  187M   14G   2% /var
>>>> /dev/sda2                                                           976M  417M  492M  46% /boot
>>>> /dev/mapper/onn_ovirt--n1-tmp                                       976M  3,4M  906M   1% /tmp
>>>> /dev/mapper/onn_ovirt--n1-home                                      976M  2,6M  907M   1% /home
>>>> /dev/mapper/onn_ovirt--n1-var_log                                   7,8G  414M  7,0G   6% /var/log
>>>> /dev/mapper/onn_ovirt--n1-var_log_audit                             2,0G   39M  1,8G   3% /var/log/audit
>>>> /dev/mapper/onn_ovirt--n1-var_crash                                 9,8G   37M  9,2G   1% /var/crash
>>>> /dev/sda1                                                           200M  9,8M  191M   5% /boot/efi
>>>> gluster01.test.visa-ad.at:/st1                                      805G   71G  734G   9% /rhev/data-center/mnt/glusterSD/gluster01.test.visa-ad.at:_st1
>>>> glustermount:iso                                                     50G   20G   30G  40% /rhev/data-center/mnt/glusterSD/glustermount:iso
>>>> glustermount:export                                                 100G  4,8G   96G   5% /rhev/data-center/mnt/glusterSD/glustermount:export
>>>> tmpfs                                                                19G     0   19G   0% /run/user/0
>>>> [root@ovirt-n1 ~]# ll /boot
>>>> total 187016
>>>> -rw-r--r--. 1 root root   140971 May  8 10:37 config-3.10.0-693.21.1.el7.x86_64
>>>> -rw-r--r--. 1 root root   147859 Sep 24 09:04 config-3.10.0-862.11.6.el7.x86_64
>>>> drwx------. 3 root root    16384 Jan  1  1970 efi
>>>> -rw-r--r--. 1 root root   192572 Nov  5  2016 elf-memtest86+-5.01
>>>> drwxr-xr-x. 2 root root     4096 May  4 18:34 extlinux
>>>> drwxr-xr-x. 2 root root     4096 May  4 18:16 grub
>>>> drwx------. 5 root root     4096 May  8 08:45 grub2
>>>> -rw-------. 1 root root 59917312 May  8 10:39 initramfs-3.10.0-693.21.1.el7.x86_64.img
>>>> -rw-------. 1 root root 21026491 Jul 11 12:10 initramfs-3.10.0-693.21.1.el7.x86_64kdump.img
>>>> -rw-------. 1 root root 26672143 May  4 18:24 initramfs-3.10.0-693.el7.x86_64.img
>>>> -rw-------. 1 root root 62740408 Sep 24 09:05 initramfs-3.10.0-862.11.6.el7.x86_64.img
>>>> -rw-r--r--. 1 root root   611296 May  4 18:23 initrd-plymouth.img
>>>> drwx------. 2 root root    16384 May  8 10:32 lost+found
>>>> -rw-r--r--. 1 root root   190896 Nov  5  2016 memtest86+-5.01
>>>> drwxr-xr-x. 2 root root     4096 May  8 10:39 ovirt-node-ng-4.2.3-0.20180504.0+1
>>>> drwxr-xr-x. 2 root root     4096 Sep  4 16:31 ovirt-node-ng-4.2.6-0.20180903.0+1
>>>> drwxr-xr-x. 2 root root     4096 Sep 24 09:05 ovirt-node-ng-4.2.6.1-0.20180913.0+1
>>>> -rw-r--r--. 1 root root   293361 May  8 10:37 symvers-3.10.0-693.21.1.el7.x86_64.gz
>>>> -rw-r--r--. 1 root root   305158 Sep 24 09:04 symvers-3.10.0-862.11.6.el7.x86_64.gz
>>>> -rw-------. 1 root root  3237433 May  8 10:37 System.map-3.10.0-693.21.1.el7.x86_64
>>>> -rw-------. 1 root root  3414344 Sep 24 09:04 System.map-3.10.0-862.11.6.el7.x86_64
>>>> -rw-r--r--. 1 root root   346490 Aug  3  2017 tboot.gz
>>>> -rw-r--r--. 1 root root    13145 Aug  3  2017 tboot-syms
>>>> -rwxr-xr-x. 1 root root  5917504 May  8 10:37 vmlinuz-3.10.0-693.21.1.el7.x86_64
>>>> -rwxr-xr-x. 1 root root  6242208 Sep 24 09:04 vmlinuz-3.10.0-862.11.6.el7.x86_64
>>>> [root@ovirt-n1 ~]# imgbase layout
>>>> ovirt-node-ng-4.2.6-0.20180903.0
>>>>  +- ovirt-node-ng-4.2.6-0.20180903.0+1
>>>> ovirt-node-ng-4.2.6.1-0.20180913.0
>>>>  +- ovirt-node-ng-4.2.6.1-0.20180913.0+1
>>>> [root@ovirt-n1 ~]# imgbase w
>>>> You are on ovirt-node-ng-4.2.6.1-0.20180913.0+1
>>>>
>>>> And here are df -h and ll /boot after the 2nd reboot, plus the output
>>>> of `imgbase layout` and `imgbase w`:
>>>> [root@ovirt-n1 ~]# df -h
>>>> Filesystem                                                        Size  Used Avail Use% Mounted on
>>>> /dev/mapper/onn_ovirt--n1-ovirt--node--ng--4.2.6--0.20180903.0+1  183G  3,9G  170G   3% /
>>>> devtmpfs                                                           95G     0   95G   0% /dev
>>>> tmpfs                                                              95G   16K   95G   1% /dev/shm
>>>> tmpfs                                                              95G   18M   95G   1% /run
>>>> tmpfs                                                              95G     0   95G   0% /sys/fs/cgroup
>>>> /dev/mapper/onn_ovirt--n1-var                                      15G  227M   14G   2% /var
>>>> /dev/mapper/3600605b002de9bc022421ae9422f57ba2                    976M  417M  493M  46% /boot
>>>> /dev/mapper/onn_ovirt--n1-tmp                                     976M  4,1M  905M   1% /tmp
>>>> /dev/mapper/onn_ovirt--n1-home                                    976M  2,6M  907M   1% /home
>>>> /dev/mapper/3600605b002de9bc022421ae9422f57ba1                    200M  9,8M  191M   5% /boot/efi
>>>> /dev/mapper/onn_ovirt--n1-var_log                                 7,8G  230M  7,2G   4% /var/log
>>>> /dev/mapper/onn_ovirt--n1-var_crash                               9,8G   37M  9,2G   1% /var/crash
>>>> /dev/mapper/onn_ovirt--n1-var_log_audit                           2,0G   39M  1,8G   3% /var/log/audit
>>>> gluster01.test.visa-ad.at:/st1                                    805G   71G  734G   9% /rhev/data-center/mnt/glusterSD/gluster01.test.visa-ad.at:_st1
>>>> glustermount:iso                                                   50G   20G   30G  40% /rhev/data-center/mnt/glusterSD/glustermount:iso
>>>> glustermount:export                                               100G  4,8G   96G   5% /rhev/data-center/mnt/glusterSD/glustermount:export
>>>> tmpfs                                                              19G     0   19G   0% /run/user/0
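One detail stands out in this second listing: /boot and /boot/efi are now backed by `/dev/mapper/3600605b0...` devices, i.e. device-mapper names built from the disk's WWID, which is what multipath creates when it claims a disk; that would also explain why /dev/sda1 and /dev/sda2 disappear. This interpretation is an assumption to verify on the node (e.g. with `multipath -ll`). A hedged sketch to spot the pattern in df-style output:

```shell
# Hedged helper: flag mountpoints whose backing device looks like a
# multipath map (a /dev/mapper name that is a long hex WWID) rather than
# a raw partition such as /dev/sda2. Reads "<device> <mountpoint>" lines
# on stdin. The length threshold is a heuristic, not a multipath API.
flag_multipath_mounts() {
  awk '$1 ~ /^\/dev\/mapper\/3[0-9a-f]+$/ && length($1) >= 40 {
         print $2 " is on multipath map " $1
       }'
}

# Sample taken from the thread (second reboot):
printf '%s\n' \
  '/dev/mapper/3600605b002de9bc022421ae9422f57ba2 /boot' \
  '/dev/mapper/onn_ovirt--n1-var /var' |
  flag_multipath_mounts
# prints: /boot is on multipath map /dev/mapper/3600605b002de9bc022421ae9422f57ba2

# On a live node:
#   df --output=source,target | tail -n +2 | flag_multipath_mounts
#   multipath -ll   # confirm which disks multipath has grabbed
```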
>>>> [root@ovirt-n1 ~]# ll /boot
>>>> total 114984
>>>> drwxr-xr-x. 3 root root     4096 Sep 13 11:50 boom
>>>> -rw-r--r--. 1 root root   147859 Aug 15 00:02 config-3.10.0-862.11.6.el7.x86_64
>>>> drwxr-xr-x. 3 root root     4096 Sep 13 11:44 efi
>>>> -rw-r--r--. 1 root root   192572 Nov  5  2016 elf-memtest86+-5.01
>>>> drwxr-xr-x. 2 root root     4096 Sep 13 12:05 extlinux
>>>> drwxr-xr-x. 2 root root     4096 Sep 13 11:47 grub
>>>> drwx------. 5 root root     4096 Sep 24 09:20 grub2
>>>> -rw-r--r--. 1 root root 62743245 Sep 13 12:14 initramfs-3.10.0-862.11.6.el7.x86_64.img
>>>> -rw-------. 1 root root 17617243 Sep 24 09:21 initramfs-3.10.0-862.11.6.el7.x86_64kdump.img
>>>> -rw-------. 1 root root 26464659 Sep 13 11:55 initramfs-3.10.0-862.el7.x86_64.img
>>>> drwxr-xr-x. 3 root root     4096 Sep 13 11:50 loader
>>>> -rw-r--r--. 1 root root   190896 Nov  5  2016 memtest86+-5.01
>>>> -rw-r--r--. 1 root root   305158 Aug 15 00:05 symvers-3.10.0-862.11.6.el7.x86_64.gz
>>>> -rw-------. 1 root root  3414344 Aug 15 00:02 System.map-3.10.0-862.11.6.el7.x86_64
>>>> -rw-r--r--. 1 root root   357715 Apr 11 08:30 tboot.gz
>>>> -rw-r--r--. 1 root root    13502 Apr 11 08:30 tboot-syms
>>>> -rwxr-xr-x. 1 root root  6242208 Aug 15 00:02 vmlinuz-3.10.0-862.11.6.el7.x86_64
>>>> [root@ovirt-n1 ~]# imgbase layout
>>>> ovirt-node-ng-4.2.6-0.20180903.0
>>>>  +- ovirt-node-ng-4.2.6-0.20180903.0+1
>>>> ovirt-node-ng-4.2.6.1-0.20180913.0
>>>>  +- ovirt-node-ng-4.2.6.1-0.20180913.0+1
>>>> [root@ovirt-n1 ~]# imgbase w
>>>> You are on ovirt-node-ng-4.2.6.1-0.20180913.0+1
>>>> [root@ovirt-n1 ~]#
>>>>
>>>>
>>>> o.
>>>> _______________________________________________
>>>> Users mailing list -- [email protected]
>>>> To unsubscribe send an email to [email protected]
>>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>>> oVirt Code of Conduct:
>>>> https://www.ovirt.org/community/about/community-guidelines/
>>>> List Archives:
>>>> https://lists.ovirt.org/archives/list/[email protected]/message/T53Q2VDZSGFZBEVG3XQTKKT4FBST7HLU/
>>>>
>>>
>>>
>>> --
>>>
>>> SANDRO BONAZZOLA
>>>
>>> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>>>
>>> Red Hat EMEA <https://www.redhat.com/>
>>>
>>> [email protected]
>>>
>>
_______________________________________________
Users mailing list -- [email protected]
To unsubscribe send an email to [email protected]
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/[email protected]/message/DFIDPYBMCJCT4BQZ266CCDHMGQEJP4H6/
