[ovirt-users] Migration problems

2024-04-11 Thread markeczzz
I am trying to migrate a VM from one host to another.
I have already migrated 10 VMs from that host to the other, but 2 of them are
having problems.
In the dashboard event log I get this:
Migration failed due to an Error: Migration canceled (VM: Virtual-NS, Source: 
node2.ovirt.cluster.com, Destination: node3.ovirt.cluster.com).

Engine log:
2024-04-11 11:11:21,490+02 INFO  
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] 
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-74) [] VM 
'c8e6aafe-1463-4db6-9d3b-76b234f9487d' is migrating to VDS 
'd90dced4-6715-41b6-953c-119c4133f9db'(node3.ovirt.cluster.com) ignoring it in 
the refresh until migration is done
2024-04-11 11:11:35,446+02 WARN  
[org.ovirt.engine.core.utils.virtiowin.VirtioWinReader] 
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-75) [] 
Directory '/usr/share/virtio-win' doesn't exist.
2024-04-11 11:11:36,521+02 INFO  
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] 
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-22) [] VM 
'c8e6aafe-1463-4db6-9d3b-76b234f9487d' is migrating to VDS 
'd90dced4-6715-41b6-953c-119c4133f9db'(node3.ovirt.cluster.com) ignoring it in 
the refresh until migration is done
2024-04-11 11:11:51,563+02 INFO  
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] 
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-44) [] VM 
'c8e6aafe-1463-4db6-9d3b-76b234f9487d' is migrating to VDS 
'd90dced4-6715-41b6-953c-119c4133f9db'(node3.ovirt.cluster.com) ignoring it in 
the refresh until migration is done
2024-04-11 11:12:06,592+02 INFO  
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] 
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-45) [] VM 
'c8e6aafe-1463-4db6-9d3b-76b234f9487d' is migrating to VDS 
'd90dced4-6715-41b6-953c-119c4133f9db'(node3.ovirt.cluster.com) ignoring it in 
the refresh until migration is done
2024-04-11 11:12:21,625+02 INFO  
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] 
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-8) [] VM 
'c8e6aafe-1463-4db6-9d3b-76b234f9487d' is migrating to VDS 
'd90dced4-6715-41b6-953c-119c4133f9db'(node3.ovirt.cluster.com) ignoring it in 
the refresh until migration is done
2024-04-11 11:12:36,657+02 INFO  
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] 
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-66) [] VM 
'c8e6aafe-1463-4db6-9d3b-76b234f9487d' is migrating to VDS 
'd90dced4-6715-41b6-953c-119c4133f9db'(node3.ovirt.cluster.com) ignoring it in 
the refresh until migration is done
2024-04-11 11:12:43,536+02 INFO  
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] 
(ForkJoinPool-1-worker-5) [681c3996] VM 'c8e6aafe-1463-4db6-9d3b-76b234f9487d' 
was reported as Down on VDS 
'd90dced4-6715-41b6-953c-119c4133f9db'(node3.ovirt.cluster.com)
2024-04-11 11:12:43,536+02 INFO  
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] 
(ForkJoinPool-1-worker-5) [681c3996] VM 
'c8e6aafe-1463-4db6-9d3b-76b234f9487d'(Virtual-NS) was unexpectedly detected as 
'Down' on VDS 'd90dced4-6715-41b6-953c-119c4133f9db'(node3.ovirt.cluster.com) 
(expected on 'c1f1069d-ed61-4ade-afc2-e6f649039386')
2024-04-11 11:12:43,536+02 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] 
(ForkJoinPool-1-worker-5) [681c3996] START, DestroyVDSCommand(HostName = 
node3.ovirt.cluster.com, 
DestroyVmVDSCommandParameters:{hostId='d90dced4-6715-41b6-953c-119c4133f9db', 
vmId='c8e6aafe-1463-4db6-9d3b-76b234f9487d', secondsToWait='0', 
gracefully='false', reason='', ignoreNoVm='true'}), log id: 381766c1
2024-04-11 11:12:43,836+02 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] 
(ForkJoinPool-1-worker-5) [681c3996] Failed to destroy VM 
'c8e6aafe-1463-4db6-9d3b-76b234f9487d' because VM does not exist, ignoring
2024-04-11 11:12:43,836+02 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] 
(ForkJoinPool-1-worker-5) [681c3996] FINISH, DestroyVDSCommand, return: , log 
id: 381766c1
2024-04-11 11:12:43,836+02 INFO  
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] 
(ForkJoinPool-1-worker-5) [681c3996] VM 
'c8e6aafe-1463-4db6-9d3b-76b234f9487d'(Virtual-NS) was unexpectedly detected as 
'Down' on VDS 'd90dced4-6715-41b6-953c-119c4133f9db'(node3.ovirt.cluster.com) 
(expected on 'c1f1069d-ed61-4ade-afc2-e6f649039386')
2024-04-11 11:12:43,836+02 ERROR 
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] 
(ForkJoinPool-1-worker-5) [681c3996] Migration of VM 'Virtual-NS' to host 
'node3.ovirt.cluster.com' failed: VM destroyed during the startup.
2024-04-11 11:12:43,842+02 INFO  
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] 
(ForkJoinPool-1-worker-1) [681c3996] VM 
'c8e6aafe-1463-4db6-9d3b-76b234f9487d'(Virtual-NS) moved from 'MigratingFrom' 
--> 'Up'
2024-04-11 11:12:43,842+02 INFO  
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] 
(ForkJoinPool-1-worker-1) [681
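
A hedged diagnostic sketch (assuming the standard oVirt log paths, to be run on
both node2 and node3, using the VM id from the log above) for digging the
actual cancel reason out of vdsm and libvirt, since the engine log usually only
records that the migration was canceled:

# grep 'c8e6aafe-1463-4db6-9d3b-76b234f9487d' /var/log/vdsm/vdsm.log | grep -iE 'migrat|error|abort'
# journalctl -u libvirtd --since '2024-04-11 11:11' | grep -iE 'error|abort'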

[ovirt-users] Ovirt Hyperconverged

2024-04-11 Thread eevans--- via Users
CentOS 7, 3 servers.

Gluster deployed without a problem.
Hosted engine deployment fails:

[ ERROR ] fatal: [localhost]: FAILED! => {"reason": "conflicting action 
statements: fail, msg\n\nThe error appears to be in 
'/usr/share/ansible/roles/ovirt.hosted_engine_setup/tasks/pre_checks/validate_mac_address.yml':
 line 14, column 5, but may\nbe elsewhere in the file depending on the exact 
syntax problem.\n\nThe offending line appears to be:\n\n register: 
he_vm_mac_addr\n - name: Fail if MAC address structure is incorrect\n ^ here\n"}

I'm not sure why this error occurs. I tried the same file from a different
server and even tried deploying on a different server.

Something is wrong with this file structure.
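
A hedged pair of checks that might narrow it down (the "conflicting action
statements" message often means the installed Ansible parses the task
differently than the role expects, or the installed file itself is damaged;
paths below assume a standard hosted-engine setup):

# rpm -qf /usr/share/ansible/roles/ovirt.hosted_engine_setup/tasks/pre_checks/validate_mac_address.yml
# rpm -q ansible
# sed -n '1,20p' /usr/share/ansible/roles/ovirt.hosted_engine_setup/tasks/pre_checks/validate_mac_address.yml

Comparing the package versions against what the oVirt release expects, and
eyeballing the indentation around line 14, should show whether it is a
version mismatch or a corrupted file.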

Any help is appreciated.

Eric Evans


[ovirt-users] Re: HPE Oneview KVM appliance 8.8.0 / 8.7.0

2024-04-11 Thread Angus Clarke
Hi Gianluca



Thank you for the detailed instructions - these were excellent, I wasn't aware 
of the "lsinitrd" command before now - thanks!



My VM still sticks at the same point when booting with the virtio-scsi 
configuration. Meh!



I'm encouraged that the image booted OK in your environment; that points to
something specific to my environment.



I've raised a case with Oracle as we are using OLVM. I don't think they'll take 
an interest, let's see. If I get anywhere I'll report back here for the record.



Thanks again

Angus







 On Wed, 10 Apr 2024 23:59:22 +0200 Gianluca Cecchi 
 wrote ---



On Wed, Apr 10, 2024 at 12:29 PM Angus Clarke  wrote:

Hi Gianluca



The software is free from HPE but requires a login, I've shared a link 
separately.



Thanks for taking an interest



Regards

Angus






Apart from other considerations we are privately sharing, in my env, which is
based on a Cascade Lake CPU on the host with a local storage domain on a
filesystem, the appliance is able to boot and complete the initial
configuration phase using your settings: chipset i440FX w/BIOS, IDE disk type,
OS: RHEL7 x86_64. In my env the graphics protocol is VNC and the video type is
VGA.

The constraint on your tweaks comes from the appliance's operating system: all
the virtio drivers are built as kernel modules and are not included in the
initramfs.

So the system doesn't find the boot disk if you set its interface to virtio or
virtio-scsi.

The disk layout is BIOS-style, with one partition for /boot and the other
filesystems, / included, on LVM.

To modify the qcow2 image you can use one of the tools out there (a
virt-customize sketch follows), or use the manual steps this way:
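
One such tool-based route, as a hedged sketch only (it assumes libguestfs-tools
is installed on the helper box, that the image file is named
hpe-appliance.qcow2, and that it is not attached to any running VM; untested
against this particular appliance):

# virt-customize -a hpe-appliance.qcow2 \
    --write '/etc/dracut.conf.d/virtio.conf:add_drivers+="virtio virtio_blk virtio_scsi"' \
    --run-command 'dracut -f /boot/initramfs-3.10.0-1062.1.2.el7.x86_64.img 3.10.0-1062.1.2.el7.x86_64'

The manual steps below do the same thing but are easier to inspect at each
stage.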



. connect the disk to an existing RHEL 7 / CentOS 7 helper VM that has the
lvm2 package installed

In my case the helper VM has one disk named /dev/sda, and the HPE qcow2 disk,
once added, is seen as /dev/sdb with its partitions as /dev/sdb1, ...

IMPORTANT: change the disk names below to match how the appliance disk appears
in your env, otherwise you risk compromising your existing data!!!
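
A hedged way to double check which device is the appliance disk before
touching anything (output will obviously differ per env):

# lsblk -o NAME,SIZE,TYPE,MOUNTPOINT

The appliance disk should be the one with no mountpoints and a size matching
the qcow2 virtual size.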



IMPORTANT: inside the appliance disk there is a volume group named vg01.
Verify that no vg01 volume group is already defined in your helper VM,
otherwise you will get into trouble.
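
A hedged way to check for a clash, run on the helper VM before attaching the
appliance disk:

# vgs vg01 2>/dev/null || echo "no local vg01 - safe to proceed"

If vgs prints an existing vg01, use a different helper VM (or rename the local
VG) before continuing.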



. connect to the helper VM as root user



. the LVM structure of the added disk (PV/VG/LV) should be automatically 
detected

run the command "vgs" and you should see vg01 volume group listed

run the command "lvs vg01" and you should see some logical volumes listed 





. mount the root filesystem of the appliance disk on a directory in your helper 
VM (on /media directory in my case)

# mount /dev/vg01/lv_root /media/



. mount the /boot filesystem of the appliance disk under /media/boot

# mount /dev/sdb1 /media/boot/



.  mount the /var filesystem of the appliance disk under /media/var

# mount /dev/vg01/lv_var /media/var/
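
Optionally, as a hedged extra step in case dracut inside the chroot complains
about missing /dev, /proc or /sys, bind-mount them from the helper VM before
entering the chroot (and umount them again afterwards):

# mount --bind /dev /media/dev
# mount --bind /proc /media/proc
# mount --bind /sys /media/sys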



. chroot into the appliance disk env

# chroot /media



. create a file with new kernel driver modules you want to include in the new 
initramfs

# vi /etc/dracut.conf.d/virtio.conf

its contents should be the single line below (similar to the already present
platform.conf):
# cat /etc/dracut.conf.d/virtio.conf
add_drivers+="virtio virtio_blk virtio_scsi"



. backup the original initramfs

# cp -p /boot/initramfs-3.10.0-1062.1.2.el7.x86_64.img 
/boot/initramfs-3.10.0-1062.1.2.el7.x86_64.bak



. replace the initramfs

# dracut -fv /boot/initramfs-3.10.0-1062.1.2.el7.x86_64.img 
3.10.0-1062.1.2.el7.x86_64
...
*** Creating image file done ***
*** Creating initramfs image file 
'/boot/initramfs-3.10.0-1062.1.2.el7.x86_64.img' done ***
# 

. verify the new contents include virtio modules

# lsinitrd /boot/initramfs-3.10.0-1062.1.2.el7.x86_64.img | grep virtio
-rw-r--r--   1 root     root         7876 Sep 30  2019 
usr/lib/modules/3.10.0-1062.1.2.el7.x86_64/kernel/drivers/block/virtio_blk.ko.xz
-rw-r--r--   1 root     root        12972 Sep 30  2019 
usr/lib/modules/3.10.0-1062.1.2.el7.x86_64/kernel/drivers/char/virtio_console.ko.xz
-rw-r--r--   1 root     root        14304 Sep 30  2019 
usr/lib/modules/3.10.0-1062.1.2.el7.x86_64/kernel/drivers/net/virtio_net.ko.xz
-rw-r--r--   1 root     root         8188 Sep 30  2019 
usr/lib/modules/3.10.0-1062.1.2.el7.x86_64/kernel/drivers/scsi/virtio_scsi.ko.xz
drwxr-xr-x   2 root     root            0 Apr 10 21:14 
usr/lib/modules/3.10.0-1062.1.2.el7.x86_64/kernel/drivers/virtio
-rw-r--r--   1 root     root         4552 Sep 30  2019 
usr/lib/modules/3.10.0-1062.1.2.el7.x86_64/kernel/drivers/virtio/virtio.ko.xz
-rw-r--r--   1 root     root         9904 Sep 30  2019 
usr/lib/modules/3.10.0-1062.1.2.el7.x86_64/kernel/drivers/virtio/virtio_pci.ko.xz
-rw-r--r--   1 root     root         8332 Sep 30  2019 
usr/lib/modules/3.10.0-1062.1.2.el7.x86_64/kernel/drivers/virtio/virtio_ring.ko.xz



. exit the chroot environment

# exit 

. Now that you have exited the chroot env, umount the appliance disk filesystems
# umount /media/var /media/boot
# umount /m

[ovirt-users] Re: [External] : Re: HPE Oneview KVM appliance 8.8.0 / 8.7.0

2024-04-11 Thread Angus Clarke
Done, thanks Simon 👍

[ovirt-users] Re: [External] : Re: HPE Oneview KVM appliance 8.8.0 / 8.7.0

2024-04-11 Thread Simon Coter via Users
Hi Angus,

We can try to do our best, even if this is an appliance coming from HPE.
It would also help if you could share access to the appliance with me, as well
as on the SR you opened.
And please share the SR number you created.
Thanks

Simon
