I found `engine-setup --otopi-environment=OVESETUP_CONFIG/continueSetupOnHEVM=bool:True` in [1], and the ovirt-engine web interface is now reachable again. But I do have one more question: when I try to change the Custom Chipset/Firmware Type to "Q35 Chipset with BIOS", I get the error: "HostedEngine: There was an attempt to change the Hosted Engine VM values that are locked."
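For reference, the recovery sequence discussed in this thread boils down to the following outline. This is a sketch, not a tested procedure; the `--otopi-environment` override is the one from [1], and prompts/paths vary by oVirt version.

```shell
# 1. On a hosted-engine host: enter global maintenance so the HA agent
#    does not restart or kill the engine VM mid-setup.
hosted-engine --set-maintenance --mode=global

# 2. Inside the engine VM: re-run engine-setup. If the global-maintenance
#    check still fails (e.g. because the engine cannot reach its database),
#    the override from [1] skips that detection:
engine-setup --otopi-environment=OVESETUP_CONFIG/continueSetupOnHEVM=bool:True

# 3. Back on the host: leave global maintenance once the engine is healthy.
hosted-engine --set-maintenance --mode=none
```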
How do I make the removal of the loader/nvram lines permanent?

[1] https://lists.ovirt.org/archives/list/[email protected]/thread/2AC57LTHFKJBU6OYZPYSCMTBF6NE3QO2/

> On Jan 21, 2021, at 10:15, Joseph Gelinas <[email protected]> wrote:
>
> Removing those two lines got the hosted engine VM booting again, so that is a
> great help. Thank you.
>
> Now I just need the web interface of ovirt-engine to work again. I feel like
> I may have run things out of order and forgotten to run `engine-setup` as
> part of the hosted engine update. But when I try to do that now, it bails
> out claiming the cluster isn't in global maintenance, even though it is:
>
> [ INFO  ] Stage: Setup validation
> [ ERROR ] It seems that you are running your engine inside of the
>           hosted-engine VM and are not in "Global Maintenance" mode.
>           In that case you should put the system into the "Global
>           Maintenance" mode before running engine-setup, or the
>           hosted-engine HA agent might kill the machine, which might
>           corrupt your data.
> [ ERROR ] Failed to execute stage 'Setup validation': Hosted Engine setup
>           detected, but Global Maintenance is not set.
>
> engine.log says it can't contact the database, but I certainly see
> Postgres processes running.
>
> /var/log/ovirt-engine/engine.log
>
> 2021-01-21 14:47:31,502Z ERROR [org.ovirt.engine.core.services.HealthStatus]
> (default task-15) [] Failed to run Health Status.
> 2021-01-21 14:47:31,502Z ERROR [org.ovirt.engine.core.services.HealthStatus]
> (default task-14) [] Unable to contact Database!:
> java.lang.InterruptedException
>
>> On Jan 21, 2021, at 03:19, Arik Hadas <[email protected]> wrote:
>>
>> On Thu, Jan 21, 2021 at 8:57 AM Joseph Gelinas <[email protected]> wrote:
>> Hi,
>>
>> I recently updated oVirt from 4.4.1 or 4.4.3 to 4.4.4, also moving the
>> default datacenter from 4.4 to 4.5 and making the default BIOS type
>> Q35+UEFI. Unfortunately this broke quite a few things.
>> Now, however, the hosted engine doesn't boot up anymore, and
>> `hosted-engine --console` only shows the firmware setup screen:
>>
>>   RHEL
>>   RHEL-8.1.0 PC (Q35 + ICH9, 2009)        2.00 GHz
>>   0.0.0                                   16384 MB RAM
>>
>>     Select Language            <Standard English>
>>   > Device Manager
>>   > Boot Manager
>>   > Boot Maintenance Manager
>>     Continue
>>     Reset
>>
>>   ^v=Move Highlight    <Enter>=Select Entry
>>
>> When in this state, `hosted-engine --vm-status` says the VM is up but
>> failing its liveliness check:
>>
>> hosted-engine --vm-status | grep -i engine\ status
>> Engine status : {"vm": "down", "health": "bad", "detail": "unknown", "reason": "vm not running on this host"}
>> Engine status : {"vm": "up", "health": "bad", "detail": "Up", "reason": "failed liveliness check"}
>> Engine status : {"vm": "down", "health": "bad", "detail": "Down", "reason": "bad vm status"}
>>
>> I assume I am running into https://access.redhat.com/solutions/5341561
>> (RHV: Hosted-Engine VM fails to start after changing the cluster to
>> Q35/UEFI), but how to fix that isn't really described. I have tried
>> starting the hosted engine paused (`hosted-engine --vm-start-paused`),
>> editing the config
>> (`virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf edit HostedEngine`)
>> to use pc-i440fx instead and removing a bunch of pcie lines, etc., until
>> it accepts the config, and then resuming the hosted engine
>> (`virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf resume HostedEngine`),
>> but haven't come up with something that is able to start.
>>
>> Anyone know how to resolve this? Am I even chasing the right path?
>>
>> Let's start with the negative - this should have been prevented by [1].
>> Can it be that the custom BIOS type that the hosted engine VM was set with
>> was manually dropped in this environment?
>> >> The positive is that the VM starts. This means that from the chipset >> perspective, the configuration is valid. >> So I wouldn't try to change it to i440fx, but only to switch the firmware to >> BIOS. >> I think that removing the following lines from the domain xml should do it: >> <loader readonly='yes' secure='no' >> type='pflash'>/usr/share/OVMF/OVMF_CODE.secboot.fd</loader> >> <nvram >> template='/usr/share/OVMF/OVMF_VARS.fd'>/var/lib/libvirt/qemu/nvram/81816cd3-5816-4185-b553-b5a636156fbd.fd</nvram> >> Can you give this a try? >> >> [1] https://gerrit.ovirt.org/#/c/ovirt-engine/+/111159/ >> >> >> >> /var/log/libvirt/qemu/HostedEngine.log >> >> 2021-01-20 15:31:56.500+0000: starting up libvirt version: 6.6.0, package: >> 7.1.el8 (CBS <[email protected]>, 2020-12-10-14:05:40, ), qemu version: >> 5.1.0qemu-kvm-5.1.0-14.el8.1, kernel: 4.18.0-240.1.1.el8_3.x86_64, hostname: >> ovirt-3 >> LC_ALL=C \ >> PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin \ >> HOME=/var/lib/libvirt/qemu/domain-25-HostedEngine \ >> XDG_DATA_HOME=/var/lib/libvirt/qemu/domain-25-HostedEngine/.local/share \ >> XDG_CACHE_HOME=/var/lib/libvirt/qemu/domain-25-HostedEngine/.cache \ >> XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain-25-HostedEngine/.config \ >> QEMU_AUDIO_DRV=spice \ >> /usr/libexec/qemu-kvm \ >> -name guest=HostedEngine,debug-threads=on \ >> -S \ >> -object >> secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-25-HostedEngine/master-key.aes >> \ >> -blockdev >> '{"driver":"file","filename":"/usr/share/OVMF/OVMF_CODE.secboot.fd","node-name":"libvirt-pflash0-storage","auto-read-only":true,"discard":"unmap"}' >> \ >> -blockdev >> '{"node-name":"libvirt-pflash0-format","read-only":true,"driver":"raw","file":"libvirt-pflash0-storage"}' >> \ >> -blockdev >> '{"driver":"file","filename":"/var/lib/libvirt/qemu/nvram/81816cd3-5816-4185-b553-b5a636156fbd.fd","node-name":"libvirt-pflash1-storage","auto-read-only":true,"discard":"unmap"}' >> \ >> -blockdev >> 
'{"node-name":"libvirt-pflash1-format","read-only":false,"driver":"raw","file":"libvirt-pflash1-storage"}' >> \ >> -machine >> pc-q35-rhel8.1.0,accel=kvm,usb=off,dump-guest-core=off,pflash0=libvirt-pflash0-format,pflash1=libvirt-pflash1-format >> \ >> -cpu Cascadelake-Server-noTSX,mpx=off \ >> -m size=16777216k,slots=16,maxmem=67108864k \ >> -overcommit mem-lock=off \ >> -smp 4,maxcpus=64,sockets=16,dies=1,cores=4,threads=1 \ >> -object iothread,id=iothread1 \ >> -numa node,nodeid=0,cpus=0-63,mem=16384 \ >> -uuid 81816cd3-5816-4185-b553-b5a636156fbd \ >> -smbios >> type=1,manufacturer=oVirt,product=RHEL,version=8-1.2011.el8,serial=4c4c4544-0051-3710-8032-c8c04f483633,uuid=81816cd3-5816-4185-b553-b5a636156fbd,family=oVirt >> \ >> -no-user-config \ >> -nodefaults \ >> -device sga \ >> -chardev socket,id=charmonitor,fd=47,server,nowait \ >> -mon chardev=charmonitor,id=monitor,mode=control \ >> -rtc base=2021-01-20T15:31:56,driftfix=slew \ >> -global kvm-pit.lost_tick_policy=delay \ >> -no-hpet \ >> -no-reboot \ >> -global ICH9-LPC.disable_s3=1 \ >> -global ICH9-LPC.disable_s4=1 \ >> -boot strict=on \ >> -device >> pcie-root-port,port=0x10,chassis=1,id=pci.1,bus=pcie.0,multifunction=on,addr=0x2 >> \ >> -device pcie-root-port,port=0x11,chassis=2,id=pci.2,bus=pcie.0,addr=0x2.0x1 \ >> -device pcie-root-port,port=0x12,chassis=3,id=pci.3,bus=pcie.0,addr=0x2.0x2 \ >> -device pcie-root-port,port=0x13,chassis=4,id=pci.4,bus=pcie.0,addr=0x2.0x3 \ >> -device pcie-root-port,port=0x14,chassis=5,id=pci.5,bus=pcie.0,addr=0x2.0x4 \ >> -device pcie-root-port,port=0x15,chassis=6,id=pci.6,bus=pcie.0,addr=0x2.0x5 \ >> -device pcie-root-port,port=0x16,chassis=7,id=pci.7,bus=pcie.0,addr=0x2.0x6 \ >> -device pcie-root-port,port=0x17,chassis=8,id=pci.8,bus=pcie.0,addr=0x2.0x7 \ >> -device >> pcie-root-port,port=0x18,chassis=9,id=pci.9,bus=pcie.0,multifunction=on,addr=0x3 >> \ >> -device >> pcie-root-port,port=0x19,chassis=10,id=pci.10,bus=pcie.0,addr=0x3.0x1 \ >> -device >> 
pcie-root-port,port=0x1a,chassis=11,id=pci.11,bus=pcie.0,addr=0x3.0x2 \ >> -device >> pcie-root-port,port=0x1b,chassis=12,id=pci.12,bus=pcie.0,addr=0x3.0x3 \ >> -device >> pcie-root-port,port=0x1c,chassis=13,id=pci.13,bus=pcie.0,addr=0x3.0x4 \ >> -device >> pcie-root-port,port=0x1d,chassis=14,id=pci.14,bus=pcie.0,addr=0x3.0x5 \ >> -device >> pcie-root-port,port=0x1e,chassis=15,id=pci.15,bus=pcie.0,addr=0x3.0x6 \ >> -device >> pcie-root-port,port=0x1f,chassis=16,id=pci.16,bus=pcie.0,addr=0x3.0x7 \ >> -device pcie-root-port,port=0x20,chassis=17,id=pci.17,bus=pcie.0,addr=0x4 \ >> -device pcie-pci-bridge,id=pci.18,bus=pci.1,addr=0x0 \ >> -device >> qemu-xhci,p2=8,p3=8,id=ua-5a52e9e5-0726-4393-b91c-1c76e76c9ac1,bus=pci.3,addr=0x0 >> \ >> -device >> virtio-scsi-pci,iothread=iothread1,id=ua-7127a708-0d2a-42f3-97e4-fc314703f96f,bus=pci.4,addr=0x0 >> \ >> -device >> virtio-serial-pci,id=ua-e654d96c-8a11-42a0-9c83-6dda18d6052e,max_ports=16,bus=pci.5,addr=0x0 >> \ >> -device >> ide-cd,bus=ide.2,id=ua-7653b07c-61d5-4982-95bd-69147c4a2e54,werror=report,rerror=report >> \ >> -blockdev >> '{"driver":"file","filename":"/run/vdsm/storage/634fd4e4-2cc0-42fb-a92f-63223f25a339/105c32f2-c14e-474c-920e-6507e47cc28d/a5047f29-82fe-41a0-b170-3c3592df46be","aio":"threads","node-name":"libvirt-1-storage","cache":{"direct":true,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' >> \ >> -blockdev >> '{"node-name":"libvirt-1-format","read-only":false,"cache":{"direct":true,"no-flush":false},"driver":"raw","file":"libvirt-1-storage"}' >> \ >> -device >> virtio-blk-pci,iothread=iothread1,bus=pci.6,addr=0x0,drive=libvirt-1-format,id=ua-105c32f2-c14e-474c-920e-6507e47cc28d,bootindex=1,write-cache=on,serial=105c32f2-c14e-474c-920e-6507e47cc28d,werror=stop,rerror=stop >> \ >> -netdev >> tap,fds=53:54:55:56,id=hostua-972a1ee9-25eb-4613-aac2-4996a7a28fff,vhost=on,vhostfds=57:58:59:60 >> \ >> -device >> 
virtio-net-pci,mq=on,vectors=10,host_mtu=1500,netdev=hostua-972a1ee9-25eb-4613-aac2-4996a7a28fff,id=ua-972a1ee9-25eb-4613-aac2-4996a7a28fff,mac=00:16:3e:6e:da:39,bus=pci.2,addr=0x0 >> \ >> -chardev socket,id=charserial0,fd=61,server,nowait \ >> -device isa-serial,chardev=charserial0,id=serial0 \ >> -chardev socket,id=charchannel0,fd=62,server,nowait \ >> -device >> virtserialport,bus=ua-e654d96c-8a11-42a0-9c83-6dda18d6052e.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 >> \ >> -chardev spicevmc,id=charchannel1,name=vdagent \ >> -device >> virtserialport,bus=ua-e654d96c-8a11-42a0-9c83-6dda18d6052e.0,nr=2,chardev=charchannel1,id=channel1,name=com.redhat.spice.0 >> \ >> -chardev socket,id=charchannel2,fd=63,server,nowait \ >> -device >> virtserialport,bus=ua-e654d96c-8a11-42a0-9c83-6dda18d6052e.0,nr=3,chardev=charchannel2,id=channel2,name=org.ovirt.hosted-engine-setup.0 >> \ >> -device >> usb-tablet,id=input0,bus=ua-5a52e9e5-0726-4393-b91c-1c76e76c9ac1.0,port=1 \ >> -vnc 10.11.24.20:14,password \ >> -k en-us \ >> -spice >> port=5915,tls-port=5916,addr=10.11.24.20,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on >> \ >> -device >> qxl-vga,id=ua-c4d51e81-5bb4-4211-a00c-3d7ab431fef2,ram_size=67108864,vram_size=33554432,vram64_size_mb=0,vgamem_mb=16,max_outputs=1,bus=pcie.0,addr=0x1 >> \ >> -device >> intel-hda,id=ua-68071572-175c-4af5-95f9-29f7e407e700,bus=pci.18,addr=0x1 \ >> -device >> hda-duplex,id=ua-68071572-175c-4af5-95f9-29f7e407e700-codec0,bus=ua-68071572-175c-4af5-95f9-29f7e407e700.0,cad=0 >> \ >> -device >> virtio-balloon-pci,id=ua-9d18ed17-c563-4f0a-b946-3d9d664a55e1,bus=pci.7,addr=0x0 >> \ >> -object >> rng-random,id=objua-0b27484d-b9b4-4372-b334-adcf8d3fc1eb,filename=/dev/urandom >> \ >> -device >> 
virtio-rng-pci,rng=objua-0b27484d-b9b4-4372-b334-adcf8d3fc1eb,id=ua-0b27484d-b9b4-4372-b334-adcf8d3fc1eb,bus=pci.8,addr=0x0 >> \ >> -device vmcoreinfo \ >> -sandbox >> on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \ >> -msg timestamp=on >> 2021-01-20T15:31:56.621216Z qemu-kvm: -numa >> node,nodeid=0,cpus=0-63,mem=16384: warning: Parameter -numa node,mem is >> deprecated, use -numa node,memdev instead >> >> >> /etc/libvirt/qemu/HostedEngine.xml >> <!-- >> WARNING: THIS IS AN AUTO-GENERATED FILE. CHANGES TO IT ARE LIKELY TO BE >> OVERWRITTEN AND LOST. Changes to this xml configuration should be made using: >> virsh edit HostedEngine >> or other application using the libvirt API. >> --> >> >> <domain type='kvm'> >> <name>HostedEngine</name> >> <uuid>81816cd3-5816-4185-b553-b5a636156fbd</uuid> >> <metadata xmlns:ns0="http://ovirt.org/vm/tune/1.0" >> xmlns:ovirt-vm="http://ovirt.org/vm/1.0"> >> <ns0:qos/> >> <ovirt-vm:vm xmlns:ovirt-vm="http://ovirt.org/vm/1.0"> >> <ovirt-vm:balloonTarget type="int">16777216</ovirt-vm:balloonTarget> >> <ovirt-vm:clusterVersion>4.5</ovirt-vm:clusterVersion> >> <ovirt-vm:destroy_on_reboot type="bool">False</ovirt-vm:destroy_on_reboot> >> <ovirt-vm:launchPaused>false</ovirt-vm:launchPaused> >> <ovirt-vm:memGuaranteedSize type="int">1024</ovirt-vm:memGuaranteedSize> >> <ovirt-vm:minGuaranteedMemoryMb >> type="int">1024</ovirt-vm:minGuaranteedMemoryMb> >> <ovirt-vm:resumeBehavior>auto_resume</ovirt-vm:resumeBehavior> >> <ovirt-vm:startTime type="float">1611156714.4983754</ovirt-vm:startTime> >> <ovirt-vm:device mac_address="00:16:3e:6e:da:39"/> >> <ovirt-vm:device devtype="disk" name="vda"> >> >> <ovirt-vm:domainID>634fd4e4-2cc0-42fb-a92f-63223f25a339</ovirt-vm:domainID> >> >> <ovirt-vm:imageID>105c32f2-c14e-474c-920e-6507e47cc28d</ovirt-vm:imageID> >> >> <ovirt-vm:poolID>00000000-0000-0000-0000-000000000000</ovirt-vm:poolID> >> <ovirt-vm:shared>exclusive</ovirt-vm:shared> >> >> 
<ovirt-vm:volumeID>a5047f29-82fe-41a0-b170-3c3592df46be</ovirt-vm:volumeID> >> </ovirt-vm:device> >> </ovirt-vm:vm> >> </metadata> >> <maxMemory slots='16' unit='KiB'>67108864</maxMemory> >> <memory unit='KiB'>16777216</memory> >> <currentMemory unit='KiB'>16777216</currentMemory> >> <vcpu placement='static' current='4'>64</vcpu> >> <iothreads>1</iothreads> >> <sysinfo type='smbios'> >> <system> >> <entry name='manufacturer'>oVirt</entry> >> <entry name='product'>RHEL</entry> >> <entry name='version'>8-1.2011.el8</entry> >> <entry name='serial'>4c4c4544-0051-3710-8032-c8c04f483633</entry> >> <entry name='uuid'>81816cd3-5816-4185-b553-b5a636156fbd</entry> >> <entry name='family'>oVirt</entry> >> </system> >> </sysinfo> >> <os> >> <type arch='x86_64' machine='pc-q35-rhel8.1.0'>hvm</type> >> <loader readonly='yes' secure='no' >> type='pflash'>/usr/share/OVMF/OVMF_CODE.secboot.fd</loader> >> <nvram >> template='/usr/share/OVMF/OVMF_VARS.fd'>/var/lib/libvirt/qemu/nvram/81816cd3-5816-4185-b553-b5a636156fbd.fd</nvram> >> <boot dev='hd'/> >> <bios useserial='yes'/> >> <smbios mode='sysinfo'/> >> </os> >> <features> >> <acpi/> >> <vmcoreinfo state='on'/> >> </features> >> <cpu mode='custom' match='exact' check='partial'> >> <model fallback='allow'>Cascadelake-Server-noTSX</model> >> <topology sockets='16' dies='1' cores='4' threads='1'/> >> <feature policy='disable' name='mpx'/> >> <numa> >> <cell id='0' cpus='0-63' memory='16777216' unit='KiB'/> >> </numa> >> </cpu> >> <clock offset='variable' adjustment='0' basis='utc'> >> <timer name='rtc' tickpolicy='catchup'/> >> <timer name='pit' tickpolicy='delay'/> >> <timer name='hpet' present='no'/> >> </clock> >> <on_poweroff>destroy</on_poweroff> >> <on_reboot>destroy</on_reboot> >> <on_crash>destroy</on_crash> >> <pm> >> <suspend-to-mem enabled='no'/> >> <suspend-to-disk enabled='no'/> >> </pm> >> <devices> >> <emulator>/usr/libexec/qemu-kvm</emulator> >> <disk type='file' device='cdrom'> >> <driver name='qemu' type='raw' 
error_policy='report'/> >> <source startupPolicy='optional'> >> <seclabel model='dac' relabel='no'/> >> </source> >> <target dev='sdc' bus='sata'/> >> <readonly/> >> <alias name='ua-7653b07c-61d5-4982-95bd-69147c4a2e54'/> >> <address type='drive' controller='0' bus='0' target='0' unit='2'/> >> </disk> >> <disk type='file' device='disk' snapshot='no'> >> <driver name='qemu' type='raw' cache='none' error_policy='stop' >> io='threads' iothread='1'/> >> <source >> file='/run/vdsm/storage/634fd4e4-2cc0-42fb-a92f-63223f25a339/105c32f2-c14e-474c-920e-6507e47cc28d/a5047f29-82fe-41a0-b170-3c3592df46be'> >> <seclabel model='dac' relabel='no'/> >> </source> >> <target dev='vda' bus='virtio'/> >> <serial>105c32f2-c14e-474c-920e-6507e47cc28d</serial> >> <alias name='ua-105c32f2-c14e-474c-920e-6507e47cc28d'/> >> <address type='pci' domain='0x0000' bus='0x06' slot='0x00' >> function='0x0'/> >> </disk> >> <controller type='pci' index='1' model='pcie-root-port'> >> <model name='pcie-root-port'/> >> <target chassis='1' port='0x10'/> >> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' >> function='0x0' multifunction='on'/> >> </controller> >> <controller type='pci' index='2' model='pcie-root-port'> >> <model name='pcie-root-port'/> >> <target chassis='2' port='0x11'/> >> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' >> function='0x1'/> >> </controller> >> <controller type='pci' index='3' model='pcie-root-port'> >> <model name='pcie-root-port'/> >> <target chassis='3' port='0x12'/> >> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' >> function='0x2'/> >> </controller> >> <controller type='pci' index='4' model='pcie-root-port'> >> <model name='pcie-root-port'/> >> <target chassis='4' port='0x13'/> >> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' >> function='0x3'/> >> </controller> >> <controller type='pci' index='5' model='pcie-root-port'> >> <model name='pcie-root-port'/> >> <target chassis='5' port='0x14'/> >> <address type='pci' 
domain='0x0000' bus='0x00' slot='0x02' >> function='0x4'/> >> </controller> >> <controller type='pci' index='6' model='pcie-root-port'> >> <model name='pcie-root-port'/> >> <target chassis='6' port='0x15'/> >> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' >> function='0x5'/> >> </controller> >> <controller type='pci' index='7' model='pcie-root-port'> >> <model name='pcie-root-port'/> >> <target chassis='7' port='0x16'/> >> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' >> function='0x6'/> >> </controller> >> <controller type='pci' index='8' model='pcie-root-port'> >> <model name='pcie-root-port'/> >> <target chassis='8' port='0x17'/> >> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' >> function='0x7'/> >> </controller> >> <controller type='pci' index='9' model='pcie-root-port'> >> <model name='pcie-root-port'/> >> <target chassis='9' port='0x18'/> >> <address type='pci' domain='0x0000' bus='0x00' slot='0x03' >> function='0x0' multifunction='on'/> >> </controller> >> <controller type='pci' index='10' model='pcie-root-port'> >> <model name='pcie-root-port'/> >> <target chassis='10' port='0x19'/> >> <address type='pci' domain='0x0000' bus='0x00' slot='0x03' >> function='0x1'/> >> </controller> >> <controller type='pci' index='11' model='pcie-root-port'> >> <model name='pcie-root-port'/> >> <target chassis='11' port='0x1a'/> >> <address type='pci' domain='0x0000' bus='0x00' slot='0x03' >> function='0x2'/> >> </controller> >> <controller type='pci' index='12' model='pcie-root-port'> >> <model name='pcie-root-port'/> >> <target chassis='12' port='0x1b'/> >> <address type='pci' domain='0x0000' bus='0x00' slot='0x03' >> function='0x3'/> >> </controller> >> <controller type='pci' index='13' model='pcie-root-port'> >> <model name='pcie-root-port'/> >> <target chassis='13' port='0x1c'/> >> <address type='pci' domain='0x0000' bus='0x00' slot='0x03' >> function='0x4'/> >> </controller> >> <controller type='pci' index='14' 
model='pcie-root-port'> >> <model name='pcie-root-port'/> >> <target chassis='14' port='0x1d'/> >> <address type='pci' domain='0x0000' bus='0x00' slot='0x03' >> function='0x5'/> >> </controller> >> <controller type='pci' index='15' model='pcie-root-port'> >> <model name='pcie-root-port'/> >> <target chassis='15' port='0x1e'/> >> <address type='pci' domain='0x0000' bus='0x00' slot='0x03' >> function='0x6'/> >> </controller> >> <controller type='pci' index='16' model='pcie-root-port'> >> <model name='pcie-root-port'/> >> <target chassis='16' port='0x1f'/> >> <address type='pci' domain='0x0000' bus='0x00' slot='0x03' >> function='0x7'/> >> </controller> >> <controller type='pci' index='17' model='pcie-root-port'> >> <model name='pcie-root-port'/> >> <target chassis='17' port='0x20'/> >> <address type='pci' domain='0x0000' bus='0x00' slot='0x04' >> function='0x0'/> >> </controller> >> <controller type='pci' index='18' model='pcie-to-pci-bridge'> >> <model name='pcie-pci-bridge'/> >> <address type='pci' domain='0x0000' bus='0x01' slot='0x00' >> function='0x0'/> >> </controller> >> <controller type='pci' index='0' model='pcie-root'/> >> <controller type='sata' index='0'> >> <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' >> function='0x2'/> >> </controller> >> <controller type='usb' index='0' model='qemu-xhci' ports='8'> >> <alias name='ua-5a52e9e5-0726-4393-b91c-1c76e76c9ac1'/> >> <address type='pci' domain='0x0000' bus='0x03' slot='0x00' >> function='0x0'/> >> </controller> >> <controller type='scsi' index='0' model='virtio-scsi'> >> <driver iothread='1'/> >> <alias name='ua-7127a708-0d2a-42f3-97e4-fc314703f96f'/> >> <address type='pci' domain='0x0000' bus='0x04' slot='0x00' >> function='0x0'/> >> </controller> >> <controller type='virtio-serial' index='0' ports='16'> >> <alias name='ua-e654d96c-8a11-42a0-9c83-6dda18d6052e'/> >> <address type='pci' domain='0x0000' bus='0x05' slot='0x00' >> function='0x0'/> >> </controller> >> <lease> >> 
<lockspace>634fd4e4-2cc0-42fb-a92f-63223f25a339</lockspace> >> <key>a5047f29-82fe-41a0-b170-3c3592df46be</key> >> <target >> path='/rhev/data-center/mnt/glusterSD/ovirt-1:_engine/634fd4e4-2cc0-42fb-a92f-63223f25a339/images/105c32f2-c14e-474c-920e-6507e47cc28d/a5047f29-82fe-41a0-b170-3c3592df46be.lease'/> >> </lease> >> <interface type='bridge'> >> <mac address='00:16:3e:6e:da:39'/> >> <source bridge='ovirtmgmt'/> >> <model type='virtio'/> >> <driver name='vhost' queues='4'/> >> <link state='up'/> >> <mtu size='1500'/> >> <alias name='ua-972a1ee9-25eb-4613-aac2-4996a7a28fff'/> >> <address type='pci' domain='0x0000' bus='0x02' slot='0x00' >> function='0x0'/> >> </interface> >> <serial type='unix'> >> <source mode='bind' >> path='/var/run/ovirt-vmconsole-console/81816cd3-5816-4185-b553-b5a636156fbd.sock'/> >> <target type='isa-serial' port='0'> >> <model name='isa-serial'/> >> </target> >> </serial> >> <console type='unix'> >> <source mode='bind' >> path='/var/run/ovirt-vmconsole-console/81816cd3-5816-4185-b553-b5a636156fbd.sock'/> >> <target type='serial' port='0'/> >> </console> >> <channel type='unix'> >> <source mode='bind' >> path='/var/lib/libvirt/qemu/channels/81816cd3-5816-4185-b553-b5a636156fbd.org.qemu.guest_agent.0'/> >> <target type='virtio' name='org.qemu.guest_agent.0'/> >> <address type='virtio-serial' controller='0' bus='0' port='1'/> >> </channel> >> <channel type='spicevmc'> >> <target type='virtio' name='com.redhat.spice.0'/> >> <address type='virtio-serial' controller='0' bus='0' port='2'/> >> </channel> >> <channel type='unix'> >> <source mode='bind' >> path='/var/lib/libvirt/qemu/channels/81816cd3-5816-4185-b553-b5a636156fbd.org.ovirt.hosted-engine-setup.0'/> >> <target type='virtio' name='org.ovirt.hosted-engine-setup.0'/> >> <address type='virtio-serial' controller='0' bus='0' port='3'/> >> </channel> >> <input type='tablet' bus='usb'> >> <address type='usb' bus='0' port='1'/> >> </input> >> <input type='mouse' bus='ps2'/> >> <input 
type='keyboard' bus='ps2'/> >> <graphics type='vnc' port='-1' autoport='yes' keymap='en-us' >> passwd='*****' passwdValidTo='1970-01-01T00:00:01'> >> <listen type='network' network='vdsm-ovirtmgmt'/> >> </graphics> >> <graphics type='spice' autoport='yes' passwd='*****' >> passwdValidTo='1970-01-01T00:00:01'> >> <listen type='network' network='vdsm-ovirtmgmt'/> >> <channel name='main' mode='secure'/> >> <channel name='display' mode='secure'/> >> <channel name='inputs' mode='secure'/> >> <channel name='cursor' mode='secure'/> >> <channel name='playback' mode='secure'/> >> <channel name='record' mode='secure'/> >> <channel name='smartcard' mode='secure'/> >> <channel name='usbredir' mode='secure'/> >> </graphics> >> <sound model='ich6'> >> <alias name='ua-68071572-175c-4af5-95f9-29f7e407e700'/> >> <address type='pci' domain='0x0000' bus='0x12' slot='0x01' >> function='0x0'/> >> </sound> >> <video> >> <model type='qxl' ram='65536' vram='32768' vgamem='16384' heads='1' >> primary='yes'/> >> <alias name='ua-c4d51e81-5bb4-4211-a00c-3d7ab431fef2'/> >> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' >> function='0x0'/> >> </video> >> <memballoon model='virtio'> >> <stats period='5'/> >> <alias name='ua-9d18ed17-c563-4f0a-b946-3d9d664a55e1'/> >> <address type='pci' domain='0x0000' bus='0x07' slot='0x00' >> function='0x0'/> >> </memballoon> >> <rng model='virtio'> >> <backend model='random'>/dev/urandom</backend> >> <alias name='ua-0b27484d-b9b4-4372-b334-adcf8d3fc1eb'/> >> <address type='pci' domain='0x0000' bus='0x08' slot='0x00' >> function='0x0'/> >> </rng> >> </devices> >> </domain> >> >> >> _______________________________________________ >> Users mailing list -- [email protected] >> To unsubscribe send an email to [email protected] >> Privacy Statement: https://www.ovirt.org/privacy-policy.html >> oVirt Code of Conduct: >> https://www.ovirt.org/community/about/community-guidelines/ >> List Archives: >> https://lists.ovirt.org/archives/list/[email 
protected]/message/GR5L6KGGRAG5HAKVJWOVDKFQXF7GPPF7/
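For anyone who wants to script the fix suggested in the thread (dropping the `<loader>` and `<nvram>` elements from the domain XML so the VM falls back to BIOS firmware), below is a minimal sketch using only the Python standard library. The helper name and the sample XML are illustrative; on a real host you would feed it the output of `virsh dumpxml HostedEngine`. Note this only edits the libvirt definition it is given - it does not answer the open question of making the change permanent.

```python
import xml.etree.ElementTree as ET

# Trimmed-down sample domain XML, mirroring the <os> section quoted above.
SAMPLE = """<domain type='kvm'>
  <os>
    <type arch='x86_64' machine='pc-q35-rhel8.1.0'>hvm</type>
    <loader readonly='yes' secure='no' type='pflash'>/usr/share/OVMF/OVMF_CODE.secboot.fd</loader>
    <nvram template='/usr/share/OVMF/OVMF_VARS.fd'>/var/lib/libvirt/qemu/nvram/demo.fd</nvram>
    <boot dev='hd'/>
  </os>
</domain>"""

def strip_uefi_firmware(domain_xml: str) -> str:
    """Remove <loader> and <nvram> from the <os> section of a domain XML."""
    root = ET.fromstring(domain_xml)
    os_elem = root.find('os')
    if os_elem is not None:
        for tag in ('loader', 'nvram'):
            elem = os_elem.find(tag)
            if elem is not None:
                os_elem.remove(elem)
    return ET.tostring(root, encoding='unicode')

if __name__ == '__main__':
    print(strip_uefi_firmware(SAMPLE))
```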

