When I move the VM disk to another storage domain it creates a snapshot, but then it is
not able to delete it:

Failed to delete snapshot 'Teste_Disco_Disk1 Auto-generated for Live Storage 
Migration' for VM 'Teste_Disco'. 
Possible failure while deleting Teste_Disco_Disk1 from the source Storage 
Domain GFS1iSCSI_4TB during the move operation. The Storage Domain may be 
manually cleaned-up from possible leftovers 
(User:admin@ovirt@internalkeycloak-authz). 

The VM disk is now on the other storage domain and the VM stays live. It hasn't paused so far.
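
A sketch of how the source domain could be checked for such leftovers (the UUIDs below are placeholders, not values from this setup):

# vdsm-tool dump-volume-chains <source-domain-uuid>
# lvs -o lv_name,lv_size,lv_tags <source-domain-uuid> | grep <image-uuid>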

Thanks 
José 


De: "Jean-Louis Dupond" <jean-lo...@dupond.be> 
Para: "suporte" <supo...@logicworks.pt> 
Cc: "Colin Coe" <colin....@gmail.com>, "users" <users@ovirt.org> 
Itens enviados: Quinta-feira, 5 de Dezembro de 2024 12:44:07 
Assunto: Re: [ovirt-users] Re: VM has been paused due to no Storage space 
error. 



And it stays paused?
Why does the disk have a snapshot?

And the dump-volume-chains should be run against efebb0ca-83a5-40b0-8cf9-1ee1b50674c8,
as the disk resides on that domain.
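That is, something like:

# vdsm-tool dump-volume-chains efebb0ca-83a5-40b0-8cf9-1ee1b50674c8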

Thanks 
Jean-Louis 
On 12/5/24 13:27, supo...@logicworks.pt wrote:



Hi, 

We set up a new iSCSI storage domain with 16TB, plenty of space.
We installed a new VM (Ubuntu 24) with only one disk, 50GB, thin-provisioned.
After some minutes it pauses.
We moved the disk to another storage domain and were able to start the VM. We don't know
for how long.
Running on CentOS 8, Version 4.5.4-1.el8 

We did the same test in another environment:
CentOS 9, Version 4.5.6-1.el9, without any problem.

# vdsm-tool dump-volume-chains 2ae7fdc6-d3e4-4b61-9cec-b7d819a4fb02 

Images volume chains (base volume first) 

image: dcd253fe-890d-4afd-9cb2-d45800f447f7 

- 03c20fe2-1274-4873-a0b4-7d751e666806 
status: OK, voltype: LEAF, format: RAW, legality: LEGAL, type: PREALLOCATED, 
capacity: 134217728, truesize: 134217728 


image: 05f63799-db5e-424b-bb75-822803867b83 

- 1ecda185-9639-49c2-ac0d-3acf47262025 
status: OK, voltype: LEAF, format: COW, legality: LEGAL, type: SPARSE, 
capacity: 268435456000, truesize: 225620000768 


image: a78ca2e3-0ea3-461b-ac1a-b589d082e914 

- 37120c67-919f-4f6d-afa8-0fff300308c5 
status: OK, voltype: INTERNAL, format: COW, legality: LEGAL, type: SPARSE, 
capacity: 21474836480, truesize: 20266876928 

- 5879b376-f8ae-492b-b5c3-b6327abc0d4e 
status: OK, voltype: LEAF, format: COW, legality: LEGAL, type: SPARSE, 
capacity: 21474836480, truesize: 8858370048 


image: 4aaf8e07-077e-423b-a1ce-75c49a0af940 

- 46fda312-d6b1-49ef-adbe-04f307419c36 
status: OK, voltype: LEAF, format: COW, legality: LEGAL, type: SPARSE, 
capacity: 322122547200, truesize: 55431921664 


image: fa96460a-6f43-497c-bf53-46544aebfe13 

- 4ca5a23b-dc31-4102-bae5-e1241d3df8a6 
status: OK, voltype: LEAF, format: COW, legality: LEGAL, type: SPARSE, 
capacity: 42949672960, truesize: 47110422528 


image: f58f989f-24fe-4731-a27f-e5cd12350121 

- 513e54c7-8087-4b56-ab0d-5c675b78288d 
status: OK, voltype: LEAF, format: COW, legality: LEGAL, type: SPARSE, 
capacity: 26843545600, truesize: 14495514624 


image: 054d77dc-9067-4a33-bac4-48e688a3c6c7 

- 7f36baf0-41f5-46b9-b100-be9fabc956be 
status: OK, voltype: LEAF, format: RAW, legality: LEGAL, type: PREALLOCATED, 
capacity: 134217728, truesize: 134217728 


image: 81885b60-494a-4909-b479-9b40ccaeac45 

- aa2dcff2-e5a9-45b5-a436-19f758ff83da 
status: OK, voltype: LEAF, format: COW, legality: LEGAL, type: PREALLOCATED, 
capacity: 322122547200, truesize: 322122547200 


image: 0254116f-edca-45b1-bfdb-5c9933b7fea6 

- bbbe77e1-eab2-4a23-ae84-6d02a4f3824f 
status: OK, voltype: LEAF, format: COW, legality: LEGAL, type: SPARSE, 
capacity: 268435456000, truesize: 162537668608 


image: 62ed0783-33e2-4871-ad73-8ed66c7d076a 

- c313d2db-7b4c-4d90-912a-175c0c724a6d 
status: OK, voltype: LEAF, format: COW, legality: LEGAL, type: SPARSE, 
capacity: 268435456000, truesize: 171261820928 


image: dc2ca30c-5c2d-4d34-9ed0-acfde4d87b2a 

- db355623-8588-4c8c-97d1-43bffcea73b7 
status: OK, voltype: LEAF, format: COW, legality: LEGAL, type: PREALLOCATED, 
capacity: 85899345920, truesize: 85899345920 



# virsh --readonly dumpxml Teste_Disco 
setlocale: No such file or directory 
<domain type='kvm' id='99' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
<name>Teste_Disco</name> 
<uuid>818a5c26-6581-4569-852c-23e94ebbd5e7</uuid> 
<metadata xmlns:ns1="http://ovirt.org/vm/tune/1.0" xmlns:ovirt-vm="http://ovirt.org/vm/1.0">
<ns1:qos/>
<ovirt-vm:vm xmlns:ovirt-vm="http://ovirt.org/vm/1.0">
<ovirt-vm:balloonTarget type="int">1048576</ovirt-vm:balloonTarget> 
<ovirt-vm:ballooningEnabled>true</ovirt-vm:ballooningEnabled> 
<ovirt-vm:clusterVersion>4.7</ovirt-vm:clusterVersion> 
<ovirt-vm:cpuPolicy>none</ovirt-vm:cpuPolicy> 
<ovirt-vm:destroy_on_reboot type="bool">False</ovirt-vm:destroy_on_reboot> 
<ovirt-vm:jobs>{}</ovirt-vm:jobs> 
<ovirt-vm:launchPaused>false</ovirt-vm:launchPaused> 
<ovirt-vm:memGuaranteedSize type="int">1024</ovirt-vm:memGuaranteedSize> 
<ovirt-vm:minGuaranteedMemoryMb 
type="int">1024</ovirt-vm:minGuaranteedMemoryMb> 
<ovirt-vm:resumeBehavior>auto_resume</ovirt-vm:resumeBehavior> 
<ovirt-vm:startTime type="float">1733399150.5427186</ovirt-vm:startTime> 
<ovirt-vm:device alias="ua-01640387-95f2-452b-905f-1ab4c8840324" 
mac_address="56:6f:fc:b2:00:1f"> 
<ovirt-vm:network>ZimbraOUT</ovirt-vm:network> 
</ovirt-vm:device> 
<ovirt-vm:device devtype="disk" name="sda"> 
<ovirt-vm:domainID>efebb0ca-83a5-40b0-8cf9-1ee1b50674c8</ovirt-vm:domainID> 
<ovirt-vm:imageID>006e28f0-19c9-4457-b050-62ff257625e9</ovirt-vm:imageID> 
<ovirt-vm:managed type="bool">False</ovirt-vm:managed> 
<ovirt-vm:poolID>21979e6a-900f-11ed-80d3-000c29a8c80d</ovirt-vm:poolID> 
<ovirt-vm:volumeID>f77445e9-fcd7-4a73-bb1f-b5e49b1d6a36</ovirt-vm:volumeID> 
<ovirt-vm:volumeChain> 
<ovirt-vm:volumeChainNode> 
<ovirt-vm:domainID>efebb0ca-83a5-40b0-8cf9-1ee1b50674c8</ovirt-vm:domainID> 
<ovirt-vm:imageID>006e28f0-19c9-4457-b050-62ff257625e9</ovirt-vm:imageID> 
<ovirt-vm:leaseOffset type="int">111149056</ovirt-vm:leaseOffset> 
<ovirt-vm:leasePath>/dev/efebb0ca-83a5-40b0-8cf9-1ee1b50674c8/leases</ovirt-vm:leasePath>
<ovirt-vm:path>/rhev/data-center/mnt/blockSD/efebb0ca-83a5-40b0-8cf9-1ee1b50674c8/images/006e28f0-19c9-4457-b050-62ff257625e9/aef15109-4c9f-45aa-b69a-7b6d95d85222</ovirt-vm:path>
<ovirt-vm:volumeID>aef15109-4c9f-45aa-b69a-7b6d95d85222</ovirt-vm:volumeID> 
</ovirt-vm:volumeChainNode> 
<ovirt-vm:volumeChainNode> 
<ovirt-vm:domainID>efebb0ca-83a5-40b0-8cf9-1ee1b50674c8</ovirt-vm:domainID> 
<ovirt-vm:imageID>006e28f0-19c9-4457-b050-62ff257625e9</ovirt-vm:imageID> 
<ovirt-vm:leaseOffset type="int">130023424</ovirt-vm:leaseOffset> 
<ovirt-vm:leasePath>/dev/efebb0ca-83a5-40b0-8cf9-1ee1b50674c8/leases</ovirt-vm:leasePath>
<ovirt-vm:path>/rhev/data-center/mnt/blockSD/efebb0ca-83a5-40b0-8cf9-1ee1b50674c8/images/006e28f0-19c9-4457-b050-62ff257625e9/f77445e9-fcd7-4a73-bb1f-b5e49b1d6a36</ovirt-vm:path>
<ovirt-vm:volumeID>f77445e9-fcd7-4a73-bb1f-b5e49b1d6a36</ovirt-vm:volumeID> 
</ovirt-vm:volumeChainNode> 
</ovirt-vm:volumeChain> 
</ovirt-vm:device> 
<ovirt-vm:device devtype="cdrom" name="sdc"> 
<ovirt-vm:volumeID>371245b9-cd8e-44eb-8adc-31e9fbcc44ff</ovirt-vm:volumeID> 
<ovirt-vm:volumeChain> 
<ovirt-vm:volumeChainNode> 
<ovirt-vm:domainID>efebb0ca-83a5-40b0-8cf9-1ee1b50674c8</ovirt-vm:domainID> 
<ovirt-vm:imageID>3994e0c3-6e77-4c93-9120-6d95bbf8c2a0</ovirt-vm:imageID> 
<ovirt-vm:leaseOffset type="int">127926272</ovirt-vm:leaseOffset> 
<ovirt-vm:leasePath>/dev/efebb0ca-83a5-40b0-8cf9-1ee1b50674c8/leases</ovirt-vm:leasePath>
<ovirt-vm:path>/rhev/data-center/mnt/blockSD/efebb0ca-83a5-40b0-8cf9-1ee1b50674c8/images/3994e0c3-6e77-4c93-9120-6d95bbf8c2a0/371245b9-cd8e-44eb-8adc-31e9fbcc44ff</ovirt-vm:path>
<ovirt-vm:volumeID>371245b9-cd8e-44eb-8adc-31e9fbcc44ff</ovirt-vm:volumeID> 
</ovirt-vm:volumeChainNode> 
</ovirt-vm:volumeChain> 
</ovirt-vm:device> 
</ovirt-vm:vm> 
</metadata> 
<maxMemory slots='16' unit='KiB'>4194304</maxMemory> 
<memory unit='KiB'>1048576</memory> 
<currentMemory unit='KiB'>1048576</currentMemory> 
<vcpu placement='static' current='1'>16</vcpu> 
<iothreads>1</iothreads> 
<cputune> 
<vcpupin vcpu='0' cpuset='0-47'/> 
</cputune> 
<resource> 
<partition>/machine</partition> 
</resource> 
<sysinfo type='smbios'> 
<system> 
<entry name='manufacturer'>oVirt</entry> 
<entry name='product'>RHEL</entry> 
<entry name='version'>8.7.2206.0-1.el8</entry> 
<entry name='serial'>32353550-3135-5a43-4a32-343530483757</entry> 
<entry name='uuid'>818a5c26-6581-4569-852c-23e94ebbd5e7</entry> 
<entry name='family'>oVirt</entry> 
</system> 
</sysinfo> 
<os> 
<type arch='x86_64' machine='pc-q35-rhel8.6.0'>hvm</type> 
<loader readonly='yes' secure='no' 
type='pflash'>/usr/share/OVMF/OVMF_CODE.secboot.fd</loader> 
<nvram template='/usr/share/OVMF/OVMF_VARS.fd'>/var/lib/libvirt/qemu/nvram/818a5c26-6581-4569-852c-23e94ebbd5e7.fd</nvram>
<bootmenu enable='yes' timeout='30000'/> 
<smbios mode='sysinfo'/> 
</os> 
<features> 
<acpi/> 
<vmcoreinfo state='on'/> 
</features> 
<cpu mode='custom' match='exact' check='full'> 
<model fallback='forbid'>EPYC</model> 
<topology sockets='16' dies='1' cores='1' threads='1'/> 
<feature policy='require' name='ibpb'/> 
<feature policy='require' name='virt-ssbd'/> 
<feature policy='disable' name='monitor'/> 
<feature policy='require' name='x2apic'/> 
<feature policy='require' name='hypervisor'/> 
<feature policy='disable' name='svm'/> 
<feature policy='require' name='topoext'/> 
<numa> 
<cell id='0' cpus='0-15' memory='1048576' unit='KiB'/> 
</numa> 
</cpu> 
<clock offset='variable' adjustment='0' basis='utc'> 
<timer name='rtc' tickpolicy='catchup'/> 
<timer name='pit' tickpolicy='delay'/> 
<timer name='hpet' present='no'/> 
</clock> 
<on_poweroff>destroy</on_poweroff> 
<on_reboot>restart</on_reboot> 
<on_crash>destroy</on_crash> 
<pm> 
<suspend-to-mem enabled='no'/> 
<suspend-to-disk enabled='no'/> 
</pm> 
<devices> 
<emulator>/usr/libexec/qemu-kvm</emulator> 
<disk type='file' device='cdrom'> 
<driver name='qemu' type='raw' error_policy='report'/> 
<source index='3'/> 
<target dev='sdc' bus='sata'/> 
<readonly/> 
<boot order='1'/> 
<alias name='ua-c3c8ded0-7229-441e-a6ad-f2854c9104f3'/> 
<address type='drive' controller='0' bus='0' target='0' unit='2'/> 
</disk> 
<disk type='block' device='disk' snapshot='no'> 
<driver name='qemu' type='qcow2' cache='none' error_policy='stop' io='native'/> 
<source dev='/rhev/data-center/mnt/blockSD/efebb0ca-83a5-40b0-8cf9-1ee1b50674c8/images/006e28f0-19c9-4457-b050-62ff257625e9/f77445e9-fcd7-4a73-bb1f-b5e49b1d6a36' index='5'>
<seclabel model='dac' relabel='no'/> 
</source> 
<backingStore type='block' index='6'> 
<format type='qcow2'/> 
<source dev='/rhev/data-center/mnt/blockSD/efebb0ca-83a5-40b0-8cf9-1ee1b50674c8/images/006e28f0-19c9-4457-b050-62ff257625e9/aef15109-4c9f-45aa-b69a-7b6d95d85222'>
<seclabel model='dac' relabel='no'/> 
</source> 
<backingStore/> 
</backingStore> 
<target dev='sda' bus='scsi'/> 
<serial>006e28f0-19c9-4457-b050-62ff257625e9</serial> 
<boot order='2'/> 
<alias name='ua-006e28f0-19c9-4457-b050-62ff257625e9'/> 
<address type='drive' controller='0' bus='0' target='0' unit='0'/> 
</disk> 
<controller type='usb' index='0' model='qemu-xhci' ports='8'> 
<alias name='ua-6ea483c0-d13d-43bb-9cdc-95c4219b58bd'/> 
<address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/> 
</controller> 
<controller type='virtio-serial' index='0' ports='16'> 
<alias name='ua-8ac37f08-a78b-404c-9edb-b2cba37dfab4'/> 
<address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/> 
</controller> 
<controller type='scsi' index='0' model='virtio-scsi'> 
<driver iothread='1'/> 
<alias name='ua-9cf06f2f-96e0-47b1-977d-eaaab78b501b'/> 
<address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/> 
</controller> 
<controller type='pci' index='0' model='pcie-root'> 
<alias name='pcie.0'/> 
</controller> 
<controller type='pci' index='1' model='pcie-root-port'> 
<model name='pcie-root-port'/> 
<target chassis='1' port='0x10'/> 
<alias name='pci.1'/> 
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' 
multifunction='on'/> 
</controller> 
<controller type='pci' index='2' model='pcie-root-port'> 
<model name='pcie-root-port'/> 
<target chassis='2' port='0x11'/> 
<alias name='pci.2'/> 
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/> 
</controller> 
<controller type='pci' index='3' model='pcie-root-port'> 
<model name='pcie-root-port'/> 
<target chassis='3' port='0x12'/> 
<alias name='pci.3'/> 
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/> 
</controller> 
<controller type='pci' index='4' model='pcie-root-port'> 
<model name='pcie-root-port'/> 
<target chassis='4' port='0x13'/> 
<alias name='pci.4'/> 
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/> 
</controller> 
<controller type='pci' index='5' model='pcie-root-port'> 
<model name='pcie-root-port'/> 
<target chassis='5' port='0x14'/> 
<alias name='pci.5'/> 
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/> 
</controller> 
<controller type='pci' index='6' model='pcie-root-port'> 
<model name='pcie-root-port'/> 
<target chassis='6' port='0x15'/> 
<alias name='pci.6'/> 
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/> 
</controller> 
<controller type='pci' index='7' model='pcie-root-port'> 
<model name='pcie-root-port'/> 
<target chassis='7' port='0x16'/> 
<alias name='pci.7'/> 
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/> 
</controller> 
<controller type='pci' index='8' model='pcie-root-port'> 
<model name='pcie-root-port'/> 
<target chassis='8' port='0x17'/> 
<alias name='pci.8'/> 
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/> 
</controller> 
<controller type='pci' index='9' model='pcie-root-port'> 
<model name='pcie-root-port'/> 
<target chassis='9' port='0x18'/> 
<alias name='pci.9'/> 
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' 
multifunction='on'/> 
</controller> 
<controller type='pci' index='10' model='pcie-root-port'> 
<model name='pcie-root-port'/> 
<target chassis='10' port='0x19'/> 
<alias name='pci.10'/> 
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/> 
</controller> 
<controller type='pci' index='11' model='pcie-root-port'> 
<model name='pcie-root-port'/> 
<target chassis='11' port='0x1a'/> 
<alias name='pci.11'/> 
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/> 
</controller> 
<controller type='pci' index='12' model='pcie-root-port'> 
<model name='pcie-root-port'/> 
<target chassis='12' port='0x1b'/> 
<alias name='pci.12'/> 
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/> 
</controller> 
<controller type='pci' index='13' model='pcie-root-port'> 
<model name='pcie-root-port'/> 
<target chassis='13' port='0x1c'/> 
<alias name='pci.13'/> 
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/> 
</controller> 
<controller type='pci' index='14' model='pcie-root-port'> 
<model name='pcie-root-port'/> 
<target chassis='14' port='0x1d'/> 
<alias name='pci.14'/> 
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/> 
</controller> 
<controller type='pci' index='15' model='pcie-root-port'> 
<model name='pcie-root-port'/> 
<target chassis='15' port='0x1e'/> 
<alias name='pci.15'/> 
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/> 
</controller> 
<controller type='pci' index='16' model='pcie-root-port'> 
<model name='pcie-root-port'/> 
<target chassis='16' port='0x1f'/> 
<alias name='pci.16'/> 
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/> 
</controller> 
<controller type='sata' index='0'> 
<alias name='ide'/> 
<address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/> 
</controller> 
<interface type='bridge'> 
<mac address='56:6f:fc:b2:00:1f'/> 
<source bridge='ZimbraOUT'/> 
<target dev='vnet98'/> 
<model type='virtio'/> 
<filterref filter='vdsm-no-mac-spoofing'/> 
<link state='up'/> 
<mtu size='1500'/> 
<boot order='3'/> 
<alias name='ua-01640387-95f2-452b-905f-1ab4c8840324'/> 
<address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/> 
</interface> 
<channel type='unix'> 
<source mode='bind' path='/var/lib/libvirt/qemu/channels/818a5c26-6581-4569-852c-23e94ebbd5e7.ovirt-guest-agent.0'/>
<target type='virtio' name='ovirt-guest-agent.0' state='disconnected'/> 
<alias name='channel0'/> 
<address type='virtio-serial' controller='0' bus='0' port='1'/> 
</channel> 
<channel type='unix'> 
<source mode='bind' path='/var/lib/libvirt/qemu/channels/818a5c26-6581-4569-852c-23e94ebbd5e7.org.qemu.guest_agent.0'/>
<target type='virtio' name='org.qemu.guest_agent.0' state='disconnected'/> 
<alias name='channel1'/> 
<address type='virtio-serial' controller='0' bus='0' port='2'/> 
</channel> 
<input type='tablet' bus='usb'> 
<alias name='input0'/> 
<address type='usb' bus='0' port='1'/> 
</input> 
<input type='mouse' bus='ps2'> 
<alias name='input1'/> 
</input> 
<input type='keyboard' bus='ps2'> 
<alias name='input2'/> 
</input> 
<graphics type='vnc' port='5912' autoport='yes' listen='192.168.6.112' 
keymap='pt' passwdValidTo='2024-12-05T11:32:23'> 
<listen type='network' address='192.168.6.112' network='vdsm-ovirtmgmt'/> 
</graphics> 
<audio id='1' type='none'/> 
<video> 
<model type='virtio' vram='16384' heads='1' primary='yes'/> 
<alias name='ua-1a3762f1-ff7f-4fa9-9681-c6fea475a8dd'/> 
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/> 
</video> 
<memballoon model='virtio'> 
<stats period='5'/> 
<alias name='ua-8304854e-731b-4b52-a588-8bf38cd8b804'/> 
<address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/> 
</memballoon> 
<rng model='virtio'> 
<backend model='random'>/dev/urandom</backend> 
<alias name='ua-52a31dc2-d4bb-4315-8706-8200f80ac55c'/> 
<address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/> 
</rng> 
</devices> 
<seclabel type='dynamic' model='selinux' relabel='yes'> 
<label>system_u:system_r:svirt_t:s0:c146,c484</label> 
<imagelabel>system_u:object_r:svirt_image_t:s0:c146,c484</imagelabel> 
</seclabel> 
<seclabel type='dynamic' model='dac' relabel='yes'> 
<label>+107:+107</label> 
<imagelabel>+107:+107</imagelabel> 
</seclabel> 
<qemu:capabilities> 
<qemu:add capability='blockdev'/> 
<qemu:add capability='incremental-backup'/> 
</qemu:capabilities> 
</domain> 

Thanks 
José 


De: "Jean-Louis Dupond" [ mailto:jean-lo...@dupond.be | <jean-lo...@dupond.be> 
] 
Para: "suporte" [ mailto:supo...@logicworks.pt | <supo...@logicworks.pt> ] 
Cc: "Colin Coe" [ mailto:colin....@gmail.com | <colin....@gmail.com> ] , 
"users" [ mailto:users@ovirt.org | <users@ovirt.org> ] 
Itens enviados: Terça-feira, 5 de Novembro de 2024 11:04:12 
Assunto: Re: [ovirt-users] Re: VM has been paused due to no Storage space 
error. 



Hi, 

The information is really unclear.
In previous emails you stated you use preallocated disks; now you tell us it
only happens on thin-provisioned disks.
Also, the VM disk was 300GB, and now it's 80GB.

How did you check the LV size? Can you show me the command and its output?
What does vdsm-tool dump-volume-chains <storage-domain-id> give for the disk that
has the issue? Does the VM have one disk or more?
Can you also run virsh --readonly dumpxml <vm> on the hypervisor where the VM
runs and share the output?
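
For the LV size, something like this on the hypervisor would show it (the UUID is a placeholder; on block storage the VG name is the storage domain UUID):

# lvs --units g -o lv_name,lv_size,lv_tags <storage-domain-uuid>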

Thanks 
Jean-Louis 
On 5/11/2024 11:55, supo...@logicworks.pt wrote:


Hi, 

Can't find any errors in the vdsm log. What should I look for?
The size of the Logical Volume for that disk is 80GB.
We have plenty of free space.
The VM cannot start again. It only happens with thin-provisioned disks. If I move
the disk of the paused VM to another storage domain it works for a day and then it
pauses again.

Thanks 
José 


De: "Jean-Louis Dupond" [ mailto:jean-lo...@dupond.be | <jean-lo...@dupond.be> 
] 
Para: "suporte" [ mailto:supo...@logicworks.pt | <supo...@logicworks.pt> ] , 
"Colin Coe" [ mailto:colin....@gmail.com | <colin....@gmail.com> ] 
Cc: "users" [ mailto:users@ovirt.org | <users@ovirt.org> ] 
Itens enviados: Segunda-feira, 4 de Novembro de 2024 12:14:30 
Assunto: Re: [ovirt-users] Re: VM has been paused due to no Storage space 
error. 



Hi, 

I think you need to dig deeper into the issue.
- What do the vdsm logs show, for example?
- What is the size of the Logical Volume for that disk?
- Does the Volume Group (checked on that hypervisor) still show free space? (See the sketch below.)
- Is there no way to start the VM again without it getting paused?
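
For the Volume Group check, a sketch (on block storage the VG name is normally the storage domain UUID; the UUID below is a placeholder):

# vgs --units g -o vg_name,vg_size,vg_free <storage-domain-uuid>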

Jean-Louis 
On 4/11/2024 13:09, supo...@logicworks.pt wrote:


But we are on Version 4.5.4-1.el8, so we still don't know.


From: "suporte" [ mailto:supo...@logicworks.pt | <supo...@logicworks.pt> ] 
To: "Colin Coe" [ mailto:colin....@gmail.com | <colin....@gmail.com> ] 
Cc: "Jean-Louis Dupond" [ mailto:jean-lo...@dupond.be | <jean-lo...@dupond.be> 
] , "users" [ mailto:users@ovirt.org | <users@ovirt.org> ] 
Sent: Monday, November 4, 2024 12:08:11 PM 
Subject: Re: [ovirt-users] Re: VM has been paused due to no Storage space 
error. 

If I understand correctly, upgrading will solve the issue!


From: "Colin Coe" [ mailto:colin....@gmail.com | <colin....@gmail.com> ] 
To: "suporte" [ mailto:supo...@logicworks.pt | <supo...@logicworks.pt> ] 
Cc: "Jean-Louis Dupond" [ mailto:jean-lo...@dupond.be | <jean-lo...@dupond.be> 
] , "users" [ mailto:users@ovirt.org | <users@ovirt.org> ] 
Sent: Monday, November 4, 2024 12:04:19 PM 
Subject: Re: [ovirt-users] Re: VM has been paused due to no Storage space 
error. 

I don't think this article will help you. It specifically talks about upgrading
to RHV 4.4 SP1 (which is oVirt 4.5). There is a config workaround, but it says
not to use it on RHV 4.4 SP1.


On Mon, 4 Nov 2024 at 19:30, José Ferradeira via Users <users@ovirt.org> wrote:


Hi, 

I found this: https://access.redhat.com/solutions/130843
But I have no access.

Any help? 

Thanks 
José 


From: "José Ferradeira via Users" < [ mailto:users@ovirt.org | users@ovirt.org 
] > 
To: "Jean-Louis Dupond" < [ mailto:jean-lo...@dupond.be | jean-lo...@dupond.be 
] > 
Cc: "users" < [ mailto:users@ovirt.org | users@ovirt.org ] > 
Sent: Wednesday, August 21, 2024 11:27:47 AM 
Subject: [ovirt-users] Re: VM has been paused due to no Storage space error. 

Hi,
OK, discard is not enabled on the disk.
Reading https://gitlab.com/qemu-project/qemu/-/issues/1621

I'm sure the image did not grow over 110% of its virtual size.
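
A sketch of how that could be double-checked on the hypervisor (the UUIDs are placeholders, and the LV must be active for qemu-img to read it):

# lvs --units b -o lv_name,lv_size <storage-domain-uuid>/<volume-uuid>
# qemu-img info /dev/<storage-domain-uuid>/<volume-uuid>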

Any idea? 

Thanks 
José 


From: "Jean-Louis Dupond" < [ mailto:jean-lo...@dupond.be | 
jean-lo...@dupond.be ] > 
To: [ mailto:supo...@logicworks.pt | supo...@logicworks.pt ] 
Cc: "users" < [ mailto:users@ovirt.org | users@ovirt.org ] > 
Sent: Wednesday, August 21, 2024 10:17:22 AM 
Subject: Re: [ovirt-users] VM has been paused due to no Storage space error. 



Hi, 

Not talking about discard on the storage domain, but at the disk/VM level.
Is it enabled there?

Because if that is the cause, it could be
https://gitlab.com/qemu-project/qemu/-/issues/1621
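
One way to check, as a sketch: if discard is enabled for the disk, the <driver> element in the domain XML carries discard='unmap', so something like this would show it:

# virsh --readonly dumpxml <vm> | grep -i discard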
On 21/08/2024 11:14, supo...@logicworks.pt wrote:


Hi Jean-Louis, 

The VM disk is 300GB with 216GB available, preallocated on an iSCSI storage domain
with 3532GB total and 1932GB free.
Incremental backup is enabled. Discard After Delete is not enabled.

Thanks 
José 


From: "Jean-Louis Dupond" [ mailto:jean-lo...@dupond.be | 
<jean-lo...@dupond.be> ] 
To: [ mailto:supo...@logicworks.pt | supo...@logicworks.pt ] , "users" [ 
mailto:users@ovirt.org | <users@ovirt.org> ] 
Sent: Wednesday, August 21, 2024 9:12:53 AM 
Subject: Re: [ovirt-users] VM has been paused due to no Storage space error. 



Hi Jose, 

How big are the disks? And what size are they on the storage? 
I guess the images are qcow2? 

Is discard enabled? 

Jean-Louis 
On 20/08/2024 13:42, José Ferradeira via Users wrote: 


Hello, 

Running oVirt Version 4.5.4-1.el8 on CentOS 8, we randomly get this error:
VM has been paused due to no Storage space error.
We have plenty of space on the iSCSI storage. This is a preallocated disk,
VirtIO-SCSI.
No user interaction. It has happened, so far, with 3 VMs, Windows and Ubuntu.
This service was stopped: dnf-makecache.service
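
For background on this class of pause: on block storage, VDSM grows thin qcow2 volumes in chunks and requests an extension when a write watermark is reached; if the extension arrives too late the guest sees ENOSPC and the VM is paused. A sketch of the related knobs in /etc/vdsm/vdsm.conf (option names as in VDSM's defaults; treat the exact values shown as assumptions):

[irs]
# request an extension when the volume is this percent full
volume_utilization_percent = 50
# grow the LV by this many MB per extension
volume_utilization_chunk_mb = 1024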

This is what I found on the engine log: 

2024-08-19 01:04:35,522+01 INFO 
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] 
(ForkJoinPool-1-worker-25) [eb7e5f1] VM 
'ccc65521-934d-4f77-adf3-9f9eeb83a4f8'(Bravo) moved from 'Up' --> 'Paused' 
2024-08-19 01:04:35,665+01 ERROR 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(ForkJoinPool-1-worker-25) [eb7e5f1] EVENT_ID: VM_PAUSED_ENOSPC(138), VM Bravo 
has been paused due to no Storage space error. 
2024-08-19 09:26:35,855+01 INFO 
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] 
(ForkJoinPool-1-worker-29) [72482216] VM 
'ccc65521-934d-4f77-adf3-9f9eeb83a4f8'(Bravo) moved from 'Paused' --> 'Down' 
2024-08-19 09:26:48,114+01 INFO 
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] 
(ForkJoinPool-1-worker-15) [72482216] VM 
'ccc65521-934d-4f77-adf3-9f9eeb83a4f8'(Bravo) moved from 'WaitForLaunch' --> 
'PoweringUp' 
2024-08-19 09:27:50,062+01 INFO 
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] 
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-6) [] VM 
'ccc65521-934d-4f77-adf3-9f9eeb83a4f8'(Bravo) moved from 'PoweringUp' --> 'Up' 
2024-08-19 09:29:25,145+01 INFO 
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] 
(ForkJoinPool-1-worker-15) [72482216] VM 
'ccc65521-934d-4f77-adf3-9f9eeb83a4f8'(Bravo) moved from 'Up' --> 'Paused' 
2024-08-19 09:29:25,273+01 ERROR 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(ForkJoinPool-1-worker-15) [72482216] EVENT_ID: VM_PAUSED_ENOSPC(138), VM Bravo 
has been paused due to no Storage space error. 
2024-08-19 09:37:26,128+01 INFO 
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] 
(ForkJoinPool-1-worker-15) [6d88f065] VM 
'ccc65521-934d-4f77-adf3-9f9eeb83a4f8'(Bravo) moved from 'Paused' --> 'Down' 
2024-08-19 09:41:43,300+01 INFO 
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] 
(ForkJoinPool-1-worker-15) [6d88f065] VM 
'ccc65521-934d-4f77-adf3-9f9eeb83a4f8'(Bravo) moved from 'WaitForLaunch' --> 
'PoweringUp' 
2024-08-19 09:42:14,882+01 INFO 
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] 
(ForkJoinPool-1-worker-23) [6d88f065] VM 
'ccc65521-934d-4f77-adf3-9f9eeb83a4f8'(Bravo) moved from 'PoweringUp' --> 'Up' 
2024-08-19 09:42:59,792+01 INFO 
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] 
(ForkJoinPool-1-worker-15) [6d88f065] VM 
'ccc65521-934d-4f77-adf3-9f9eeb83a4f8'(Bravo) moved from 'Up' --> 'Paused' 
2024-08-19 09:42:59,894+01 ERROR 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(ForkJoinPool-1-worker-15) [6d88f065] EVENT_ID: VM_PAUSED_ENOSPC(138), VM Bravo 
has been paused due to no Storage space error. 
2024-08-19 09:45:30,334+01 INFO 
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] 
(ForkJoinPool-1-worker-15) [6b3d8ee] VM 
'ccc65521-934d-4f77-adf3-9f9eeb83a4f8'(Bravo) moved from 'Paused' --> 'Down' 
2024-08-19 09:47:51,068+01 INFO 
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] 
(ForkJoinPool-1-worker-15) [6b3d8ee] VM 
'ccc65521-934d-4f77-adf3-9f9eeb83a4f8'(Bravo) moved from 'WaitForLaunch' --> 
'PoweringUp' 
2024-08-19 09:48:50,710+01 INFO 
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] 
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-80) [] VM 
'ccc65521-934d-4f77-adf3-9f9eeb83a4f8'(Bravo) moved from 'PoweringUp' --> 'Up' 
2024-08-19 10:06:38,810+01 INFO 
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] 
(ForkJoinPool-1-worker-15) [1dd98021] VM 
'ccc65521-934d-4f77-adf3-9f9eeb83a4f8'(Bravo) moved from 'PoweringDown' --> 
'Down' 
2024-08-19 10:08:11,606+01 INFO 
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] 
(ForkJoinPool-1-worker-15) [1dd98021] VM 
'ccc65521-934d-4f77-adf3-9f9eeb83a4f8'(Bravo) moved from 'WaitForLaunch' --> 
'PoweringUp' 
2024-08-19 10:09:12,507+01 INFO 
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] 
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-25) [] VM 
'ccc65521-934d-4f77-adf3-9f9eeb83a4f8'(Bravo) moved from 'PoweringUp' --> 'Up' 
2024-08-19 10:21:13,835+01 INFO 
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] 
(ForkJoinPool-1-worker-15) [63fa2421] VM 
'ccc65521-934d-4f77-adf3-9f9eeb83a4f8'(Bravo) moved from 'Up' --> 'Down' 
2024-08-19 10:25:19,302+01 INFO 
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] 
(ForkJoinPool-1-worker-15) [63fa2421] VM 
'ccc65521-934d-4f77-adf3-9f9eeb83a4f8'(Bravo) moved from 'WaitForLaunch' --> 
'PoweringUp' 
2024-08-19 10:26:05,456+01 INFO 
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] 
(ForkJoinPool-1-worker-3) [63fa2421] VM 
'ccc65521-934d-4f77-adf3-9f9eeb83a4f8'(Bravo) moved from 'PoweringUp' --> 'Up' 

And we cannot start the VM anymore. 

Any idea? 
Thanks 

-- 

Jose Ferradeira 
http://www.logicworks.pt








