Hi,
I was asked about a very similar case and needed to debug it.
So I thought I'd give the issue reported here a try to see how it looks today.
virt-install creates a guest with a command like:
-drive
file=/dev/LVMpool/test-snapshot-virtinst,format=raw,if=none,id=drive-virtio-disk0,cache=none,aio=native
## Solved - the confusion came from pre-existing other apparmor rules; pools
are unsupported by libvirt/apparmor ##
In this case virt-install pre-creates an LV of a given size and passes just
that to the guest.
This is different from using the actual pool feature.
Given that, I'm "ok" that it doesn't need a special apparmor rule.
From the guest/apparmor point of view, when the guest starts the path is known
and added to the guest's profile.
(With a pool ref in the guest that would not have worked)
## experiments - setup ##
Let's define a guest that has a qcow and an LVM disk we can snapshot for
experiments.
We will use the disk created in the test above, but in a uvtool guest, to get
rid of all virt-install special quirks.
The other disk is just a qcow file.
$ sudo qemu-img create -f qcow2
/var/lib/uvtool/libvirt/images/eoan-snapshot-test.qcow 1G
Formatting '/var/lib/uvtool/libvirt/images/eoan-snapshot-test.qcow', fmt=qcow2
size=1073741824 cluster_size=65536 lazy_refcounts=off refcount_bits=16
The config for those looks like:
qcow:
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2'/>
<source file='/var/lib/uvtool/libvirt/images/eoan-snapshot-test.qcow'/>
<target dev='vdc' bus='virtio'/>
</disk>
CMD: -drive
file=/var/lib/uvtool/libvirt/images/eoan-snapshot-test.qcow,format=qcow2,if=none,id=drive-virtio-disk2
apparmor: "/var/lib/uvtool/libvirt/images/eoan-snapshot-test.qcow" rwk,
disk:
<disk type='block' device='disk'>
<driver name='qemu' type='raw'/>
<source dev='/dev/LVMpool/test-snapshot-virtinst'/>
<target dev='vdd' bus='virtio'/>
</disk>
CMD: -drive
file=/dev/LVMpool/test-snapshot-virtinst,format=raw,if=none,id=drive-virtio-disk3
apparmor: "/dev/dm-11" rwk,
which is a match, as:
$ ll /dev/LVMpool/test-snapshot-virtinst
lrwxrwxrwx 1 root root 8 Sep 11 05:14 /dev/LVMpool/test-snapshot-virtinst ->
../dm-11
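The profile ends up with the canonicalized symlink target, which you can
reproduce with readlink -f; a minimal sketch with hypothetical paths under a
temp dir (the real check runs against /dev):

```shell
# Simulate the /dev/LVMpool/test-snapshot-virtinst -> ../dm-11 layout
# under a temp dir (hypothetical paths, not the real device nodes)
tmp=$(mktemp -d)
mkdir -p "$tmp/dev/LVMpool"
touch "$tmp/dev/dm-11"
ln -s ../dm-11 "$tmp/dev/LVMpool/test-snapshot-virtinst"

# readlink -f canonicalizes the LV path down to the dm node,
# which is the form of the path that lands in the apparmor rule
resolved=$(readlink -f "$tmp/dev/LVMpool/test-snapshot-virtinst")
echo "$resolved"
```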
## experiments - snapshotting ##
For details of the spec see: https://libvirt.org/formatdomain.html
Snapshot of just the qcow file:
$ virsh snapshot-create-as --print-xml --domain eoan-snapshot --disk-only
--atomic --diskspec vda,snapshot=no --diskspec vdb,snapshot=no --diskspec
vdc,file=/var/lib/libvirt/images/eoan-snapshot-test.thesnapshot.qcow,snapshot=external
--diskspec vdd,snapshot=no
$ virsh snapshot-list eoan-snapshot
Name Creation Time State
------------------------------------------------------------
1568196836 2019-09-11 06:13:56 -0400 disk-snapshot
The snapshot got added to the apparmor profile:
"/var/lib/libvirt/images/eoan-snapshot-test.thesnapshot.qcow" rwk,
The position shows that this was done with the "append" feature of
virt-aa-helper.
So it did not re-parse the guest, but just added one more entry (as it would do
on hotplug).
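One way to see that is that an appended rule shows up as the last line of the
.files profile instead of being regenerated in place with the other rules; a
sketch with simulated file contents (on a real host, inspect
/etc/apparmor.d/libvirt/libvirt-<UUID>.files instead):

```shell
# Hypothetical sample of a libvirt-<UUID>.files profile after the snapshot
files_profile='"/var/lib/uvtool/libvirt/images/eoan-snapshot-test.qcow" rwk,
"/var/lib/libvirt/images/eoan-snapshot-test.thesnapshot.qcow" rwk,'

# An entry added via the append feature lands at the end of the file
last_rule=$(printf '%s\n' "$files_profile" | tail -n 1)
echo "$last_rule"
```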
I'm not trying to LVM-snapshot here, as that seems not to be what was asked
for.
Furthermore, LVM has its own capabilities to do so.
## check status after snapshot ##
The guest now has the new snapshot as its main file and the old one as a
backing file (COW chain):
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2'/>
<source
file='/var/lib/libvirt/images/eoan-snapshot-test.thesnapshot.qcow'/>
<backingStore type='file' index='1'>
<format type='qcow2'/>
<source file='/var/lib/uvtool/libvirt/images/eoan-snapshot-test.qcow'/>
<backingStore/>
</backingStore>
<target dev='vdc' bus='virtio'/>
<alias name='virtio-disk2'/>
<address type='ccw' cssid='0xfe' ssid='0x0' devno='0x0004'/>
</disk>
Please do mind that this is the "runtime view"; once shut down you'll only see
the new snapshot.
This is confirmed by the metadata in the qcow file.
$ sudo qemu-img info /var/lib/libvirt/images/eoan-snapshot-test.thesnapshot.qcow
image: /var/lib/libvirt/images/eoan-snapshot-test.thesnapshot.qcow
file format: qcow2
virtual size: 1.0G (1073741824 bytes)
disk size: 196K
cluster_size: 65536
backing file: /var/lib/uvtool/libvirt/images/eoan-snapshot-test.qcow
backing file format: qcow2
Format specific information:
compat: 1.1
lazy refcounts: false
refcount bits: 16
corrupt: false
## restart guest ##
XML of the inactive guest is as expected:
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2'/>
<source
file='/var/lib/libvirt/images/eoan-snapshot-test.thesnapshot.qcow'/>
<target dev='vdc' bus='virtio'/>
<address type='ccw' cssid='0xfe' ssid='0x0' devno='0x0004'/>
</disk>
The guest starts just fine (still with all 4 disk entries):
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2'/>
<source file='/var/lib/uvtool/libvirt/images/eoan-snapshot.qcow'/>
<backingStore type='file' index='1'>
<format type='qcow2'/>
<source
file='/var/lib/uvtool/libvirt/images/x-uvt-b64-Y29tLnVidW50dS5jbG91ZC5kYWlseTpzZXJ2ZXI6MTkuMTA6czM5MHggMjAxOTA5MDY='/>
<backingStore/>
</backingStore>
<target dev='vda' bus='virtio'/>
<alias name='virtio-disk0'/>
<address type='ccw' cssid='0xfe' ssid='0x0' devno='0x0000'/>
</disk>
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2'/>
<source file='/var/lib/uvtool/libvirt/images/eoan-snapshot-ds.qcow'/>
<backingStore/>
<target dev='vdb' bus='virtio'/>
<alias name='virtio-disk1'/>
<address type='ccw' cssid='0xfe' ssid='0x0' devno='0x0001'/>
</disk>
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2'/>
<source
file='/var/lib/libvirt/images/eoan-snapshot-test.thesnapshot.qcow'/>
<backingStore type='file' index='1'>
<format type='qcow2'/>
<source file='/var/lib/uvtool/libvirt/images/eoan-snapshot-test.qcow'/>
<backingStore/>
</backingStore>
<target dev='vdc' bus='virtio'/>
<alias name='virtio-disk2'/>
<address type='ccw' cssid='0xfe' ssid='0x0' devno='0x0004'/>
</disk>
<disk type='block' device='disk'>
<driver name='qemu' type='raw'/>
<source dev='/dev/LVMpool/test-snapshot-virtinst'/>
<backingStore/>
<target dev='vdd' bus='virtio'/>
<alias name='virtio-disk3'/>
<address type='ccw' cssid='0xfe' ssid='0x0' devno='0x0005'/>
</disk>
The apparmor profile includes the backing chains:
"/var/lib/uvtool/libvirt/images/eoan-snapshot.qcow" rwk,
"/var/lib/uvtool/libvirt/images/x-uvt-b64-Y29tLnVidW50dS5jbG91ZC5kYWlseTpzZXJ2ZXI6MTkuMTA6czM5MHggMjAxOTA5MDY="
rk,
"/var/lib/uvtool/libvirt/images/eoan-snapshot-ds.qcow" rwk,
"/var/lib/libvirt/images/eoan-snapshot-test.thesnapshot.qcow" rwk,
"/var/lib/uvtool/libvirt/images/eoan-snapshot-test.qcow" rk,
"/dev/dm-11" rwk,
Everything works just fine nowadays, even through a restart.
## LVM snapshots ##
While at it, let's double-check LVM snapshots, as those are often used as well
and I was asked about them.
I still use this as vdd:
<disk type='block' device='disk'>
<driver name='qemu' type='raw'/>
<source dev='/dev/LVMpool/test-snapshot-virtinst'/>
<backingStore/>
<target dev='vdd' bus='virtio'/>
<alias name='virtio-disk3'/>
<address type='ccw' cssid='0xfe' ssid='0x0' devno='0x0005'/>
</disk>
Creating an LVM snapshot looks like:
$ sudo lvcreate -L1G -s -n /dev/LVMpool/test-snapshot-virtinst-snap
/dev/LVMpool/test-snapshot-virtinst
Using default stripesize 64.00 KiB.
Logical volume "test-snapshot-virtinst-snap" created.
Which gives me:
$ sudo lvdisplay /dev/LVMpool
--- Logical volume ---
LV Path /dev/LVMpool/test-snapshot-virtinst
LV Name test-snapshot-virtinst
VG Name LVMpool
LV UUID H0FfqR-619v-KAeJ-GhXU-p9K5-dns8-fP8NGX
LV Write Access read/write
LV Creation host, time s1lp05, 2019-09-11 05:14:52 -0400
LV snapshot status source of
test-snapshot-virtinst-snap [active]
LV Status available
# open 2
LV Size 1.00 GiB
Current LE 256
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:11
--- Logical volume ---
LV Path /dev/LVMpool/test-snapshot-virtinst-snap
LV Name test-snapshot-virtinst-snap
VG Name LVMpool
LV UUID hTPlXa-o3vj-yjrE-Igse-H7K7-NimG-ipTdoH
LV Write Access read/write
LV Creation host, time s1lp05, 2019-09-11 06:44:05 -0400
LV snapshot status active destination for test-snapshot-virtinst
LV Status available
# open 0
LV Size 1.00 GiB
Current LE 256
COW-table size 1.00 GiB
COW-table LE 256
Allocated to snapshot 0.00%
Snapshot chunk size 4.00 KiB
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:14
Writing into it from the guest makes the snapshot's allocation change:
$ sudo lvs
  LV                          VG      Attr       LSize Pool Origin                 Data%  Meta%  Move Log Cpy%Sync Convert
  test-snapshot-virtinst      LVMpool owi-aos--- 1.00g
  test-snapshot-virtinst-snap LVMpool swi-a-s--- 1.00g      test-snapshot-virtinst 9.80
Since LVM keeps the old name for the origin, the device that continues to be
written to stays the same:
$ ll /dev/LVMpool/
lrwxrwxrwx 1 root root 8 Sep 11 06:44 test-snapshot-virtinst -> ../dm-11
lrwxrwxrwx 1 root root 8 Sep 11 06:44 test-snapshot-virtinst-snap ->
../dm-14
Still dm-11, which matches the apparmor rule:
"/dev/dm-11" rwk,
Checking whether it is an issue to restart the guest with the LVM snapshot
attached: no, shutdown and start still work fine.
With that said I think we can close this old bug nowadays.
P.S. There is a case a friend of mine reported with qcow snapshots on LVM that
sounds odd and is broken.
I'll track that down in another place, as it has nothing to do with the issue
that was reported here.
** Changed in: apparmor (Ubuntu)
Status: New => Fix Released
--
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to apparmor in Ubuntu.
https://bugs.launchpad.net/bugs/1525310
Title:
virsh with apparmor misconfigures libvirt-UUID files during snapshot
Status in apparmor package in Ubuntu:
Fix Released
Bug description:
Reproducible: Yes, every time.
Background:
When you create a virtual machine (VM) under KVM/Qemu in Ubuntu,
apparmor files are created as:
/etc/apparmor.d/libvirt/libvirt-<UUID>
and
/etc/apparmor.d/libvirt/libvirt-<UUID>.files
And in the file /etc/apparmor.d/libvirt/libvirt-<UUID>.files there is
the line
"PATH_to_BLOCK_DEVICE" rw,
where PATH_to_BLOCK_DEVICE is the full path name of the image. ( E.g.
something like /var/lib/libvirtd/images/asdf.qcow2)
and <UUID> is the UUID of the VM container.
The problem:
When creating a snapshot of a running VM under KVM/Qemu you run the
command
$ sudo virsh snapshot-create-as DOMAIN_NAME DESCRIPTION --no-metadata --disk-only --atomic
which creates a new file and stops writing to the old VM block device.
However: the old PATH_to_BLOCK_DEVICE in
/etc/apparmor.d/libvirt/libvirt-UUID.files is deleted and replaced with the
new block device info BEFORE virsh is done creating the snapshot. So you get
the error
error: internal error: unable to execute QEMU command 'transaction':
Could not open 'PATH_to_BLOCK_DEVICE': Could not open
'PATH_to_BLOCK_DEVICE': Permission denied: Permission denied
and in /var/log/syslog you get the error:
type=1400 audit(1449752104.054:539): apparmor="DENIED"
operation="open" profile="libvirt-<UUID>" name="PATH_to_BLOCK_DEVICE"
pid=8710 comm="qemu-system-x86" requested_mask="r" denied_mask="r"
fsuid=106 ouid=106
When you look now at /etc/apparmor.d/libvirt/libvirt-<UUID>.files you
find that the line that was there
"PATH_to_BLOCK_DEVICE" rw,
has been replaced with
"PATH_to_BLOCK_DEVICE.DESCRIPTION" rw,
but you need BOTH LINES in order for the command "virsh snapshot-create-as" to
work (or at least the old file needs read permissions).
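In other words, during the snapshot the .files profile would need something
like the following (a sketch using the report's placeholders, not the exact
generated content; the pre-snapshot image needs at least read access to serve
as the backing file):

```
"PATH_to_BLOCK_DEVICE" r,
"PATH_to_BLOCK_DEVICE.DESCRIPTION" rw,
```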
-----
Workarounds:
1. Disable apparmor for libvirtd
or
2. Change /etc/apparmor.d/libvirt/libvirt-<UUID> to look like this
----------
#
# This profile is for the domain whose UUID matches this file.
#
#include <tunables/global>
profile libvirt-UUID {
#include <abstractions/libvirt-qemu>
#include <libvirt/libvirt-UUID.files>
"PATH_to_BLOCK_DEVICE*" rw,
}
-----------
(So if the old line was
"/var/lib/libvirtd/images/asdf.qcow2" rw,
the line you can add would read something like:
"/var/lib/libvirtd/images/asdf*" rw,
)
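That wildcard works because apparmor's '*' matches any trailing characters
within the last path component; a quick shell sketch of the same matching
(shell case globs behave comparably for these suffixes, and the names are the
hypothetical ones from above):

```shell
# The rule "/var/lib/libvirtd/images/asdf*" rw, covers both the base image
# and any snapshot suffix; a shell case glob matches the same way here
# (note: apparmor's '*' does not cross '/', which these suffixes don't contain)
matched=0
for path in /var/lib/libvirtd/images/asdf.qcow2 \
            /var/lib/libvirtd/images/asdf.qcow2.DESCRIPTION; do
    case "$path" in
        /var/lib/libvirtd/images/asdf*) matched=$((matched + 1)) ;;
    esac
done
echo "matched: $matched"   # prints "matched: 2"
```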
--------
Details on server:
# lsb_release -rd
Description: Ubuntu 14.04.3 LTS
Release: 14.04
# apt-cache policy apparmor
apparmor:
Installed: 2.8.95~2430-0ubuntu5.3
Candidate: 2.8.95~2430-0ubuntu5.3
Version table:
*** 2.8.95~2430-0ubuntu5.3 0
500 http://us.archive.ubuntu.com/ubuntu/ trusty-updates/main amd64
Packages
100 /var/lib/dpkg/status
2.8.95~2430-0ubuntu5.1 0
500 http://security.ubuntu.com/ubuntu/ trusty-security/main amd64
Packages
2.8.95~2430-0ubuntu5 0
500 http://us.archive.ubuntu.com/ubuntu/ trusty/main amd64 Packages
# apt-cache policy libvirt-bin
libvirt-bin:
Installed: 1.2.2-0ubuntu13.1.14
Candidate: 1.2.2-0ubuntu13.1.14
Version table:
*** 1.2.2-0ubuntu13.1.14 0
500 http://us.archive.ubuntu.com/ubuntu/ trusty-updates/main amd64
Packages
100 /var/lib/dpkg/status
1.2.2-0ubuntu13.1.7 0
500 http://security.ubuntu.com/ubuntu/ trusty-security/main amd64
Packages
1.2.2-0ubuntu13 0
500 http://us.archive.ubuntu.com/ubuntu/ trusty/main amd64 Packages
-----
Apologies if this is the wrong place to submit this bug.
To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/apparmor/+bug/1525310/+subscriptions
--
Mailing list: https://launchpad.net/~touch-packages
Post to : [email protected]
Unsubscribe : https://launchpad.net/~touch-packages
More help : https://help.launchpad.net/ListHelp