--- Begin Message ---
Package: release.debian.org
Severity: normal
Tags: bullseye
X-Debbugs-Cc: libv...@packages.debian.org
Control: affects -1 + src:libvirt
User: release.debian....@packages.debian.org
Usertags: pu
[ Reason ]
libvirt/7.0.0-3+deb11u2, currently in Bullseye, is vulnerable to several
no-DSA security issues. These issues have been fixed for Buster LTS
(DLA 3778-1), so this is a regression for users upgrading to Bullseye
from Buster.
[ Impact ]
Bullseye users remain vulnerable to these (no-DSA) security issues.
[ Tests ]
AFAICT the code is architected as several orthogonal components and
drivers, and the DEP-8 smoke tests only cover some of them, such as LXC
and QEMU/KVM.
The bulk of the proposed change touches the libxl driver (Xen), which is
not covered by the DEP-8 tests, so I tested that driver more carefully;
I also ran manual tests for KVM on a test hypervisor.
[ Risks ]
All patches are backported from upstream; all except the libxl-related
ones applied cleanly.
[ Checklist ]
[x] *all* changes are documented in the d/changelog
[x] I reviewed all changes and I approve them
[x] attach debdiff against the package in oldstable
[x] the issue is verified as fixed in unstable
[ Changes ]
* Fix CVE-2021-3631: SELinux MCS may be accessed by another machine.
* Fix CVE-2021-3667: Improper locking in the
virStoragePoolLookupByTargetPath API.
* Fix CVE-2021-3975: Use-after-free vulnerability. The
qemuMonitorUnregister() function in qemuProcessHandleMonitorEOF is called
using multiple threads without being adequately protected by a monitor
lock.
* Fix CVE-2021-4147: Deadlock and crash in libxl driver.
* libxl: Fix regression in domain shutdown.
* Fix CVE-2022-0897: Missing locking in nwfilterConnectNumOfNWFilters.
* Fix CVE-2024-1441: Off-by-one error in the udevListInterfacesByStatus()
function (a minimal sketch of the pattern follows this list).
* Fix CVE-2024-2494: Missing check for negative array lengths in RPC server
de-serialization routines.
* Fix CVE-2024-2496: NULL pointer dereference in the
udevConnectListAllInterfaces() function.
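
For reference, the CVE-2024-1441 change below boils down to tightening a
bounds check from '>' to '>='. Here is a minimal sketch of that pattern,
using hypothetical names rather than libvirt's actual code:

/* Hypothetical sketch of the CVE-2024-1441 class of bug: when filling a
 * caller-provided array, the guard must reject count == capacity, not
 * only count > capacity, or the loop writes one entry past the end. */
#include <stddef.h>

static size_t
copy_names(const char **dst, size_t dst_len,
           const char **src, size_t src_len)
{
    size_t count = 0;

    for (size_t i = 0; i < src_len; i++) {
        if (count >= dst_len)   /* with ">" here, dst overflows by one */
            break;
        dst[count++] = src[i];
    }
    return count;
}
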
--
Guilhem.
diffstat for libvirt-7.0.0 libvirt-7.0.0
changelog | 24 +++
patches/CVE-2021-3631.patch | 49 ++++++
patches/CVE-2021-3667.patch | 36 +++++
patches/CVE-2021-3975.patch | 37 +++++
patches/CVE-2021-4147_1.patch | 111 +++++++++++++++
patches/CVE-2021-4147_2.patch | 68 +++++++++
patches/CVE-2021-4147_3.patch | 32 ++++
patches/CVE-2021-4147_4.patch | 145 ++++++++++++++++++++
patches/CVE-2021-4147_5.patch | 172 ++++++++++++++++++++++++
patches/CVE-2021-4147_6.patch | 90 ++++++++++++
patches/CVE-2022-0897.patch | 48 ++++++
patches/CVE-2024-1441.patch | 35 ++++
patches/CVE-2024-2494.patch | 212 ++++++++++++++++++++++++++++++
patches/CVE-2024-2496.patch | 86 ++++++++++++
patches/libxl-Fix-domain-shutdown.patch | 226 ++++++++++++++++++++++++++++++++
patches/series | 14 +
16 files changed, 1385 insertions(+)
diff -Nru libvirt-7.0.0/debian/changelog libvirt-7.0.0/debian/changelog
--- libvirt-7.0.0/debian/changelog 2023-02-06 17:50:14.000000000 +0100
+++ libvirt-7.0.0/debian/changelog 2024-07-30 21:35:28.000000000 +0200
@@ -1,3 +1,27 @@
+libvirt (7.0.0-3+deb11u3) bullseye; urgency=medium
+
+ * Non-maintainer upload.
+ * Fix CVE-2021-3631: SELinux MCS may be accessed by another machine.
+ (Closes: #990709)
+ * Fix CVE-2021-3667: Improper locking in the
+ virStoragePoolLookupByTargetPath API. (Closes: #991594)
+ * Fix CVE-2021-3975: Use-after-free vulnerability. The
+ qemuMonitorUnregister() function in qemuProcessHandleMonitorEOF is called
+ using multiple threads without being adequately protected by a monitor
+ lock.
+ * Fix CVE-2021-4147: Deadlock and crash in libxl driver. (Closes: #1002535)
+ * libxl: Fix regression in domain shutdown.
+ * Fix CVE-2022-0897: Missing locking in nwfilterConnectNumOfNWFilters.
+ (Closes: #1009075)
+ * Fix CVE-2024-1441: Off-by-one error in the udevListInterfacesByStatus()
+ function. (Closes: #1066058)
+ * Fix CVE-2024-2494: Missing check for negative array lengths in RPC server
+ de-serialization routines. (Closes: #1067461)
+ * Fix CVE-2024-2496: NULL pointer dereference in the
+ udevConnectListAllInterfaces() function.
+
+ -- Guilhem Moulin <guil...@debian.org> Tue, 30 Jul 2024 21:35:28 +0200
+
libvirt (7.0.0-3+deb11u2) bullseye; urgency=medium
* [461d540] Fix libxl config test failures.
diff -Nru libvirt-7.0.0/debian/patches/CVE-2021-3631.patch libvirt-7.0.0/debian/patches/CVE-2021-3631.patch
--- libvirt-7.0.0/debian/patches/CVE-2021-3631.patch 1970-01-01 01:00:00.000000000 +0100
+++ libvirt-7.0.0/debian/patches/CVE-2021-3631.patch 2024-07-30 21:35:28.000000000 +0200
@@ -0,0 +1,49 @@
+From: Daniel P. Berrangé <berra...@redhat.com>
+Date: Mon, 28 Jun 2021 13:09:04 +0100
+Subject: security: fix SELinux label generation logic
+
+A process can access a file if the set of MCS categories
+for the file is equal-to *or* a subset-of, the set of
+MCS categories for the process.
+
+If there are two VMs:
+
+ a) svirt_t:s0:c117
+ b) svirt_t:s0:c117,c720
+
+Then VM (b) is able to access files labelled for VM (a).
+
+IOW, we must discard case where the categories are equal
+because that is a subset of many other valid category pairs.
+
+Reviewed-by: Peter Krempa <pkre...@redhat.com>
+Signed-off-by: Daniel P. Berrangé <berra...@redhat.com>
+Origin: https://gitlab.com/libvirt/libvirt/-/commit/15073504dbb624d3f6c911e85557019d3620fdb2
+Bug: https://gitlab.com/libvirt/libvirt/-/issues/153
+Bug-Debian: https://security-tracker.debian.org/tracker/CVE-2021-3631
+Bug-Debian: https://bugs.debian.org/990709
+---
+ src/security/security_selinux.c | 10 +++++++++-
+ 1 file changed, 9 insertions(+), 1 deletion(-)
+
+diff --git a/src/security/security_selinux.c b/src/security/security_selinux.c
+index 2fc6ef2..61a871e 100644
+--- a/src/security/security_selinux.c
++++ b/src/security/security_selinux.c
+@@ -389,7 +389,15 @@ virSecuritySELinuxMCSFind(virSecurityManagerPtr mgr,
+ VIR_DEBUG("Try cat %s:c%d,c%d", sens, c1 + catMin, c2 + catMin);
+
+ if (c1 == c2) {
+- mcs = g_strdup_printf("%s:c%d", sens, catMin + c1);
++ /*
++ * A process can access a file if the set of MCS categories
++ * for the file is equal-to *or* a subset-of, the set of
++ * MCS categories for the process.
++ *
++ * IOW, we must discard case where the categories are equal
++ * because that is a subset of other category pairs.
++ */
++ continue;
+ } else {
+ if (c1 > c2) {
+ int t = c1;
diff -Nru libvirt-7.0.0/debian/patches/CVE-2021-3667.patch libvirt-7.0.0/debian/patches/CVE-2021-3667.patch
--- libvirt-7.0.0/debian/patches/CVE-2021-3667.patch 1970-01-01 01:00:00.000000000 +0100
+++ libvirt-7.0.0/debian/patches/CVE-2021-3667.patch 2024-07-30 21:35:28.000000000 +0200
@@ -0,0 +1,36 @@
+From: Peter Krempa <pkre...@redhat.com>
+Date: Wed, 21 Jul 2021 11:22:25 +0200
+Subject: storage_driver: Unlock object on ACL fail in
+ storagePoolLookupByTargetPath
+
+'virStoragePoolObjListSearch' returns a locked and refed object, thus we
+must release it on ACL permission failure.
+
+Fixes: 7aa0e8c0cb8
+Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1984318
+Signed-off-by: Peter Krempa <pkre...@redhat.com>
+Reviewed-by: Michal Privoznik <mpriv...@redhat.com>
+Origin: https://gitlab.com/libvirt/libvirt/-/commit/447f69dec47e1b0bd15ecd7cd49a9fd3b050fb87
+Bug: https://bugzilla.redhat.com/show_bug.cgi?id=1984318
+Bug-Debian: https://security-tracker.debian.org/tracker/CVE-2021-3667
+Bug-Debian: https://bugs.debian.org/991594
+---
+ src/storage/storage_driver.c | 4 +++-
+ 1 file changed, 3 insertions(+), 1 deletion(-)
+
+diff --git a/src/storage/storage_driver.c b/src/storage/storage_driver.c
+index 16bc53a..2787c16 100644
+--- a/src/storage/storage_driver.c
++++ b/src/storage/storage_driver.c
+@@ -1739,8 +1739,10 @@ storagePoolLookupByTargetPath(virConnectPtr conn,
+
storagePoolLookupByTargetPathCallback,
+ cleanpath))) {
+ def = virStoragePoolObjGetDef(obj);
+- if (virStoragePoolLookupByTargetPathEnsureACL(conn, def) < 0)
++ if (virStoragePoolLookupByTargetPathEnsureACL(conn, def) < 0) {
++ virStoragePoolObjEndAPI(&obj);
+ return NULL;
++ }
+
+ pool = virGetStoragePool(conn, def->name, def->uuid, NULL, NULL);
+ virStoragePoolObjEndAPI(&obj);
diff -Nru libvirt-7.0.0/debian/patches/CVE-2021-3975.patch libvirt-7.0.0/debian/patches/CVE-2021-3975.patch
--- libvirt-7.0.0/debian/patches/CVE-2021-3975.patch 1970-01-01 01:00:00.000000000 +0100
+++ libvirt-7.0.0/debian/patches/CVE-2021-3975.patch 2024-07-30 21:35:28.000000000 +0200
@@ -0,0 +1,37 @@
+From: Peng Liang <liangpen...@huawei.com>
+Date: Wed, 24 Feb 2021 19:28:23 +0800
+Subject: qemu: Add missing lock in qemuProcessHandleMonitorEOF
+
+qemuMonitorUnregister will be called in multiple threads (e.g. threads
+in rpc worker pool and the vm event thread). In some cases, it isn't
+protected by the monitor lock, which may lead to call g_source_unref
+more than one time and a use-after-free problem eventually.
+
+Add the missing lock in qemuProcessHandleMonitorEOF (which is the only
+position missing lock of monitor I found).
+
+Suggested-by: Michal Privoznik <mpriv...@redhat.com>
+Signed-off-by: Peng Liang <liangpen...@huawei.com>
+Signed-off-by: Michal Privoznik <mpriv...@redhat.com>
+Reviewed-by: Michal Privoznik <mpriv...@redhat.com>
+Origin: https://github.com/libvirt/libvirt/commit/1ac703a7d0789e46833f4013a3876c2e3af18ec7
+Bug: https://bugzilla.redhat.com/show_bug.cgi?id=2024326
+Bug-Debian: https://security-tracker.debian.org/tracker/CVE-2021-3975
+---
+ src/qemu/qemu_process.c | 2 ++
+ 1 file changed, 2 insertions(+)
+
+diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
+index 202d867..3f7355f 100644
+--- a/src/qemu/qemu_process.c
++++ b/src/qemu/qemu_process.c
+@@ -317,7 +317,9 @@ qemuProcessHandleMonitorEOF(qemuMonitorPtr mon,
+ /* We don't want this EOF handler to be called over and over while the
+ * thread is waiting for a job.
+ */
++ virObjectLock(mon);
+ qemuMonitorUnregister(mon);
++ virObjectUnlock(mon);
+
+ /* We don't want any cleanup from EOF handler (or any other
+ * thread) to enter qemu namespace. */
diff -Nru libvirt-7.0.0/debian/patches/CVE-2021-4147_1.patch libvirt-7.0.0/debian/patches/CVE-2021-4147_1.patch
--- libvirt-7.0.0/debian/patches/CVE-2021-4147_1.patch 1970-01-01 01:00:00.000000000 +0100
+++ libvirt-7.0.0/debian/patches/CVE-2021-4147_1.patch 2024-07-30 21:35:28.000000000 +0200
@@ -0,0 +1,111 @@
+From: Jim Fehlig <jfeh...@suse.com>
+Date: Fri, 29 Oct 2021 14:16:33 -0600
+Subject: libxl: Disable death events after receiving a shutdown event
+
+The libxl driver will handle all domain destruction and cleanup
+when receiving a domain shutdown event from libxl. Commit fa30ee04a2a
+introduced the ignoreDeathEvent boolean in the DomainObjPrivate struct
+to ignore subsequent death events from libxl. But libxl already provides
+a mechanism to disable death events via libxl_evdisable_domain_death.
+
+This patch partially reverts commit fa30ee04a2a and instead uses
+libxl_evdisable_domain_death to disable subsequent death events when
+processing a shutdown event.
+
+Signed-off-by: Jim Fehlig <jfeh...@suse.com>
+Reviewed-by: Daniel P. Berrangé <berra...@redhat.com>
+Reviewed-by: Ján Tomko <jto...@redhat.com>
+Origin: https://gitlab.com/libvirt/libvirt/-/commit/23b51d7b8ec885e97a9277cf0a6c2833db4636e8
+Bug: https://bugzilla.redhat.com/show_bug.cgi?id=2034195
+Bug-Debian: https://security-tracker.debian.org/tracker/CVE-2021-4147
+Bug-Debian: https://bugs.debian.org/1002535
+---
+ src/libxl/libxl_domain.c | 23 +++++------------------
+ src/libxl/libxl_domain.h | 3 ---
+ 2 files changed, 5 insertions(+), 21 deletions(-)
+
+diff --git a/src/libxl/libxl_domain.c b/src/libxl/libxl_domain.c
+index 63938d5..f97c6da 100644
+--- a/src/libxl/libxl_domain.c
++++ b/src/libxl/libxl_domain.c
+@@ -614,12 +614,6 @@ static void
+ libxlDomainHandleDeath(libxlDriverPrivatePtr driver, virDomainObjPtr vm)
+ {
+ virObjectEventPtr dom_event = NULL;
+- libxlDomainObjPrivatePtr priv = vm->privateData;
+-
+- if (priv->ignoreDeathEvent) {
+- priv->ignoreDeathEvent = false;
+- return;
+- }
+
+ if (libxlDomainObjBeginJob(driver, vm, LIBXL_JOB_MODIFY) < 0)
+ return;
+@@ -667,7 +661,6 @@ libxlDomainEventHandler(void *data, VIR_LIBXL_EVENT_CONST
libxl_event *event)
+ }
+
+ if (event->type == LIBXL_EVENT_TYPE_DOMAIN_SHUTDOWN) {
+- libxlDomainObjPrivatePtr priv = vm->privateData;
+ struct libxlShutdownThreadInfo *shutdown_info = NULL;
+ virThread thread;
+ g_autofree char *name = NULL;
+@@ -684,12 +677,9 @@ libxlDomainEventHandler(void *data, VIR_LIBXL_EVENT_CONST
libxl_event *event)
+ name = g_strdup_printf("ev-%d", event->domid);
+ /*
+ * Cleanup will be handled by the shutdown thread.
+- * Ignore the forthcoming death event from libxl
+ */
+- priv->ignoreDeathEvent = true;
+ if (virThreadCreateFull(&thread, false, libxlDomainShutdownThread,
+ name, false, shutdown_info) < 0) {
+- priv->ignoreDeathEvent = false;
+ /*
+ * Not much we can do on error here except log it.
+ */
+@@ -813,18 +803,17 @@ libxlDomainDestroyInternal(libxlDriverPrivatePtr driver,
+ libxlDomainObjPrivatePtr priv = vm->privateData;
+ int ret = -1;
+
+- /* Ignore next LIBXL_EVENT_TYPE_DOMAIN_DEATH as the caller will handle
+- * domain death appropriately already (having more info, like the reason).
+- */
+- priv->ignoreDeathEvent = true;
++ if (priv->deathW) {
++ libxl_evdisable_domain_death(cfg->ctx, priv->deathW);
++ priv->deathW = NULL;
++ }
++
+ /* Unlock virDomainObj during destroy, which can take considerable
+ * time on large memory domains.
+ */
+ virObjectUnlock(vm);
+ ret = libxl_domain_destroy(cfg->ctx, vm->def->id, NULL);
+ virObjectLock(vm);
+- if (ret)
+- priv->ignoreDeathEvent = false;
+
+ return ret;
+ }
+@@ -877,8 +866,6 @@ libxlDomainCleanup(libxlDriverPrivatePtr driver,
+ priv->deathW = NULL;
+ }
+
+- priv->ignoreDeathEvent = false;
+-
+ if (!!g_atomic_int_dec_and_test(&driver->nactive) &&
driver->inhibitCallback)
+ driver->inhibitCallback(false, driver->inhibitOpaque);
+
+diff --git a/src/libxl/libxl_domain.h b/src/libxl/libxl_domain.h
+index 0068254..e06a88b 100644
+--- a/src/libxl/libxl_domain.h
++++ b/src/libxl/libxl_domain.h
+@@ -62,9 +62,6 @@ struct _libxlDomainObjPrivate {
+ /* console */
+ virChrdevsPtr devs;
+ libxl_evgen_domain_death *deathW;
+- /* Flag to indicate the upcoming LIBXL_EVENT_TYPE_DOMAIN_DEATH is caused
+- * by libvirt and should not be handled separately */
+- bool ignoreDeathEvent;
+ virThreadPtr migrationDstReceiveThr;
+ unsigned short migrationPort;
+ char *lockState;
diff -Nru libvirt-7.0.0/debian/patches/CVE-2021-4147_2.patch libvirt-7.0.0/debian/patches/CVE-2021-4147_2.patch
--- libvirt-7.0.0/debian/patches/CVE-2021-4147_2.patch 1970-01-01 01:00:00.000000000 +0100
+++ libvirt-7.0.0/debian/patches/CVE-2021-4147_2.patch 2024-07-30 21:35:28.000000000 +0200
@@ -0,0 +1,68 @@
+From: Jim Fehlig <jfeh...@suse.com>
+Date: Wed, 24 Nov 2021 11:10:19 -0700
+Subject: libxl: Rename libxlShutdownThreadInfo struct
+
+An upcoming change will use the struct in a thread created to process
+death events. Rename libxlShutdownThreadInfo to libxlEventHandlerThreadInfo
+to reflect the more generic usage.
+
+Signed-off-by: Jim Fehlig <jfeh...@suse.com>
+Reviewed-by: Daniel P. Berrangé <berra...@redhat.com>
+Reviewed-by: Ján Tomko <jto...@redhat.com>
+Origin: https://gitlab.com/libvirt/libvirt/-/commit/a4e6fba069c0809b8b5dde5e9db62d2efd91b4a0
+Bug: https://bugzilla.redhat.com/show_bug.cgi?id=2034195
+Bug-Debian: https://security-tracker.debian.org/tracker/CVE-2021-4147
+Bug-Debian: https://bugs.debian.org/1002535
+---
+ src/libxl/libxl_domain.c | 10 +++++-----
+ 1 file changed, 5 insertions(+), 5 deletions(-)
+
+diff --git a/src/libxl/libxl_domain.c b/src/libxl/libxl_domain.c
+index f97c6da..6ad9ab7 100644
+--- a/src/libxl/libxl_domain.c
++++ b/src/libxl/libxl_domain.c
+@@ -473,7 +473,7 @@ libxlDomainShutdownHandleRestart(libxlDriverPrivatePtr
driver,
+ }
+
+
+-struct libxlShutdownThreadInfo
++struct libxlEventHandlerThreadInfo
+ {
+ libxlDriverPrivatePtr driver;
+ virDomainObjPtr vm;
+@@ -484,7 +484,7 @@ struct libxlShutdownThreadInfo
+ static void
+ libxlDomainShutdownThread(void *opaque)
+ {
+- struct libxlShutdownThreadInfo *shutdown_info = opaque;
++ struct libxlEventHandlerThreadInfo *shutdown_info = opaque;
+ virDomainObjPtr vm = shutdown_info->vm;
+ libxl_event *ev = shutdown_info->event;
+ libxlDriverPrivatePtr driver = shutdown_info->driver;
+@@ -661,7 +661,7 @@ libxlDomainEventHandler(void *data, VIR_LIBXL_EVENT_CONST
libxl_event *event)
+ }
+
+ if (event->type == LIBXL_EVENT_TYPE_DOMAIN_SHUTDOWN) {
+- struct libxlShutdownThreadInfo *shutdown_info = NULL;
++ struct libxlEventHandlerThreadInfo *shutdown_info = NULL;
+ virThread thread;
+ g_autofree char *name = NULL;
+
+@@ -669,7 +669,7 @@ libxlDomainEventHandler(void *data, VIR_LIBXL_EVENT_CONST
libxl_event *event)
+ * Start a thread to handle shutdown. We don't want to be tying up
+ * libxl's event machinery by doing a potentially lengthy shutdown.
+ */
+- shutdown_info = g_new0(struct libxlShutdownThreadInfo, 1);
++ shutdown_info = g_new0(struct libxlEventHandlerThreadInfo, 1);
+
+ shutdown_info->driver = driver;
+ shutdown_info->vm = vm;
+@@ -689,7 +689,7 @@ libxlDomainEventHandler(void *data, VIR_LIBXL_EVENT_CONST
libxl_event *event)
+ }
+ /*
+ * virDomainObjEndAPI is called in the shutdown thread, where
+- * libxlShutdownThreadInfo and libxl_event are also freed.
++ * libxlEventHandlerThreadInfo and libxl_event are also freed.
+ */
+ return;
+ } else if (event->type == LIBXL_EVENT_TYPE_DOMAIN_DEATH) {
diff -Nru libvirt-7.0.0/debian/patches/CVE-2021-4147_3.patch libvirt-7.0.0/debian/patches/CVE-2021-4147_3.patch
--- libvirt-7.0.0/debian/patches/CVE-2021-4147_3.patch 1970-01-01 01:00:00.000000000 +0100
+++ libvirt-7.0.0/debian/patches/CVE-2021-4147_3.patch 2024-07-30 21:35:28.000000000 +0200
@@ -0,0 +1,32 @@
+From: Jim Fehlig <jfeh...@suse.com>
+Date: Wed, 24 Nov 2021 11:16:38 -0700
+Subject: libxl: Modify name of shutdown thread
+
+The current thread name 'ev-<domid>' is a bit terse. Change the name
+to 'shutdown-event-<domid>', allowing it to be distinguished between
+thread handling other event types.
+
+Signed-off-by: Jim Fehlig <jfeh...@suse.com>
+Reviewed-by: Daniel P. Berrangé <berra...@redhat.com>
+Reviewed-by: Ján Tomko <jto...@redhat.com>
+Origin: https://gitlab.com/libvirt/libvirt/-/commit/e4f7589a3ec285489618ca04c8c0230cc31f3d99
+Bug: https://bugzilla.redhat.com/show_bug.cgi?id=2034195
+Bug-Debian: https://security-tracker.debian.org/tracker/CVE-2021-4147
+Bug-Debian: https://bugs.debian.org/1002535
+---
+ src/libxl/libxl_domain.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/src/libxl/libxl_domain.c b/src/libxl/libxl_domain.c
+index 6ad9ab7..2af9d31 100644
+--- a/src/libxl/libxl_domain.c
++++ b/src/libxl/libxl_domain.c
+@@ -674,7 +674,7 @@ libxlDomainEventHandler(void *data, VIR_LIBXL_EVENT_CONST
libxl_event *event)
+ shutdown_info->driver = driver;
+ shutdown_info->vm = vm;
+ shutdown_info->event = (libxl_event *)event;
+- name = g_strdup_printf("ev-%d", event->domid);
++ name = g_strdup_printf("shutdown-event-%d", event->domid);
+ /*
+ * Cleanup will be handled by the shutdown thread.
+ */
diff -Nru libvirt-7.0.0/debian/patches/CVE-2021-4147_4.patch libvirt-7.0.0/debian/patches/CVE-2021-4147_4.patch
--- libvirt-7.0.0/debian/patches/CVE-2021-4147_4.patch 1970-01-01 01:00:00.000000000 +0100
+++ libvirt-7.0.0/debian/patches/CVE-2021-4147_4.patch 2024-07-30 21:35:28.000000000 +0200
@@ -0,0 +1,145 @@
+From: Jim Fehlig <jfeh...@suse.com>
+Date: Wed, 24 Nov 2021 11:36:55 -0700
+Subject: libxl: Handle domain death events in a thread
+
+Similar to domain shutdown events, processing domain death events can be a
+lengthy process and we don't want to block the event handler while the
+operation completes. Move the death handling function to a thread.
+
+Signed-off-by: Jim Fehlig <jfeh...@suse.com>
+Reviewed-by: Daniel P. Berrangé <berra...@redhat.com>
+Reviewed-by: Ján Tomko <jto...@redhat.com>
+Origin: https://gitlab.com/libvirt/libvirt/-/commit/b9a5faea49b7412e26d7389af4c32fc2b3ee80e5
+Bug: https://bugzilla.redhat.com/show_bug.cgi?id=2034195
+Bug-Debian: https://security-tracker.debian.org/tracker/CVE-2021-4147
+Bug-Debian: https://bugs.debian.org/1002535
+---
+ src/libxl/libxl_domain.c | 67 +++++++++++++++++++++++++++++++++---------------
+ 1 file changed, 47 insertions(+), 20 deletions(-)
+
+diff --git a/src/libxl/libxl_domain.c b/src/libxl/libxl_domain.c
+index 2af9d31..f541469 100644
+--- a/src/libxl/libxl_domain.c
++++ b/src/libxl/libxl_domain.c
+@@ -611,12 +611,17 @@ libxlDomainShutdownThread(void *opaque)
+ }
+
+ static void
+-libxlDomainHandleDeath(libxlDriverPrivatePtr driver, virDomainObjPtr vm)
++libxlDomainDeathThread(void *opaque)
+ {
++ struct libxlEventHandlerThreadInfo *death_info = opaque;
++ virDomainObjPtr vm = death_info->vm;
++ libxl_event *ev = death_info->event;
++ libxlDriverPrivatePtr driver = death_info->driver;
+ virObjectEventPtr dom_event = NULL;
++ g_autoptr(libxlDriverConfig) cfg = libxlDriverConfigGet(driver);
+
+ if (libxlDomainObjBeginJob(driver, vm, LIBXL_JOB_MODIFY) < 0)
+- return;
++ goto cleanup;
+
+ virDomainObjSetState(vm, VIR_DOMAIN_SHUTOFF,
VIR_DOMAIN_SHUTOFF_DESTROYED);
+ dom_event = virDomainEventLifecycleNewFromObj(vm,
+@@ -627,6 +632,11 @@ libxlDomainHandleDeath(libxlDriverPrivatePtr driver,
virDomainObjPtr vm)
+ virDomainObjListRemove(driver->domains, vm);
+ libxlDomainObjEndJob(driver, vm);
+ virObjectEventStateQueue(driver->domainEventState, dom_event);
++
++ cleanup:
++ virDomainObjEndAPI(&vm);
++ libxl_event_free(cfg->ctx, ev);
++ VIR_FREE(death_info);
+ }
+
+
+@@ -640,6 +650,9 @@ libxlDomainEventHandler(void *data, VIR_LIBXL_EVENT_CONST
libxl_event *event)
+ libxl_shutdown_reason xl_reason =
event->u.domain_shutdown.shutdown_reason;
+ virDomainObjPtr vm = NULL;
+ g_autoptr(libxlDriverConfig) cfg = NULL;
++ struct libxlEventHandlerThreadInfo *thread_info = NULL;
++ virThread thread;
++ g_autofree char *thread_name = NULL;
+
+ if (event->type != LIBXL_EVENT_TYPE_DOMAIN_SHUTDOWN &&
+ event->type != LIBXL_EVENT_TYPE_DOMAIN_DEATH) {
+@@ -660,31 +673,27 @@ libxlDomainEventHandler(void *data,
VIR_LIBXL_EVENT_CONST libxl_event *event)
+ goto cleanup;
+ }
+
++ /*
++ * Start event-specific threads to handle shutdown and death.
++ * They are potentially lengthy operations and we don't want to be
++ * blocking this event handler while they are in progress.
++ */
+ if (event->type == LIBXL_EVENT_TYPE_DOMAIN_SHUTDOWN) {
+- struct libxlEventHandlerThreadInfo *shutdown_info = NULL;
+- virThread thread;
+- g_autofree char *name = NULL;
+-
+- /*
+- * Start a thread to handle shutdown. We don't want to be tying up
+- * libxl's event machinery by doing a potentially lengthy shutdown.
+- */
+- shutdown_info = g_new0(struct libxlEventHandlerThreadInfo, 1);
++ thread_info = g_new0(struct libxlEventHandlerThreadInfo, 1);
+
+- shutdown_info->driver = driver;
+- shutdown_info->vm = vm;
+- shutdown_info->event = (libxl_event *)event;
+- name = g_strdup_printf("shutdown-event-%d", event->domid);
++ thread_info->driver = driver;
++ thread_info->vm = vm;
++ thread_info->event = (libxl_event *)event;
++ thread_name = g_strdup_printf("shutdown-event-%d", event->domid);
+ /*
+ * Cleanup will be handled by the shutdown thread.
+ */
+ if (virThreadCreateFull(&thread, false, libxlDomainShutdownThread,
+- name, false, shutdown_info) < 0) {
++ thread_name, false, thread_info) < 0) {
+ /*
+ * Not much we can do on error here except log it.
+ */
+ VIR_ERROR(_("Failed to create thread to handle domain shutdown"));
+- VIR_FREE(shutdown_info);
+ goto cleanup;
+ }
+ /*
+@@ -693,15 +702,33 @@ libxlDomainEventHandler(void *data,
VIR_LIBXL_EVENT_CONST libxl_event *event)
+ */
+ return;
+ } else if (event->type == LIBXL_EVENT_TYPE_DOMAIN_DEATH) {
++ thread_info = g_new0(struct libxlEventHandlerThreadInfo, 1);
++
++ thread_info->driver = driver;
++ thread_info->vm = vm;
++ thread_info->event = (libxl_event *)event;
++ thread_name = g_strdup_printf("death-event-%d", event->domid);
+ /*
+- * On death the domain is cleaned up from Xen's perspective.
+- * Cleanup on the libvirt side can be done synchronously.
++ * Cleanup will be handled by the death thread.
+ */
+- libxlDomainHandleDeath(driver, vm);
++ if (virThreadCreateFull(&thread, false, libxlDomainDeathThread,
++ thread_name, false, thread_info) < 0) {
++ /*
++ * Not much we can do on error here except log it.
++ */
++ VIR_ERROR(_("Failed to create thread to handle domain death"));
++ goto cleanup;
++ }
++ /*
++ * virDomainObjEndAPI is called in the death thread, where
++ * libxlEventHandlerThreadInfo and libxl_event are also freed.
++ */
++ return;
+ }
+
+ cleanup:
+ virDomainObjEndAPI(&vm);
++ VIR_FREE(thread_info);
+ cfg = libxlDriverConfigGet(driver);
+ /* Cast away any const */
+ libxl_event_free(cfg->ctx, (libxl_event *)event);
diff -Nru libvirt-7.0.0/debian/patches/CVE-2021-4147_5.patch libvirt-7.0.0/debian/patches/CVE-2021-4147_5.patch
--- libvirt-7.0.0/debian/patches/CVE-2021-4147_5.patch 1970-01-01 01:00:00.000000000 +0100
+++ libvirt-7.0.0/debian/patches/CVE-2021-4147_5.patch 2024-07-30 21:35:28.000000000 +0200
@@ -0,0 +1,172 @@
+From: Jim Fehlig <jfeh...@suse.com>
+Date: Wed, 24 Nov 2021 11:48:51 -0700
+Subject: libxl: Search for virDomainObj in event handler threads
+
+libxl can deliver events and invoke callbacks on any application thread
+calling into libxl. This can cause deadlock in the libvirt libxl driver
+
+Thread 19 (Thread 0x7f31411ec700 (LWP 14068) "libvirtd"):
+#0 0x00007f318520cc7d in __lll_lock_wait () from /lib64/libpthread.so.0
+#1 0x00007f3185205ed5 in pthread_mutex_lock () from /lib64/libpthread.so.0
+#2 0x00007f3189488015 in virMutexLock (m=<optimized out>) at
../../src/util/virthread.c:79
+#3 0x00007f3189463f3b in virObjectLock (anyobj=<optimized out>) at
../../src/util/virobject.c:433
+#4 0x00007f31894f2f41 in virDomainObjListSearchID (payload=0x7f317400a6d0,
name=<optimized out>, data=0x7f31411eaeac) at
../../src/conf/virdomainobjlist.c:105
+#5 0x00007f3189437ac5 in virHashSearch (ctable=0x7f3124025a30,
iter=iter@entry=0x7f31894f2f30 <virDomainObjListSearchID>,
data=data@entry=0x7f31411eaeac, name=name@entry=0x0) at
../../src/util/virhash.c:745
+#6 0x00007f31894f3919 in virDomainObjListFindByID (doms=0x7f3124025430,
id=<optimized out>) at ../../src/conf/virdomainobjlist.c:121
+#7 0x00007f3152f292e5 in libxlDomainEventHandler (data=0x7f3124023d80,
event=0x7f310c010ae0) at ../../src/libxl/libxl_domain.c:660
+#8 0x00007f3152c6ff5d in egc_run_callbacks (egc=egc@entry=0x7f31411eaf50) at
libxl_event.c:1427
+#9 0x00007f3152c718bd in libxl__egc_cleanup (egc=0x7f31411eaf50) at
libxl_event.c:1458
+#10 libxl__ao_inprogress (ao=ao@entry=0x7f310c00b8a0,
file=file@entry=0x7f3152cce987 "libxl_domain.c", line=line@entry=730,
func=func@entry=0x7f3152ccf750 <__func__.22238> "libxl_domain_unpause") at
libxl_event.c:2047
+#11 0x00007f3152c8c5b8 in libxl_domain_unpause (ctx=0x7f3124015a40,
domid=<optimized out>, ao_how=ao_how@entry=0x0) at libxl_domain.c:730
+#12 0x00007f3152f2a584 in libxl_domain_unpause_0x041200 (domid=<optimized
out>, ctx=<optimized out>) at /usr/include/libxl.h:1756
+#13 libxlDomainStart (driver=driver@entry=0x7f3124023d80,
vm=vm@entry=0x7f317400a6d0, start_paused=start_paused@entry=false,
restore_fd=restore_fd@entry=-1, restore_ver=<optimized out>,
restore_ver@entry=2) at ../../src/libxl/libxl_domain.c:1482
+#14 0x00007f3152f2a6e3 in libxlDomainStartNew
(driver=driver@entry=0x7f3124023d80, vm=vm@entry=0x7f317400a6d0,
start_paused=start_paused@entry=false) at ../../src/libxl/libxl_domain.c:1545
+#15 0x00007f3152f2a789 in libxlDomainShutdownHandleRestart
(driver=0x7f3124023d80, vm=0x7f317400a6d0) at ../../src/libxl/libxl_domain.c:464
+#16 0x00007f3152f2a9e4 in libxlDomainShutdownThread (opaque=<optimized out>)
at ../../src/libxl/libxl_domain.c:559
+#17 0x00007f3189487ee2 in virThreadHelper (data=<optimized out>) at
../../src/util/virthread.c:196
+#18 0x00007f3185203539 in start_thread () from /lib64/libpthread.so.0
+#19 0x00007f3184f3becf in clone () from /lib64/libc.so.6
+
+Frame 16 runs a thread created to handle domain shutdown processing for
+domid 28712. In this case the event contained the reboot reason, so the
+old domain is destroyed and a new one is created by libxlDomainStart new.
+After starting the domain, it is unpaused by calling libxl_domain_unpause
+in frame 12. While the thread is running within libxl, libxl takes the
+opportunity to deliver a pending domain shutdown event for unrelated domid
+28710. While searching for the associated virDomainObj by ID, a deadlock is
+encountered when attempting to lock the virDomainObj for domid 28712, which
+is already locked since this thread is processing its shutdown event.
+
+The deadlock can be avoided by moving the search for a virDomainObj
+associated with the event domid to the shutdown thread. The same is done
+for the death thread.
+
+Signed-off-by: Jim Fehlig <jfeh...@suse.com>
+Reviewed-by: Daniel P. Berrangé <berra...@redhat.com>
+Reviewed-by: Ján Tomko <jto...@redhat.com>
+Origin: https://gitlab.com/libvirt/libvirt/-/commit/5c5df5310f72be4878a71ace47074c54e0d1a27d
+Bug: https://bugzilla.redhat.com/show_bug.cgi?id=2034195
+Bug-Debian: https://security-tracker.debian.org/tracker/CVE-2021-4147
+Bug-Debian: https://bugs.debian.org/1002535
+---
+ src/libxl/libxl_domain.c | 35 ++++++++++++++++++-----------------
+ 1 file changed, 18 insertions(+), 17 deletions(-)
+
+diff --git a/src/libxl/libxl_domain.c b/src/libxl/libxl_domain.c
+index f541469..0127211 100644
+--- a/src/libxl/libxl_domain.c
++++ b/src/libxl/libxl_domain.c
+@@ -476,7 +476,6 @@ libxlDomainShutdownHandleRestart(libxlDriverPrivatePtr
driver,
+ struct libxlEventHandlerThreadInfo
+ {
+ libxlDriverPrivatePtr driver;
+- virDomainObjPtr vm;
+ libxl_event *event;
+ };
+
+@@ -485,7 +484,7 @@ static void
+ libxlDomainShutdownThread(void *opaque)
+ {
+ struct libxlEventHandlerThreadInfo *shutdown_info = opaque;
+- virDomainObjPtr vm = shutdown_info->vm;
++ virDomainObjPtr vm = NULL;
+ libxl_event *ev = shutdown_info->event;
+ libxlDriverPrivatePtr driver = shutdown_info->driver;
+ virObjectEventPtr dom_event = NULL;
+@@ -495,6 +494,12 @@ libxlDomainShutdownThread(void *opaque)
+
+ libxl_domain_config_init(&d_config);
+
++ vm = virDomainObjListFindByID(driver->domains, ev->domid);
++ if (!vm) {
++ /* Nothing to do if we can't find the virDomainObj */
++ goto cleanup;
++ }
++
+ if (libxlDomainObjBeginJob(driver, vm, LIBXL_JOB_MODIFY) < 0)
+ goto cleanup;
+
+@@ -614,12 +619,18 @@ static void
+ libxlDomainDeathThread(void *opaque)
+ {
+ struct libxlEventHandlerThreadInfo *death_info = opaque;
+- virDomainObjPtr vm = death_info->vm;
++ virDomainObjPtr vm = NULL;
+ libxl_event *ev = death_info->event;
+ libxlDriverPrivatePtr driver = death_info->driver;
+ virObjectEventPtr dom_event = NULL;
+ g_autoptr(libxlDriverConfig) cfg = libxlDriverConfigGet(driver);
+
++ vm = virDomainObjListFindByID(driver->domains, ev->domid);
++ if (!vm) {
++ /* Nothing to do if we can't find the virDomainObj */
++ goto cleanup;
++ }
++
+ if (libxlDomainObjBeginJob(driver, vm, LIBXL_JOB_MODIFY) < 0)
+ goto cleanup;
+
+@@ -648,7 +659,6 @@ libxlDomainEventHandler(void *data, VIR_LIBXL_EVENT_CONST
libxl_event *event)
+ {
+ libxlDriverPrivatePtr driver = data;
+ libxl_shutdown_reason xl_reason =
event->u.domain_shutdown.shutdown_reason;
+- virDomainObjPtr vm = NULL;
+ g_autoptr(libxlDriverConfig) cfg = NULL;
+ struct libxlEventHandlerThreadInfo *thread_info = NULL;
+ virThread thread;
+@@ -667,12 +677,6 @@ libxlDomainEventHandler(void *data, VIR_LIBXL_EVENT_CONST
libxl_event *event)
+ if (xl_reason == LIBXL_SHUTDOWN_REASON_SUSPEND)
+ goto cleanup;
+
+- vm = virDomainObjListFindByID(driver->domains, event->domid);
+- if (!vm) {
+- /* Nothing to do if we can't find the virDomainObj */
+- goto cleanup;
+- }
+-
+ /*
+ * Start event-specific threads to handle shutdown and death.
+ * They are potentially lengthy operations and we don't want to be
+@@ -682,7 +686,6 @@ libxlDomainEventHandler(void *data, VIR_LIBXL_EVENT_CONST
libxl_event *event)
+ thread_info = g_new0(struct libxlEventHandlerThreadInfo, 1);
+
+ thread_info->driver = driver;
+- thread_info->vm = vm;
+ thread_info->event = (libxl_event *)event;
+ thread_name = g_strdup_printf("shutdown-event-%d", event->domid);
+ /*
+@@ -697,15 +700,14 @@ libxlDomainEventHandler(void *data,
VIR_LIBXL_EVENT_CONST libxl_event *event)
+ goto cleanup;
+ }
+ /*
+- * virDomainObjEndAPI is called in the shutdown thread, where
+- * libxlEventHandlerThreadInfo and libxl_event are also freed.
++ * libxlEventHandlerThreadInfo and libxl_event are freed in the
++ * shutdown thread
+ */
+ return;
+ } else if (event->type == LIBXL_EVENT_TYPE_DOMAIN_DEATH) {
+ thread_info = g_new0(struct libxlEventHandlerThreadInfo, 1);
+
+ thread_info->driver = driver;
+- thread_info->vm = vm;
+ thread_info->event = (libxl_event *)event;
+ thread_name = g_strdup_printf("death-event-%d", event->domid);
+ /*
+@@ -720,14 +722,13 @@ libxlDomainEventHandler(void *data,
VIR_LIBXL_EVENT_CONST libxl_event *event)
+ goto cleanup;
+ }
+ /*
+- * virDomainObjEndAPI is called in the death thread, where
+- * libxlEventHandlerThreadInfo and libxl_event are also freed.
++ * libxlEventHandlerThreadInfo and libxl_event are freed in the
++ * death thread
+ */
+ return;
+ }
+
+ cleanup:
+- virDomainObjEndAPI(&vm);
+ VIR_FREE(thread_info);
+ cfg = libxlDriverConfigGet(driver);
+ /* Cast away any const */
diff -Nru libvirt-7.0.0/debian/patches/CVE-2021-4147_6.patch libvirt-7.0.0/debian/patches/CVE-2021-4147_6.patch
--- libvirt-7.0.0/debian/patches/CVE-2021-4147_6.patch 1970-01-01 01:00:00.000000000 +0100
+++ libvirt-7.0.0/debian/patches/CVE-2021-4147_6.patch 2024-07-30 21:35:28.000000000 +0200
@@ -0,0 +1,90 @@
+From: Jim Fehlig <jfeh...@suse.com>
+Date: Thu, 18 Nov 2021 12:03:20 -0700
+Subject: libxl: Protect access to libxlLogger files hash table
+
+The hash table of log file objects in libxlLogger is not protected against
+concurrent access. It is possible for one thread to remove an entry while
+another is updating it. Add a mutex to the libxlLogger object and lock it
+when accessing the files hash table.
+
+Signed-off-by: Jim Fehlig <jfeh...@suse.com>
+Reviewed-by: Daniel P. Berrangé <berra...@redhat.com>
+Reviewed-by: Ján Tomko <jto...@redhat.com>
+Origin: https://gitlab.com/libvirt/libvirt/-/commit/a7a03324d86e111f81687b5315b8f296dde84340
+Bug: https://bugzilla.redhat.com/show_bug.cgi?id=2034195
+Bug-Debian: https://security-tracker.debian.org/tracker/CVE-2021-4147
+Bug-Debian: https://bugs.debian.org/1002535
+---
+ src/libxl/libxl_logger.c | 14 ++++++++++++++
+ 1 file changed, 14 insertions(+)
+
+diff --git a/src/libxl/libxl_logger.c b/src/libxl/libxl_logger.c
+index 93a9c76..4113d67 100644
+--- a/src/libxl/libxl_logger.c
++++ b/src/libxl/libxl_logger.c
+@@ -28,6 +28,7 @@
+ #include "util/virfile.h"
+ #include "util/virhash.h"
+ #include "util/virstring.h"
++#include "util/virthread.h"
+ #include "util/virtime.h"
+
+ #define VIR_FROM_THIS VIR_FROM_LIBXL
+@@ -43,6 +44,7 @@ struct xentoollog_logger_libvirt {
+
+ /* map storing the opened fds: "domid" -> FILE* */
+ GHashTable *files;
++ virMutex tableLock;
+ FILE *defaultLogFile;
+ };
+
+@@ -85,7 +87,9 @@ libvirt_vmessage(xentoollog_logger *logger_in,
+ start = start + 9;
+ *end = '\0';
+
++ virMutexLock(&lg->tableLock);
+ domainLogFile = virHashLookup(lg->files, start);
++ virMutexUnlock(&lg->tableLock);
+ if (domainLogFile)
+ logFile = domainLogFile;
+
+@@ -161,6 +165,11 @@ libxlLoggerNew(const char *logDir, virLogPriority
minLevel)
+ if ((logger.defaultLogFile = fopen(path, "a")) == NULL)
+ goto error;
+
++ if (virMutexInit(&logger.tableLock) < 0) {
++ VIR_FORCE_FCLOSE(logger.defaultLogFile);
++ goto error;
++ }
++
+ logger_out = XTL_NEW_LOGGER(libvirt, logger);
+
+ cleanup:
+@@ -179,6 +188,7 @@ libxlLoggerFree(libxlLoggerPtr logger)
+ if (logger->defaultLogFile)
+ VIR_FORCE_FCLOSE(logger->defaultLogFile);
+ virHashFree(logger->files);
++ virMutexDestroy(&logger->tableLock);
+ xtl_logger_destroy(xtl_logger);
+ }
+
+@@ -200,7 +210,9 @@ libxlLoggerOpenFile(libxlLoggerPtr logger,
+ path, g_strerror(errno));
+ goto cleanup;
+ }
++ virMutexLock(&logger->tableLock);
+ ignore_value(virHashAddEntry(logger->files, domidstr, logFile));
++ virMutexUnlock(&logger->tableLock);
+
+ /* domain_config is non NULL only when starting a new domain */
+ if (domain_config) {
+@@ -219,7 +231,9 @@ libxlLoggerCloseFile(libxlLoggerPtr logger, int id)
+ char *domidstr = NULL;
+ domidstr = g_strdup_printf("%d", id);
+
++ virMutexLock(&logger->tableLock);
+ ignore_value(virHashRemoveEntry(logger->files, domidstr));
++ virMutexUnlock(&logger->tableLock);
+
+ VIR_FREE(domidstr);
+ }
diff -Nru libvirt-7.0.0/debian/patches/CVE-2022-0897.patch libvirt-7.0.0/debian/patches/CVE-2022-0897.patch
--- libvirt-7.0.0/debian/patches/CVE-2022-0897.patch 1970-01-01 01:00:00.000000000 +0100
+++ libvirt-7.0.0/debian/patches/CVE-2022-0897.patch 2024-07-30 21:35:28.000000000 +0200
@@ -0,0 +1,48 @@
+From: Daniel P. Berrangé <berra...@redhat.com>
+Date: Tue, 8 Mar 2022 17:28:38 +0000
+Subject: nwfilter: fix crash when counting number of network filters
+
+The virNWFilterObjListNumOfNWFilters method iterates over the
+driver->nwfilters, accessing virNWFilterObj instances. As such
+it needs to be protected against concurrent modification of
+the driver->nwfilters object.
+
+This API allows unprivileged users to connect, so users with
+read-only access to libvirt can cause a denial of service
+crash if they are able to race with a call of virNWFilterUndefine.
+Since network filters are usually statically defined, this is
+considered a low severity problem.
+
+This is assigned CVE-2022-0897.
+
+Reviewed-by: Eric Blake <ebl...@redhat.com>
+Signed-off-by: Daniel P. Berrangé <berra...@redhat.com>
+Origin: https://gitlab.com/libvirt/libvirt/-/commit/a4947e8f63c3e6b7b067b444f3d6cf674c0d7f36
+Bug-Debian: https://security-tracker.debian.org/tracker/CVE-2022-0897
+Bug-Debian: https://bugs.debian.org/1009075
+---
+ src/nwfilter/nwfilter_driver.c | 8 ++++++--
+ 1 file changed, 6 insertions(+), 2 deletions(-)
+
+diff --git a/src/nwfilter/nwfilter_driver.c b/src/nwfilter/nwfilter_driver.c
+index 1b8e3db..4b09518 100644
+--- a/src/nwfilter/nwfilter_driver.c
++++ b/src/nwfilter/nwfilter_driver.c
+@@ -478,11 +478,15 @@ nwfilterLookupByName(virConnectPtr conn,
+ static int
+ nwfilterConnectNumOfNWFilters(virConnectPtr conn)
+ {
++ int ret;
+ if (virConnectNumOfNWFiltersEnsureACL(conn) < 0)
+ return -1;
+
+- return virNWFilterObjListNumOfNWFilters(driver->nwfilters, conn,
+- virConnectNumOfNWFiltersCheckACL);
++ nwfilterDriverLock();
++ ret = virNWFilterObjListNumOfNWFilters(driver->nwfilters, conn,
++ virConnectNumOfNWFiltersCheckACL);
++ nwfilterDriverUnlock();
++ return ret;
+ }
+
+
diff -Nru libvirt-7.0.0/debian/patches/CVE-2024-1441.patch libvirt-7.0.0/debian/patches/CVE-2024-1441.patch
--- libvirt-7.0.0/debian/patches/CVE-2024-1441.patch 1970-01-01 01:00:00.000000000 +0100
+++ libvirt-7.0.0/debian/patches/CVE-2024-1441.patch 2024-07-30 21:35:28.000000000 +0200
@@ -0,0 +1,35 @@
+From: Martin Kletzander <mklet...@redhat.com>
+Date: Tue, 27 Feb 2024 16:20:12 +0100
+Subject: Fix off-by-one error in udevListInterfacesByStatus
+
+Ever since this function was introduced in 2012 it could've tried
+filling in an extra interface name. That was made worse in 2019 when
+the caller functions started accepting NULL arrays of size 0.
+
+This is assigned CVE-2024-1441.
+
+Signed-off-by: Martin Kletzander <mklet...@redhat.com>
+Reported-by: Alexander Kuznetsov <kuznetso...@altlinux.org>
+Fixes: 5a33366f5c0b18c93d161bd144f9f079de4ac8ca
+Fixes: d6064e2759a24e0802f363e3a810dc5a7d7ebb15
+Reviewed-by: Ján Tomko <jto...@redhat.com>
+Origin: https://gitlab.com/libvirt/libvirt/-/commit/c664015fe3a7bf59db26686e9ed69af011c6ebb8
+Bug-Debian: https://security-tracker.debian.org/tracker/CVE-2024-1441
+Bug-Debian: https://bugs.debian.org/1066058
+---
+ src/interface/interface_backend_udev.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/src/interface/interface_backend_udev.c b/src/interface/interface_backend_udev.c
+index 6a94a45..65a5244 100644
+--- a/src/interface/interface_backend_udev.c
++++ b/src/interface/interface_backend_udev.c
+@@ -221,7 +221,7 @@ udevListInterfacesByStatus(virConnectPtr conn,
+ virInterfaceDefPtr def;
+
+ /* Ensure we won't exceed the size of our array */
+- if (count > names_len)
++ if (count >= names_len)
+ break;
+
+ path = udev_list_entry_get_name(dev_entry);
diff -Nru libvirt-7.0.0/debian/patches/CVE-2024-2494.patch libvirt-7.0.0/debian/patches/CVE-2024-2494.patch
--- libvirt-7.0.0/debian/patches/CVE-2024-2494.patch 1970-01-01 01:00:00.000000000 +0100
+++ libvirt-7.0.0/debian/patches/CVE-2024-2494.patch 2024-07-30 21:35:28.000000000 +0200
@@ -0,0 +1,212 @@
+From: Daniel P. Berrangé <berra...@redhat.com>
+Date: Fri, 15 Mar 2024 10:47:50 +0000
+Subject: remote: check for negative array lengths before allocation
+
+While the C API entry points will validate non-negative lengths
+for various parameters, the RPC server de-serialization code
+will need to allocate memory for arrays before entering the C
+API. These allocations will thus happen before the non-negative
+length check is performed.
+
+Passing a negative length to the g_new0 function will usually
+result in a crash due to the negative length being treated as
+a huge positive number.
+
+This was found and diagnosed by ALT Linux Team with AFLplusplus.
+
+Reviewed-by: Michal Privoznik <mpriv...@redhat.com>
+Found-by: Alexandr Shashkin <duty...@altlinux.org>
+Co-developed-by: Alexander Kuznetsov <kuznetso...@altlinux.org>
+Signed-off-by: Daniel P. Berrangé <berra...@redhat.com>
+Origin: https://gitlab.com/libvirt/libvirt/-/commit/8a3f8d957507c1f8223fdcf25a3ff885b15557f2
+Bug: https://bugzilla.redhat.com/show_bug.cgi?id=2270115
+Bug-Debian: https://security-tracker.debian.org/tracker/CVE-2024-2494
+Bug-Debian: https://bugs.debian.org/1067461
+---
+ src/remote/remote_daemon_dispatch.c | 65 +++++++++++++++++++++++++++++++++++++
+ src/rpc/gendispatch.pl | 5 +++
+ 2 files changed, 70 insertions(+)
+
+diff --git a/src/remote/remote_daemon_dispatch.c b/src/remote/remote_daemon_dispatch.c
+index 46683aa..4def952 100644
+--- a/src/remote/remote_daemon_dispatch.c
++++ b/src/remote/remote_daemon_dispatch.c
+@@ -2330,6 +2330,10 @@
remoteDispatchDomainGetSchedulerParameters(virNetServerPtr server G_GNUC_UNUSED,
+ if (!conn)
+ goto cleanup;
+
++ if (args->nparams < 0) {
++ virReportError(VIR_ERR_INTERNAL_ERROR, "%s", _("nparams must be
non-negative"));
++ goto cleanup;
++ }
+ if (args->nparams > REMOTE_DOMAIN_SCHEDULER_PARAMETERS_MAX) {
+ virReportError(VIR_ERR_INTERNAL_ERROR, "%s", _("nparams too large"));
+ goto cleanup;
+@@ -2378,6 +2382,10 @@
remoteDispatchDomainGetSchedulerParametersFlags(virNetServerPtr server G_GNUC_UN
+ if (!conn)
+ goto cleanup;
+
++ if (args->nparams < 0) {
++ virReportError(VIR_ERR_INTERNAL_ERROR, "%s", _("nparams must be
non-negative"));
++ goto cleanup;
++ }
+ if (args->nparams > REMOTE_DOMAIN_SCHEDULER_PARAMETERS_MAX) {
+ virReportError(VIR_ERR_INTERNAL_ERROR, "%s", _("nparams too large"));
+ goto cleanup;
+@@ -2536,6 +2544,10 @@ remoteDispatchDomainBlockStatsFlags(virNetServerPtr
server G_GNUC_UNUSED,
+ goto cleanup;
+ flags = args->flags;
+
++ if (args->nparams < 0) {
++ virReportError(VIR_ERR_INTERNAL_ERROR, "%s", _("nparams must be
non-negative"));
++ goto cleanup;
++ }
+ if (args->nparams > REMOTE_DOMAIN_BLOCK_STATS_PARAMETERS_MAX) {
+ virReportError(VIR_ERR_INTERNAL_ERROR, "%s", _("nparams too large"));
+ goto cleanup;
+@@ -2764,6 +2776,14 @@ remoteDispatchDomainGetVcpuPinInfo(virNetServerPtr
server G_GNUC_UNUSED,
+ if (!(dom = get_nonnull_domain(conn, args->dom)))
+ goto cleanup;
+
++ if (args->ncpumaps < 0) {
++ virReportError(VIR_ERR_INTERNAL_ERROR, "%s", _("ncpumaps must be
non-negative"));
++ goto cleanup;
++ }
++ if (args->maplen < 0) {
++ virReportError(VIR_ERR_INTERNAL_ERROR, "%s", _("maplen must be
non-negative"));
++ goto cleanup;
++ }
+ if (args->ncpumaps > REMOTE_VCPUINFO_MAX) {
+ virReportError(VIR_ERR_INTERNAL_ERROR, "%s", _("ncpumaps >
REMOTE_VCPUINFO_MAX"));
+ goto cleanup;
+@@ -2858,6 +2878,11 @@ remoteDispatchDomainGetEmulatorPinInfo(virNetServerPtr
server G_GNUC_UNUSED,
+ if (!(dom = get_nonnull_domain(conn, args->dom)))
+ goto cleanup;
+
++ if (args->maplen < 0) {
++ virReportError(VIR_ERR_INTERNAL_ERROR, "%s", _("maplen must be
non-negative"));
++ goto cleanup;
++ }
++
+ /* Allocate buffers to take the results */
+ if (args->maplen > 0)
+ cpumaps = g_new0(unsigned char, args->maplen);
+@@ -2905,6 +2930,14 @@ remoteDispatchDomainGetVcpus(virNetServerPtr server
G_GNUC_UNUSED,
+ if (!(dom = get_nonnull_domain(conn, args->dom)))
+ goto cleanup;
+
++ if (args->maxinfo < 0) {
++ virReportError(VIR_ERR_INTERNAL_ERROR, "%s", _("maxinfo must be
non-negative"));
++ goto cleanup;
++ }
++ if (args->maplen < 0) {
++ virReportError(VIR_ERR_INTERNAL_ERROR, "%s", _("maplen must be
non-negative"));
++ goto cleanup;
++ }
+ if (args->maxinfo > REMOTE_VCPUINFO_MAX) {
+ virReportError(VIR_ERR_INTERNAL_ERROR, "%s", _("maxinfo >
REMOTE_VCPUINFO_MAX"));
+ goto cleanup;
+@@ -3145,6 +3178,10 @@ remoteDispatchDomainGetMemoryParameters(virNetServerPtr
server G_GNUC_UNUSED,
+
+ flags = args->flags;
+
++ if (args->nparams < 0) {
++ virReportError(VIR_ERR_INTERNAL_ERROR, "%s", _("nparams must be
non-negative"));
++ goto cleanup;
++ }
+ if (args->nparams > REMOTE_DOMAIN_MEMORY_PARAMETERS_MAX) {
+ virReportError(VIR_ERR_INTERNAL_ERROR, "%s", _("nparams too large"));
+ goto cleanup;
+@@ -3205,6 +3242,10 @@ remoteDispatchDomainGetNumaParameters(virNetServerPtr
server G_GNUC_UNUSED,
+
+ flags = args->flags;
+
++ if (args->nparams < 0) {
++ virReportError(VIR_ERR_INTERNAL_ERROR, "%s", _("nparams must be
non-negative"));
++ goto cleanup;
++ }
+ if (args->nparams > REMOTE_DOMAIN_NUMA_PARAMETERS_MAX) {
+ virReportError(VIR_ERR_INTERNAL_ERROR, "%s", _("nparams too large"));
+ goto cleanup;
+@@ -3265,6 +3306,10 @@ remoteDispatchDomainGetBlkioParameters(virNetServerPtr
server G_GNUC_UNUSED,
+
+ flags = args->flags;
+
++ if (args->nparams < 0) {
++ virReportError(VIR_ERR_INTERNAL_ERROR, "%s", _("nparams must be
non-negative"));
++ goto cleanup;
++ }
+ if (args->nparams > REMOTE_DOMAIN_BLKIO_PARAMETERS_MAX) {
+ virReportError(VIR_ERR_INTERNAL_ERROR, "%s", _("nparams too large"));
+ goto cleanup;
+@@ -3326,6 +3371,10 @@ remoteDispatchNodeGetCPUStats(virNetServerPtr server
G_GNUC_UNUSED,
+
+ flags = args->flags;
+
++ if (args->nparams < 0) {
++ virReportError(VIR_ERR_INTERNAL_ERROR, "%s", _("nparams must be
non-negative"));
++ goto cleanup;
++ }
+ if (args->nparams > REMOTE_NODE_CPU_STATS_MAX) {
+ virReportError(VIR_ERR_INTERNAL_ERROR, "%s", _("nparams too large"));
+ goto cleanup;
+@@ -3393,6 +3442,10 @@ remoteDispatchNodeGetMemoryStats(virNetServerPtr server
G_GNUC_UNUSED,
+
+ flags = args->flags;
+
++ if (args->nparams < 0) {
++ virReportError(VIR_ERR_INTERNAL_ERROR, "%s", _("nparams must be
non-negative"));
++ goto cleanup;
++ }
+ if (args->nparams > REMOTE_NODE_MEMORY_STATS_MAX) {
+ virReportError(VIR_ERR_INTERNAL_ERROR, "%s", _("nparams too large"));
+ goto cleanup;
+@@ -3573,6 +3626,10 @@ remoteDispatchDomainGetBlockIoTune(virNetServerPtr
server G_GNUC_UNUSED,
+ if (!conn)
+ goto cleanup;
+
++ if (args->nparams < 0) {
++ virReportError(VIR_ERR_INTERNAL_ERROR, "%s", _("nparams must be
non-negative"));
++ goto cleanup;
++ }
+ if (args->nparams > REMOTE_DOMAIN_BLOCK_IO_TUNE_PARAMETERS_MAX) {
+ virReportError(VIR_ERR_INTERNAL_ERROR, "%s", _("nparams too large"));
+ goto cleanup;
+@@ -5117,6 +5174,10 @@
remoteDispatchDomainGetInterfaceParameters(virNetServerPtr server G_GNUC_UNUSED,
+
+ flags = args->flags;
+
++ if (args->nparams < 0) {
++ virReportError(VIR_ERR_INTERNAL_ERROR, "%s", _("nparams must be
non-negative"));
++ goto cleanup;
++ }
+ if (args->nparams > REMOTE_DOMAIN_INTERFACE_PARAMETERS_MAX) {
+ virReportError(VIR_ERR_INTERNAL_ERROR, "%s", _("nparams too large"));
+ goto cleanup;
+@@ -5337,6 +5398,10 @@ remoteDispatchNodeGetMemoryParameters(virNetServerPtr
server G_GNUC_UNUSED,
+
+ flags = args->flags;
+
++ if (args->nparams < 0) {
++ virReportError(VIR_ERR_INTERNAL_ERROR, "%s", _("nparams must be
non-negative"));
++ goto cleanup;
++ }
+ if (args->nparams > REMOTE_NODE_MEMORY_PARAMETERS_MAX) {
+ virReportError(VIR_ERR_INTERNAL_ERROR, "%s", _("nparams too large"));
+ goto cleanup;
+diff --git a/src/rpc/gendispatch.pl b/src/rpc/gendispatch.pl
+index 0020273..84b239c 100755
+--- a/src/rpc/gendispatch.pl
++++ b/src/rpc/gendispatch.pl
+@@ -1073,6 +1073,11 @@ elsif ($mode eq "server") {
+ print "\n";
+
+ if ($single_ret_as_list) {
++ print " if (args->$single_ret_list_max_var < 0) {\n";
++ print " virReportError(VIR_ERR_RPC,\n";
++ print " \"%s\",
_(\"max$single_ret_list_name must be non-negative\"));\n";
++ print " goto cleanup;\n";
++ print " }\n";
+ print " if (args->$single_ret_list_max_var >
$single_ret_list_max_define) {\n";
+ print " virReportError(VIR_ERR_RPC,\n";
+ print " \"%s\",
_(\"max$single_ret_list_name > $single_ret_list_max_define\"));\n";
diff -Nru libvirt-7.0.0/debian/patches/CVE-2024-2496.patch libvirt-7.0.0/debian/patches/CVE-2024-2496.patch
--- libvirt-7.0.0/debian/patches/CVE-2024-2496.patch 1970-01-01 01:00:00.000000000 +0100
+++ libvirt-7.0.0/debian/patches/CVE-2024-2496.patch 2024-07-30 21:35:28.000000000 +0200
@@ -0,0 +1,86 @@
+From: Dmitry Frolov <fro...@swemel.ru>
+Date: Tue, 12 Sep 2023 15:56:47 +0300
+Subject: interface: fix udev_device_get_sysattr_value return value check
+
+Reviewing the code I found that return value of function
+udev_device_get_sysattr_value() is dereferenced without a check.
+udev_device_get_sysattr_value() may return NULL by number of reasons.
+
+v2: VIR_DEBUG added, replaced STREQ(NULLSTR()) with STREQ_NULLABLE()
+v3: More checks added, to skip earlier. More verbose VIR_DEBUG.
+
+Signed-off-by: Dmitry Frolov <fro...@swemel.ru>
+Reviewed-by: Martin Kletzander <mklet...@redhat.com>
+Origin: https://gitlab.com/libvirt/libvirt/-/commit/2ca94317ac642a70921947150ced8acc674ccdc8
+Bug-Debian: https://security-tracker.debian.org/tracker/CVE-2024-2496
+---
+ src/interface/interface_backend_udev.c | 26 +++++++++++++++++++-------
+ 1 file changed, 19 insertions(+), 7 deletions(-)
+
+diff --git a/src/interface/interface_backend_udev.c b/src/interface/interface_backend_udev.c
+index 65a5244..74b24e8 100644
+--- a/src/interface/interface_backend_udev.c
++++ b/src/interface/interface_backend_udev.c
+@@ -23,6 +23,7 @@
+ #include <dirent.h>
+ #include <libudev.h>
+
++#include "virlog.h"
+ #include "virerror.h"
+ #include "virfile.h"
+ #include "datatypes.h"
+@@ -41,6 +42,8 @@
+
+ #define VIR_FROM_THIS VIR_FROM_INTERFACE
+
++VIR_LOG_INIT("interface.interface_backend_udev");
++
+ struct udev_iface_driver {
+ struct udev *udev;
+ /* pid file FD, ensures two copies of the driver can't use the same root
*/
+@@ -357,11 +360,20 @@ udevConnectListAllInterfaces(virConnectPtr conn,
+ const char *macaddr;
+ virInterfaceDefPtr def;
+
+- path = udev_list_entry_get_name(dev_entry);
+- dev = udev_device_new_from_syspath(udev, path);
+- name = udev_device_get_sysname(dev);
++ if (!(path = udev_list_entry_get_name(dev_entry))) {
++ VIR_DEBUG("Skipping interface, path == NULL");
++ continue;
++ }
++ if (!(dev = udev_device_new_from_syspath(udev, path))) {
++ VIR_DEBUG("Skipping interface '%s', dev == NULL", path);
++ continue;
++ }
++ if (!(name = udev_device_get_sysname(dev))) {
++ VIR_DEBUG("Skipping interface '%s', name == NULL", path);
++ continue;
++ }
+ macaddr = udev_device_get_sysattr_value(dev, "address");
+- status = STREQ(udev_device_get_sysattr_value(dev, "operstate"), "up");
++ status = STREQ_NULLABLE(udev_device_get_sysattr_value(dev,
"operstate"), "up");
+
+ def = udevGetMinimalDefForDevice(dev);
+ if (!virConnectListAllInterfacesCheckACL(conn, def)) {
+@@ -976,9 +988,9 @@ udevGetIfaceDef(struct udev *udev, const char *name)
+
+ /* MTU */
+ mtu_str = udev_device_get_sysattr_value(dev, "mtu");
+- if (virStrToLong_ui(mtu_str, NULL, 10, &mtu) < 0) {
++ if (!mtu_str || virStrToLong_ui(mtu_str, NULL, 10, &mtu) < 0) {
+ virReportError(VIR_ERR_INTERNAL_ERROR,
+- _("Could not parse MTU value '%s'"), mtu_str);
++ _("Could not parse MTU value '%s'"), NULLSTR(mtu_str));
+ goto error;
+ }
+ ifacedef->mtu = mtu;
+@@ -1105,7 +1117,7 @@ udevInterfaceIsActive(virInterfacePtr ifinfo)
+ goto cleanup;
+
+ /* Check if it's active or not */
+- status = STREQ(udev_device_get_sysattr_value(dev, "operstate"), "up");
++ status = STREQ_NULLABLE(udev_device_get_sysattr_value(dev, "operstate"),
"up");
+
+ udev_device_unref(dev);
+
diff -Nru libvirt-7.0.0/debian/patches/libxl-Fix-domain-shutdown.patch libvirt-7.0.0/debian/patches/libxl-Fix-domain-shutdown.patch
--- libvirt-7.0.0/debian/patches/libxl-Fix-domain-shutdown.patch 1970-01-01 01:00:00.000000000 +0100
+++ libvirt-7.0.0/debian/patches/libxl-Fix-domain-shutdown.patch 2024-07-30 21:35:28.000000000 +0200
@@ -0,0 +1,226 @@
+From: Jim Fehlig <jfeh...@suse.com>
+Date: Fri, 19 Feb 2021 16:29:10 -0700
+Subject: libxl: Fix domain shutdown
+
+Commit fa30ee04a2 caused a regression in normal domain shutown.
+Initiating a shutdown from within the domain or via 'virsh shutdown'
+does cause the guest OS running in the domain to shutdown, but libvirt
+never reaps the domain so it is always shown in a running state until
+calling 'virsh destroy'.
+
+The shutdown thread is also an internal user of the driver shutdown
+machinery and eventually calls libxlDomainDestroyInternal where
+the ignoreDeathEvent inhibitor is set, but running in a thread
+introduces the possibility of racing with the death event from
+libxl. This can be prevented by setting ignoreDeathEvent before
+running the shutdown thread.
+
+An additional improvement is to handle the destroy event synchronously
+instead of spawning a thread. The time consuming aspects of destroying
+a domain have been completed when the destroy event is delivered.
+
+Signed-off-by: Jim Fehlig <jfeh...@suse.com>
+Reviewed-by: Michal Privoznik <mpriv...@redhat.com>
+Origin: https://gitlab.com/libvirt/libvirt/-/commit/87a9d3a6b01baebdca33d95ad0e79781b6a46ca8
+Bug: https://bugzilla.redhat.com/show_bug.cgi?id=2034195
+Bug-Debian: https://security-tracker.debian.org/tracker/CVE-2021-4147
+Bug-Debian: https://bugs.debian.org/1002535
+---
+ src/libxl/libxl_domain.c | 120 ++++++++++++++++++++++-------------------------
+ 1 file changed, 57 insertions(+), 63 deletions(-)
+
+diff --git a/src/libxl/libxl_domain.c b/src/libxl/libxl_domain.c
+index afa21bf..63938d5 100644
+--- a/src/libxl/libxl_domain.c
++++ b/src/libxl/libxl_domain.c
+@@ -476,6 +476,7 @@ libxlDomainShutdownHandleRestart(libxlDriverPrivatePtr
driver,
+ struct libxlShutdownThreadInfo
+ {
+ libxlDriverPrivatePtr driver;
++ virDomainObjPtr vm;
+ libxl_event *event;
+ };
+
+@@ -484,7 +485,7 @@ static void
+ libxlDomainShutdownThread(void *opaque)
+ {
+ struct libxlShutdownThreadInfo *shutdown_info = opaque;
+- virDomainObjPtr vm = NULL;
++ virDomainObjPtr vm = shutdown_info->vm;
+ libxl_event *ev = shutdown_info->event;
+ libxlDriverPrivatePtr driver = shutdown_info->driver;
+ virObjectEventPtr dom_event = NULL;
+@@ -494,12 +495,6 @@ libxlDomainShutdownThread(void *opaque)
+
+ libxl_domain_config_init(&d_config);
+
+- vm = virDomainObjListFindByID(driver->domains, ev->domid);
+- if (!vm) {
+- VIR_INFO("Received event for unknown domain ID %d", ev->domid);
+- goto cleanup;
+- }
+-
+ if (libxlDomainObjBeginJob(driver, vm, LIBXL_JOB_MODIFY) < 0)
+ goto cleanup;
+
+@@ -616,32 +611,18 @@ libxlDomainShutdownThread(void *opaque)
+ }
+
+ static void
+-libxlDomainDeathThread(void *opaque)
++libxlDomainHandleDeath(libxlDriverPrivatePtr driver, virDomainObjPtr vm)
+ {
+- struct libxlShutdownThreadInfo *shutdown_info = opaque;
+- virDomainObjPtr vm = NULL;
+- libxl_event *ev = shutdown_info->event;
+- libxlDriverPrivatePtr driver = shutdown_info->driver;
+ virObjectEventPtr dom_event = NULL;
+- g_autoptr(libxlDriverConfig) cfg = libxlDriverConfigGet(driver);
+- libxlDomainObjPrivatePtr priv;
+-
+- vm = virDomainObjListFindByID(driver->domains, ev->domid);
+- if (!vm) {
+- /* vm->def->id already cleared, means the death was handled by the
+- * driver already */
+- goto cleanup;
+- }
+-
+- priv = vm->privateData;
++ libxlDomainObjPrivatePtr priv = vm->privateData;
+
+ if (priv->ignoreDeathEvent) {
+ priv->ignoreDeathEvent = false;
+- goto cleanup;
++ return;
+ }
+
+ if (libxlDomainObjBeginJob(driver, vm, LIBXL_JOB_MODIFY) < 0)
+- goto cleanup;
++ return;
+
+ virDomainObjSetState(vm, VIR_DOMAIN_SHUTOFF,
VIR_DOMAIN_SHUTOFF_DESTROYED);
+ dom_event = virDomainEventLifecycleNewFromObj(vm,
+@@ -651,12 +632,7 @@ libxlDomainDeathThread(void *opaque)
+ if (!vm->persistent)
+ virDomainObjListRemove(driver->domains, vm);
+ libxlDomainObjEndJob(driver, vm);
+-
+- cleanup:
+- virDomainObjEndAPI(&vm);
+ virObjectEventStateQueue(driver->domainEventState, dom_event);
+- libxl_event_free(cfg->ctx, ev);
+- VIR_FREE(shutdown_info);
+ }
+
+
+@@ -668,16 +644,13 @@ libxlDomainEventHandler(void *data,
VIR_LIBXL_EVENT_CONST libxl_event *event)
+ {
+ libxlDriverPrivatePtr driver = data;
+ libxl_shutdown_reason xl_reason =
event->u.domain_shutdown.shutdown_reason;
+- struct libxlShutdownThreadInfo *shutdown_info = NULL;
+- virThread thread;
++ virDomainObjPtr vm = NULL;
+ g_autoptr(libxlDriverConfig) cfg = NULL;
+- int ret = -1;
+- g_autofree char *name = NULL;
+
+ if (event->type != LIBXL_EVENT_TYPE_DOMAIN_SHUTDOWN &&
+ event->type != LIBXL_EVENT_TYPE_DOMAIN_DEATH) {
+ VIR_INFO("Unhandled event type %d", event->type);
+- goto error;
++ goto cleanup;
+ }
+
+ /*
+@@ -685,42 +658,63 @@ libxlDomainEventHandler(void *data,
VIR_LIBXL_EVENT_CONST libxl_event *event)
+ * after calling libxl_domain_suspend() are handled by its callers.
+ */
+ if (xl_reason == LIBXL_SHUTDOWN_REASON_SUSPEND)
+- goto error;
++ goto cleanup;
++
++ vm = virDomainObjListFindByID(driver->domains, event->domid);
++ if (!vm) {
++ /* Nothing to do if we can't find the virDomainObj */
++ goto cleanup;
++ }
++
++ if (event->type == LIBXL_EVENT_TYPE_DOMAIN_SHUTDOWN) {
++ libxlDomainObjPrivatePtr priv = vm->privateData;
++ struct libxlShutdownThreadInfo *shutdown_info = NULL;
++ virThread thread;
++ g_autofree char *name = NULL;
+
+- /*
+- * Start a thread to handle shutdown. We don't want to be tying up
+- * libxl's event machinery by doing a potentially lengthy shutdown.
+- */
+- shutdown_info = g_new0(struct libxlShutdownThreadInfo, 1);
+-
+- shutdown_info->driver = driver;
+- shutdown_info->event = (libxl_event *)event;
+- name = g_strdup_printf("ev-%d", event->domid);
+- if (event->type == LIBXL_EVENT_TYPE_DOMAIN_SHUTDOWN)
+- ret = virThreadCreateFull(&thread, false, libxlDomainShutdownThread,
+- name, false, shutdown_info);
+- else if (event->type == LIBXL_EVENT_TYPE_DOMAIN_DEATH)
+- ret = virThreadCreateFull(&thread, false, libxlDomainDeathThread,
+- name, false, shutdown_info);
+-
+- if (ret < 0) {
+ /*
+- * Not much we can do on error here except log it.
++ * Start a thread to handle shutdown. We don't want to be tying up
++ * libxl's event machinery by doing a potentially lengthy shutdown.
+ */
+- VIR_ERROR(_("Failed to create thread to handle domain shutdown"));
+- goto error;
+- }
++ shutdown_info = g_new0(struct libxlShutdownThreadInfo, 1);
+
+- /*
+- * libxlShutdownThreadInfo and libxl_event are freed in shutdown thread
+- */
+- return;
++ shutdown_info->driver = driver;
++ shutdown_info->vm = vm;
++ shutdown_info->event = (libxl_event *)event;
++ name = g_strdup_printf("ev-%d", event->domid);
++ /*
++ * Cleanup will be handled by the shutdown thread.
++ * Ignore the forthcoming death event from libxl
++ */
++ priv->ignoreDeathEvent = true;
++ if (virThreadCreateFull(&thread, false, libxlDomainShutdownThread,
++ name, false, shutdown_info) < 0) {
++ priv->ignoreDeathEvent = false;
++ /*
++ * Not much we can do on error here except log it.
++ */
++ VIR_ERROR(_("Failed to create thread to handle domain shutdown"));
++ VIR_FREE(shutdown_info);
++ goto cleanup;
++ }
++ /*
++ * virDomainObjEndAPI is called in the shutdown thread, where
++ * libxlShutdownThreadInfo and libxl_event are also freed.
++ */
++ return;
++ } else if (event->type == LIBXL_EVENT_TYPE_DOMAIN_DEATH) {
++ /*
++ * On death the domain is cleaned up from Xen's perspective.
++ * Cleanup on the libvirt side can be done synchronously.
++ */
++ libxlDomainHandleDeath(driver, vm);
++ }
+
+- error:
++ cleanup:
++ virDomainObjEndAPI(&vm);
+ cfg = libxlDriverConfigGet(driver);
+ /* Cast away any const */
+ libxl_event_free(cfg->ctx, (libxl_event *)event);
+- VIR_FREE(shutdown_info);
+ }
+
+ char *
diff -Nru libvirt-7.0.0/debian/patches/series libvirt-7.0.0/debian/patches/series
--- libvirt-7.0.0/debian/patches/series 2023-02-06 17:49:16.000000000 +0100
+++ libvirt-7.0.0/debian/patches/series 2024-07-30 21:35:28.000000000 +0200
@@ -15,3 +15,17 @@
backport/vircgroup-Fix-virCgroupKillRecursive-wrt-nested-controlle.patch
backport/tests-Fix-libxlxml2domconfigtest-with-latest-xen.patch
backport/tests-Fix-libxlxml2domconfigtest.patch
+CVE-2021-3631.patch
+CVE-2021-3667.patch
+CVE-2021-3975.patch
+libxl-Fix-domain-shutdown.patch
+CVE-2021-4147_1.patch
+CVE-2021-4147_2.patch
+CVE-2021-4147_3.patch
+CVE-2021-4147_4.patch
+CVE-2021-4147_5.patch
+CVE-2021-4147_6.patch
+CVE-2022-0897.patch
+CVE-2024-1441.patch
+CVE-2024-2496.patch
+CVE-2024-2494.patch
signature.asc
Description: PGP signature
--- End Message ---