Re: [PATCH v8 00/18] *** qemu: block: Support block disk along with throttle filters ***

2025-03-12 Thread Harikumar Rajkumar
Hi Team, could we please get an ETA on when the review will be completed? We 
would greatly appreciate it if the review could be completed as soon as 
possible. Apologies for the urgency, but we have a dependency on these changes.

Thank you for your understanding!

Best regards,
Harikumar R


Re: [PATCH v1] tests: update capabilities for QEMU 9.2.0 on s390x

2025-03-12 Thread Shalini Chellathurai Saroja

On 2025-03-11 14:03, Peter Krempa wrote:
On Tue, Mar 11, 2025 at 13:49:10 +0100, Shalini Chellathurai Saroja 
wrote:

Update the replies and xml files for QEMU 9.2.0 on s390x based on
the released QEMU tag v9.2.0 with commit Id
ae35f033b874c627d81d51070187fbf55f0bf1a7.

Signed-off-by: Shalini Chellathurai Saroja 
Reviewed-by: Boris Fiuczynski 
---
 .../caps_9.2.0_s390x.replies  | 3496 +
 .../qemucapabilitiesdata/caps_9.2.0_s390x.xml |8 +-
 2 files changed, 1824 insertions(+), 1680 deletions(-)


Thanks! With this commit we no longer have any out-of-date caps
(obviously except for the currently in-development version).

I'll push the patch shortly

Reviewed-by: Peter Krempa 


Hello Peter,

Thank you very much.

--
Mit freundlichen Grüßen / Kind regards
Shalini Chellathurai Saroja
Software Developer
Linux on IBM Z & KVM Development
IBM Deutschland Research & Development GmbH
Dept 1419, Schoenaicher Str. 220, 71032 Boeblingen
Vorsitzender des Aufsichtsrats: Wolfgang Wendt
Geschäftsführung: David Faller
Sitz der Gesellschaft: Böblingen / Registergericht: Amtsgericht 
Stuttgart, HRB 243294


Re: [PATCH v8 00/18] *** qemu: block: Support block disk along with throttle filters ***

2025-03-12 Thread Peter Krempa
On Wed, Mar 12, 2025 at 08:15:12 -, Harikumar Rajkumar wrote:
> Hi Team, Could we please get an ETA on when the review will be completed?

No, not really. In the patches I went through I updated the docs to
mention 11.2.0. I hope I will not have to update it further.

It depends on how the rest will look but the first patches required
modification besides fixing version numbers.

> We would greatly appreciate it if the review could be completed as soon as 
> possible. Apologies for the urgency, but we have a dependency on these 
> changes.

Please understand that this is a niche feature that entails a lot of
code. Based on your interaction with upstream so far, it is also very
likely that, once merged, the code will be left for us to maintain.

As review and maintenance are something most projects struggle with,
the best way to incentivize an upstream to merge a niche, complex
feature is to also contribute maintenance and review resources to the
project, to offset the maintenance burden of the code you are proposing
to merge.

> Thank you for your understanding!

Likewise I'd like to thank you for understanding the two paragraphs
above.


Re: [PATCH v8 00/18] *** qemu: block: Support block disk along with throttle filters ***

2025-03-12 Thread Harikumar Rajkumar
Thank you for the update. I understand that the review process is complex and 
that the feature may require ongoing maintenance. I appreciate the effort and 
work that has been invested so far.


Re: [PATCH v8 08/18] qemu: Refactor qemuDomainSetBlockIoTune to extract common methods

2025-03-12 Thread Peter Krempa
On Wed, Feb 19, 2025 at 22:27:12 +0530, Harikumar Rajkumar wrote:
> From: Chun Feng Wu 
> 
> extract common methods from "qemuDomainSetBlockIoTune" to be reused
> by throttle handling later, common methods include:
> * "qemuDomainValidateBlockIoTune", which is to validate that PARAMS
>   contains only recognized parameter names with correct types
> * "qemuDomainSetBlockIoTuneFields", which is to load parameters into
>   internal object virDomainBlockIoTuneInfo
> * "qemuDomainCheckBlockIoTuneMutualExclusion", which is to check rules
>   like "total and read/write of bytes_sec cannot be set at the same time"
> * "qemuDomainCheckBlockIoTuneMax", which is to check "max" rules within iotune
> 
> Signed-off-by: Chun Feng Wu 
> 
> * Apply suggested coding style changes.
> 
> Signed-off-by: Harikumar Rajkumar 
> ---
>  src/qemu/qemu_driver.c | 229 +
>  1 file changed, 141 insertions(+), 88 deletions(-)
> 
> diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
> index 1d0da1028f..31c543175f 100644
> --- a/src/qemu/qemu_driver.c
> +++ b/src/qemu/qemu_driver.c
> @@ -14988,35 +14988,8 @@ qemuDomainCheckBlockIoTuneReset(virDomainDiskDef 
> *disk,
>  
>  
>  static int
> -qemuDomainSetBlockIoTune(virDomainPtr dom,
> - const char *path,
> - virTypedParameterPtr params,
> - int nparams,
> - unsigned int flags)
> +qemuDomainValidateBlockIoTune(virTypedParameterPtr params, int nparams)

With formatting of this fixed

Reviewed-by: Peter Krempa 
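
For readers following along, the mutual-exclusion rule mentioned in the
commit message ("total and read/write of bytes_sec cannot be set at the
same time") can be sketched as follows. The struct and function names
below are simplified stand-ins, not libvirt's actual
virDomainBlockIoTuneInfo handling:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical simplification: the real virDomainBlockIoTuneInfo has
 * many more fields (iops_sec, *_max, group name, ...); only the
 * bytes_sec family is shown here. */
typedef struct {
    unsigned long long total_bytes_sec;
    unsigned long long read_bytes_sec;
    unsigned long long write_bytes_sec;
} DemoBlockIoTuneInfo;

/* Returns true when the settings are acceptable: a non-zero 'total'
 * value is mutually exclusive with per-direction 'read'/'write'
 * values, since both would constrain the same I/O. */
static bool
demoCheckBlockIoTuneMutualExclusion(const DemoBlockIoTuneInfo *info)
{
    return !(info->total_bytes_sec &&
             (info->read_bytes_sec || info->write_bytes_sec));
}
```

The extracted helper in the patch applies this kind of rule once for
the plain fields and once for the corresponding "_max" fields.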


[PATCH 1/2] Remove libnl checks specific to Linux

2025-03-12 Thread Akihiko Odaki
Builds for Linux without libnl are broken since commit
582f0966f9b9e2148d8887d072364e2a91aed000 because
src/util/virnetdevbridge.c refers to virNetlinkBridgeVlanFilterSet().

A fundamental problem here is that nobody tests such builds. Netlink is
a critical part of Linux networking so developers may assume it is
readily available, but the libnl dependency check breaks this
assumption.

Builds without libnl also lack several networking features such as
SR-IOV, bridge/tap, switchdev, macvlan, veth, and vlan. This may also
contradict expectations made by other libvirt networking code or by
users.

Remove libnl checks specific to Linux to avoid such a problem in
libvirt's networking code. More concretely, find patterns like
following:
  #ifdef __linux__
  A
  #ifdef WITH_LIBNL
  B
  #endif
  C
  #endif

And replace them with:
  #ifdef WITH_LIBNL
  A
  B
  C
  #endif

There are also patterns like following:
  #ifdef WITH_LIBNL
  A
  #elif defined(__linux__)
  B
  #endif

Replace them with:
  #ifdef WITH_LIBNL
  A
  #endif

This change makes fewer networking features available. Anyone who
wants these features should enable libnl to get the better-tested code.

libnl is still not required to build libvirt because the client code and
some drivers do not need to configure networking at all.

Signed-off-by: Akihiko Odaki 
---
 src/util/virnetdev.c   | 44 
 src/util/virnetdevbridge.c | 33 +
 tests/virnetdevtest.c  | 10 +-
 3 files changed, 10 insertions(+), 77 deletions(-)

diff --git a/src/util/virnetdev.c b/src/util/virnetdev.c
index 8ae854245e..7ea029be9e 100644
--- a/src/util/virnetdev.c
+++ b/src/util/virnetdev.c
@@ -914,29 +914,6 @@ virNetDevGetMaster(const char *ifname, char **master)
 return 0;
 }
 
-#elif defined(__linux__)
-
-/* libnl isn't available, so we can't use netlink.
- * Fall back to using sysfs
- */
-int
-virNetDevGetMaster(const char *ifname, char **master)
-{
-g_autofree char *path = NULL;
-g_autofree char *canonical = NULL;
-
-if (virNetDevSysfsFile(&path, ifname, "master") < 0)
-return -1;
-
-if (!(canonical = virFileCanonicalizePath(path)))
-return -1;
-
-*master = g_path_get_basename(canonical);
-
-VIR_DEBUG("IFLA_MASTER for %s is %s", ifname, *master ? *master : 
"(none)");
-return 0;
-}
-
 #else
 
 int
@@ -1059,7 +1036,7 @@ int virNetDevValidateConfig(const char *ifname 
G_GNUC_UNUSED,
 #endif
 
 
-#ifdef __linux__
+#ifdef WITH_LIBNL
 
 int
 virNetDevSysfsFile(char **pf_sysfs_device_link, const char *ifname,
@@ -1079,8 +1056,6 @@ virNetDevSysfsDeviceFile(char **pf_sysfs_device_link, 
const char *ifname,
 }
 
 
-# if defined(WITH_LIBNL)
-
 /**
  * Determine if the device path specified in devpath is a PCI Device
  * by resolving the 'subsystem'-link in devpath and looking for
@@ -1133,7 +1108,6 @@ virNetDevGetPCIDevice(const char *devName)
 
 return virPCIDeviceNew(vfPCIAddr);
 }
-# endif
 
 
 /* A wrapper to get content of file from ifname SYSFS_NET_DIR
@@ -1381,7 +1355,7 @@ virNetDevGetVirtualFunctionInfo(const char *vfname, char 
**pfname,
 return ret;
 }
 
-#else /* !__linux__ */
+#else /* !WITH_LIBNL */
 int
 virNetDevGetPhysPortID(const char *ifname G_GNUC_UNUSED,
char **physPortID)
@@ -1471,7 +1445,7 @@ virNetDevSysfsFile(char **pf_sysfs_device_link 
G_GNUC_UNUSED,
 }
 
 
-#endif /* !__linux__ */
+#endif /* !WITH_LIBNL */
 #if defined(WITH_LIBNL)
 
 
@@ -2938,7 +2912,7 @@ int virNetDevGetRxFilter(const char *ifname,
 return ret;
 }
 
-#if __linux__
+#ifdef WITH_LIBNL
 
 /**
  * virNetDevRDMAFeature
@@ -3095,8 +3069,6 @@ virNetDevGetEthtoolFeatures(const char *ifname,
 }
 
 
-# if defined(WITH_LIBNL)
-
 /**
  * virNetDevGetFamilyId:
  * This function supplies the devlink family id
@@ -3237,14 +3209,6 @@ virNetDevSwitchdevFeature(const char *ifname,
 nlmsg_free(nl_msg);
 return ret;
 }
-# else
-static int
-virNetDevSwitchdevFeature(const char *ifname G_GNUC_UNUSED,
-  virBitmap **out G_GNUC_UNUSED)
-{
-return 0;
-}
-# endif
 
 
 /**
diff --git a/src/util/virnetdevbridge.c b/src/util/virnetdevbridge.c
index c79d0c79b7..0af7a8d286 100644
--- a/src/util/virnetdevbridge.c
+++ b/src/util/virnetdevbridge.c
@@ -30,9 +30,7 @@
 #endif
 
 #ifdef __linux__
-# if defined(WITH_LIBNL)
-#  include "virnetlink.h"
-# endif
+# include "virnetlink.h"
 # include 
 # include  /* HZ */
 # include 
@@ -185,7 +183,7 @@ static int virNetDevBridgeGet(const char *brname,
 }
 #endif /* __linux__ */
 
-#if defined(__linux__)
+#if defined(WITH_LIBNL)
 static int
 virNetDevBridgePortSet(const char *brname,
const char *ifname,
@@ -450,7 +448,7 @@ virNetDevBridgePortSetIsolated(const char *brname 
G_GNUC_UNUSED,
  *
  * Returns 0 in case of success or -1 on failure
  */
-#if defined(WITH_STRUCT_IFREQ) && defined(SIOCBRADDBR)
+#if defined(WITH_STRUCT_IFREQ) && 

[PATCH 16/17] qemu: migration: Always assume support for QEMU_CAPS_MIGRATION_PARAM_BLOCK_BITMAP_MAPPING

2025-03-12 Thread Peter Krempa
The 'transform' attribute of 'bitmaps' was added in qemu-6.0, thus
we can assume all qemus we're willing to use support it.

Signed-off-by: Peter Krempa 
---
 src/qemu/qemu_migration.c | 10 ++
 1 file changed, 2 insertions(+), 8 deletions(-)

diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c
index e1399806d5..02ba35dc59 100644
--- a/src/qemu/qemu_migration.c
+++ b/src/qemu/qemu_migration.c
@@ -2681,7 +2681,6 @@ qemuMigrationSrcBeginXML(virDomainObj *vm,
 return NULL;

 if (cookieFlags & QEMU_MIGRATION_COOKIE_NBD &&
-virQEMUCapsGet(priv->qemuCaps, 
QEMU_CAPS_MIGRATION_PARAM_BLOCK_BITMAP_MAPPING) &&
 qemuMigrationSrcBeginPhaseBlockDirtyBitmaps(mig, vm, migrate_disks) < 
0)
 return NULL;

@@ -3188,15 +3187,13 @@ 
qemuMigrationDstPrepareAnyBlockDirtyBitmaps(virDomainObj *vm,
 qemuMigrationParams *migParams,
 unsigned int flags)
 {
-qemuDomainObjPrivate *priv = vm->privateData;
 g_autoptr(virJSONValue) mapping = NULL;
 g_autoptr(GHashTable) blockNamedNodeData = NULL;
 GSList *nextdisk;

 if (!mig->nbd ||
 !mig->blockDirtyBitmaps ||
-!(flags & (VIR_MIGRATE_NON_SHARED_DISK | VIR_MIGRATE_NON_SHARED_INC)) 
||
-!virQEMUCapsGet(priv->qemuCaps, 
QEMU_CAPS_MIGRATION_PARAM_BLOCK_BITMAP_MAPPING))
+!(flags & (VIR_MIGRATE_NON_SHARED_DISK | VIR_MIGRATE_NON_SHARED_INC)))
 return 0;

 if (qemuMigrationCookieBlockDirtyBitmapsMatchDisks(vm->def, 
mig->blockDirtyBitmaps) < 0)
@@ -4936,10 +4933,7 @@ qemuMigrationSrcRun(virQEMUDriver *driver,

 if (storageMigration) {
 cookieFlags |= QEMU_MIGRATION_COOKIE_NBD;
-
-if (virQEMUCapsGet(priv->qemuCaps,
-   QEMU_CAPS_MIGRATION_PARAM_BLOCK_BITMAP_MAPPING))
-cookieFlags |= QEMU_MIGRATION_COOKIE_BLOCK_DIRTY_BITMAPS;
+cookieFlags |= QEMU_MIGRATION_COOKIE_BLOCK_DIRTY_BITMAPS;
 }

 if (virLockManagerPluginUsesState(driver->lockManager) &&
-- 
2.48.1


[PATCH 17/17] qemu: capabilites: Retire QEMU_CAPS_MIGRATION_PARAM_BLOCK_BITMAP_MAPPING

2025-03-12 Thread Peter Krempa
The capability is no longer used as all qemus support it.

Signed-off-by: Peter Krempa 
---
 src/qemu/qemu_capabilities.c | 3 +--
 src/qemu/qemu_capabilities.h | 2 +-
 tests/qemucapabilitiesdata/caps_10.0.0_s390x.xml | 1 -
 tests/qemucapabilitiesdata/caps_10.0.0_x86_64+amdsev.xml | 1 -
 tests/qemucapabilitiesdata/caps_10.0.0_x86_64.xml| 1 -
 tests/qemucapabilitiesdata/caps_6.2.0_ppc64.xml  | 1 -
 tests/qemucapabilitiesdata/caps_6.2.0_x86_64.xml | 1 -
 tests/qemucapabilitiesdata/caps_7.0.0_ppc64.xml  | 1 -
 tests/qemucapabilitiesdata/caps_7.0.0_x86_64.xml | 1 -
 tests/qemucapabilitiesdata/caps_7.1.0_ppc64.xml  | 1 -
 tests/qemucapabilitiesdata/caps_7.1.0_x86_64.xml | 1 -
 tests/qemucapabilitiesdata/caps_7.2.0_ppc.xml| 1 -
 tests/qemucapabilitiesdata/caps_7.2.0_x86_64+hvf.xml | 1 -
 tests/qemucapabilitiesdata/caps_7.2.0_x86_64.xml | 1 -
 tests/qemucapabilitiesdata/caps_8.0.0_x86_64.xml | 1 -
 tests/qemucapabilitiesdata/caps_8.1.0_s390x.xml  | 1 -
 tests/qemucapabilitiesdata/caps_8.1.0_x86_64.xml | 1 -
 tests/qemucapabilitiesdata/caps_8.2.0_aarch64.xml| 1 -
 tests/qemucapabilitiesdata/caps_8.2.0_armv7l.xml | 1 -
 tests/qemucapabilitiesdata/caps_8.2.0_loongarch64.xml| 1 -
 tests/qemucapabilitiesdata/caps_8.2.0_s390x.xml  | 1 -
 tests/qemucapabilitiesdata/caps_8.2.0_x86_64.xml | 1 -
 tests/qemucapabilitiesdata/caps_9.0.0_sparc.xml  | 1 -
 tests/qemucapabilitiesdata/caps_9.0.0_x86_64.xml | 1 -
 tests/qemucapabilitiesdata/caps_9.1.0_riscv64.xml| 1 -
 tests/qemucapabilitiesdata/caps_9.1.0_s390x.xml  | 1 -
 tests/qemucapabilitiesdata/caps_9.1.0_x86_64.xml | 1 -
 tests/qemucapabilitiesdata/caps_9.2.0_aarch64+hvf.xml| 1 -
 tests/qemucapabilitiesdata/caps_9.2.0_s390x.xml  | 1 -
 tests/qemucapabilitiesdata/caps_9.2.0_x86_64+amdsev.xml  | 1 -
 tests/qemucapabilitiesdata/caps_9.2.0_x86_64.xml | 1 -
 31 files changed, 2 insertions(+), 32 deletions(-)

diff --git a/src/qemu/qemu_capabilities.c b/src/qemu/qemu_capabilities.c
index 4e292053c9..d13b4c8109 100644
--- a/src/qemu/qemu_capabilities.c
+++ b/src/qemu/qemu_capabilities.c
@@ -616,7 +616,7 @@ VIR_ENUM_IMPL(virQEMUCaps,
   "cpu-max", /* QEMU_CAPS_CPU_MAX */
   "memory-backend-file.x-use-canonical-path-for-ramblock-id", /* 
QEMU_CAPS_X_USE_CANONICAL_PATH_FOR_RAMBLOCK_ID */
   "vnc-opts", /* X_QEMU_CAPS_VNC_OPTS */
-  "migration-param.block-bitmap-mapping", /* 
QEMU_CAPS_MIGRATION_PARAM_BLOCK_BITMAP_MAPPING */
+  "migration-param.block-bitmap-mapping", /* 
X_QEMU_CAPS_MIGRATION_PARAM_BLOCK_BITMAP_MAPPING */

   /* 395 */
   "vnc-power-control", /* QEMU_CAPS_VNC_POWER_CONTROL */
@@ -1578,7 +1578,6 @@ static struct virQEMUCapsStringFlags 
virQEMUCapsQMPSchemaQueries[] = {
 { "chardev-add/arg-type/backend/+socket/data/reconnect-ms", 
QEMU_CAPS_CHARDEV_RECONNECT_MILISECONDS },
 { "chardev-add/arg-type/backend/+qemu-vdagent", 
QEMU_CAPS_CHARDEV_QEMU_VDAGENT },
 { "device_add/$json-cli-hotplug", QEMU_CAPS_DEVICE_JSON },
-{ 
"migrate-set-parameters/arg-type/block-bitmap-mapping/bitmaps/transform", 
QEMU_CAPS_MIGRATION_PARAM_BLOCK_BITMAP_MAPPING },
 { "nbd-server-start/arg-type/tls-creds", QEMU_CAPS_NBD_TLS },
 { "nbd-server-add/arg-type/bitmap", QEMU_CAPS_NBD_BITMAP },
 { "netdev_add/arg-type/+stream", QEMU_CAPS_NETDEV_STREAM },
diff --git a/src/qemu/qemu_capabilities.h b/src/qemu/qemu_capabilities.h
index f775735c29..e836d107aa 100644
--- a/src/qemu/qemu_capabilities.h
+++ b/src/qemu/qemu_capabilities.h
@@ -595,7 +595,7 @@ typedef enum { /* virQEMUCapsFlags grouping marker for 
syntax-check */
 QEMU_CAPS_CPU_MAX, /* -cpu max */
 QEMU_CAPS_X_USE_CANONICAL_PATH_FOR_RAMBLOCK_ID, /* -object 
memory-backend-file,x-use-canonical-path-for-ramblock-id= */
 X_QEMU_CAPS_VNC_OPTS, /* -vnc uses QemuOpts parser instead of custom code 
*/
-QEMU_CAPS_MIGRATION_PARAM_BLOCK_BITMAP_MAPPING, /* block-bitmap-mapping in 
migrate-set-parameters */
+X_QEMU_CAPS_MIGRATION_PARAM_BLOCK_BITMAP_MAPPING, /* block-bitmap-mapping 
in migrate-set-parameters */

 /* 395 */
 QEMU_CAPS_VNC_POWER_CONTROL, /* -vnc power-control option */
diff --git a/tests/qemucapabilitiesdata/caps_10.0.0_s390x.xml 
b/tests/qemucapabilitiesdata/caps_10.0.0_s390x.xml
index 33be470dce..0600cbda62 100644
--- a/tests/qemucapabilitiesdata/caps_10.0.0_s390x.xml
+++ b/tests/qemucapabilitiesdata/caps_10.0.0_s390x.xml
@@ -78,7 +78,6 @@
   
   
   
-  
   
   
   
diff --git a/tests/qemucapabilitiesdata/caps_10.0.0_x86_64+amdsev.xml 
b/tests/qemucapabilitiesdata/caps_10.0.0_x86_64+amdsev.xml
index 8a2db4b408..ad7bb7ebc8 100644
--- a/tests/qemucapabilitiesdata/caps_10.0.0_x86_64+amdsev.xml
+++ b/tests/qemucapabilitiesdata/caps_10.0.0_x86_64+amd

[PATCH 04/17] qemu: monitor: Drop support for extra wrapper for 'object_add'

2025-03-12 Thread Peter Krempa
The QAPIfication of objects removed the extra warapper object which we
were adding in the monitor code to simplify the other callers.

Now that we support only qemus which don't require this we can drop the
support code.

Signed-off-by: Peter Krempa 
---
 src/qemu/qemu_monitor.c  | 27 +--
 src/qemu/qemu_monitor_priv.h |  2 --
 2 files changed, 1 insertion(+), 28 deletions(-)

diff --git a/src/qemu/qemu_monitor.c b/src/qemu/qemu_monitor.c
index c069d17265..724e82e8a4 100644
--- a/src/qemu/qemu_monitor.c
+++ b/src/qemu/qemu_monitor.c
@@ -600,7 +600,6 @@ qemuMonitorOpenInternal(virDomainObj *vm,
 mon->cb = cb;

 if (priv) {
-mon->objectAddNoWrap = virQEMUCapsGet(priv->qemuCaps, 
QEMU_CAPS_OBJECT_JSON);
 mon->queryNamedBlockNodesFlat = virQEMUCapsGet(priv->qemuCaps, 
QEMU_CAPS_QMP_QUERY_NAMED_BLOCK_NODES_FLAT);
 mon->blockjobMaskProtocol = virQEMUCapsGet(priv->qemuCaps, 
QEMU_CAPS_BLOCKJOB_BACKING_MASK_PROTOCOL);
 }
@@ -2660,7 +2659,6 @@ qemuMonitorAddObject(qemuMonitor *mon,
  virJSONValue **props,
  char **alias)
 {
-g_autoptr(virJSONValue) pr = NULL;
 const char *type = NULL;
 const char *id = NULL;
 g_autofree char *aliasCopy = NULL;
@@ -2688,30 +2686,7 @@ qemuMonitorAddObject(qemuMonitor *mon,
 if (alias)
 aliasCopy = g_strdup(id);

-if (mon->objectAddNoWrap) {
-pr = g_steal_pointer(props);
-} else {
-/* we need to create a wrapper which has the 'qom-type' and 'id' and
- * store everything else under a 'props' sub-object */
-g_autoptr(virJSONValue) typeobj = NULL;
-g_autoptr(virJSONValue) idobj = NULL;
-
-ignore_value(virJSONValueObjectRemoveKey(*props, "qom-type", 
&typeobj));
-ignore_value(virJSONValueObjectRemoveKey(*props, "id", &idobj));
-
-/* avoid empty 'props' member */
-if (!virJSONValueObjectGetKey(*props, 0))
-g_clear_pointer(props, virJSONValueFree);
-
-if (virJSONValueObjectAdd(&pr,
-  "s:qom-type", type,
-  "s:id", id,
-  "A:props", props,
-  NULL) < 0)
-return -1;
-}
-
-if (qemuMonitorJSONAddObject(mon, &pr) < 0)
+if (qemuMonitorJSONAddObject(mon, props) < 0)
 return -1;

 if (alias)
diff --git a/src/qemu/qemu_monitor_priv.h b/src/qemu/qemu_monitor_priv.h
index 0c2098c456..8cb5e2c3a4 100644
--- a/src/qemu/qemu_monitor_priv.h
+++ b/src/qemu/qemu_monitor_priv.h
@@ -88,8 +88,6 @@ struct _qemuMonitor {
 void *logOpaque;
 virFreeCallback logDestroy;

-/* true if qemu no longer wants 'props' sub-object of object-add */
-bool objectAddNoWrap;
 /* query-named-block-nodes supports the 'flat' option */
 bool queryNamedBlockNodesFlat;
 /* use the backing-mask-protocol flag of block-commit/stream */
-- 
2.48.1
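
For readers unfamiliar with the wrapper in question: before the
QAPIfication, 'object-add' required everything except 'qom-type' and
'id' to be nested under a 'props' sub-object, which is exactly what the
removed code constructed. Illustrative QMP payloads (the memory backend
is used here only as an example object type):

```
Old wrapped form (pre-QAPI qemu):
  { "execute": "object-add",
    "arguments": { "qom-type": "memory-backend-ram",
                   "id": "mem0",
                   "props": { "size": 1073741824 } } }

Flat QAPIfied form (qemu >= 6.0):
  { "execute": "object-add",
    "arguments": { "qom-type": "memory-backend-ram",
                   "id": "mem0",
                   "size": 1073741824 } }
```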


[PATCH 12/17] qemu: capabilities: Retire QEMU_CAPS_BLOCKDEV_SNAPSHOT_ALLOW_WRITE_ONLY

2025-03-12 Thread Peter Krempa
All supported qemus have this and the alternate code was already
deleted. Retire the feature flag.

Signed-off-by: Peter Krempa 
---
 src/qemu/qemu_capabilities.c | 3 +--
 src/qemu/qemu_capabilities.h | 2 +-
 tests/qemucapabilitiesdata/caps_10.0.0_s390x.xml | 1 -
 tests/qemucapabilitiesdata/caps_10.0.0_x86_64+amdsev.xml | 1 -
 tests/qemucapabilitiesdata/caps_10.0.0_x86_64.xml| 1 -
 tests/qemucapabilitiesdata/caps_6.2.0_ppc64.xml  | 1 -
 tests/qemucapabilitiesdata/caps_6.2.0_x86_64.xml | 1 -
 tests/qemucapabilitiesdata/caps_7.0.0_ppc64.xml  | 1 -
 tests/qemucapabilitiesdata/caps_7.0.0_x86_64.xml | 1 -
 tests/qemucapabilitiesdata/caps_7.1.0_ppc64.xml  | 1 -
 tests/qemucapabilitiesdata/caps_7.1.0_x86_64.xml | 1 -
 tests/qemucapabilitiesdata/caps_7.2.0_ppc.xml| 1 -
 tests/qemucapabilitiesdata/caps_7.2.0_x86_64+hvf.xml | 1 -
 tests/qemucapabilitiesdata/caps_7.2.0_x86_64.xml | 1 -
 tests/qemucapabilitiesdata/caps_8.0.0_x86_64.xml | 1 -
 tests/qemucapabilitiesdata/caps_8.1.0_s390x.xml  | 1 -
 tests/qemucapabilitiesdata/caps_8.1.0_x86_64.xml | 1 -
 tests/qemucapabilitiesdata/caps_8.2.0_aarch64.xml| 1 -
 tests/qemucapabilitiesdata/caps_8.2.0_armv7l.xml | 1 -
 tests/qemucapabilitiesdata/caps_8.2.0_loongarch64.xml| 1 -
 tests/qemucapabilitiesdata/caps_8.2.0_s390x.xml  | 1 -
 tests/qemucapabilitiesdata/caps_8.2.0_x86_64.xml | 1 -
 tests/qemucapabilitiesdata/caps_9.0.0_sparc.xml  | 1 -
 tests/qemucapabilitiesdata/caps_9.0.0_x86_64.xml | 1 -
 tests/qemucapabilitiesdata/caps_9.1.0_riscv64.xml| 1 -
 tests/qemucapabilitiesdata/caps_9.1.0_s390x.xml  | 1 -
 tests/qemucapabilitiesdata/caps_9.1.0_x86_64.xml | 1 -
 tests/qemucapabilitiesdata/caps_9.2.0_aarch64+hvf.xml| 1 -
 tests/qemucapabilitiesdata/caps_9.2.0_s390x.xml  | 1 -
 tests/qemucapabilitiesdata/caps_9.2.0_x86_64+amdsev.xml  | 1 -
 tests/qemucapabilitiesdata/caps_9.2.0_x86_64.xml | 1 -
 31 files changed, 2 insertions(+), 32 deletions(-)

diff --git a/src/qemu/qemu_capabilities.c b/src/qemu/qemu_capabilities.c
index 82e7dc5ccc..615efbf021 100644
--- a/src/qemu/qemu_capabilities.c
+++ b/src/qemu/qemu_capabilities.c
@@ -565,7 +565,7 @@ VIR_ENUM_IMPL(virQEMUCaps,
   /* 355 */
   "vhost-user-fs", /* QEMU_CAPS_DEVICE_VHOST_USER_FS */
   "query-named-block-nodes.flat", /* 
X_QEMU_CAPS_QMP_QUERY_NAMED_BLOCK_NODES_FLAT */
-  "blockdev-snapshot.allow-write-only-overlay", /* 
QEMU_CAPS_BLOCKDEV_SNAPSHOT_ALLOW_WRITE_ONLY */
+  "blockdev-snapshot.allow-write-only-overlay", /* 
X_QEMU_CAPS_BLOCKDEV_SNAPSHOT_ALLOW_WRITE_ONLY */
   "blockdev-reopen", /* X_QEMU_CAPS_BLOCKDEV_REOPEN */
   "storage.werror", /* X_QEMU_CAPS_STORAGE_WERROR */

@@ -1573,7 +1573,6 @@ static struct virQEMUCapsStringFlags 
virQEMUCapsQMPSchemaQueries[] = {
 { "blockdev-add/arg-type/+nbd/tls-hostname", 
QEMU_CAPS_BLOCKDEV_NBD_TLS_HOSTNAME },
 { "blockdev-add/arg-type/+qcow2/discard-no-unref", 
QEMU_CAPS_QCOW2_DISCARD_NO_UNREF },
 { "blockdev-add/arg-type/+virtio-blk-vhost-vdpa/$fdset", 
QEMU_CAPS_DEVICE_VIRTIO_BLK_VHOST_VDPA},
-{ "blockdev-snapshot/$allow-write-only-overlay", 
QEMU_CAPS_BLOCKDEV_SNAPSHOT_ALLOW_WRITE_ONLY },
 { "calc-dirty-rate/arg-type/mode", QEMU_CAPS_DIRTYRATE_MODE },
 { "chardev-add/arg-type/backend/+socket/data/reconnect", 
QEMU_CAPS_CHARDEV_RECONNECT },
 { "chardev-add/arg-type/backend/+socket/data/reconnect-ms", 
QEMU_CAPS_CHARDEV_RECONNECT_MILISECONDS },
diff --git a/src/qemu/qemu_capabilities.h b/src/qemu/qemu_capabilities.h
index 620289572e..4a865b13de 100644
--- a/src/qemu/qemu_capabilities.h
+++ b/src/qemu/qemu_capabilities.h
@@ -544,7 +544,7 @@ typedef enum { /* virQEMUCapsFlags grouping marker for 
syntax-check */
 /* 355 */
 QEMU_CAPS_DEVICE_VHOST_USER_FS, /* -device vhost-user-fs */
 X_QEMU_CAPS_QMP_QUERY_NAMED_BLOCK_NODES_FLAT, /* query-named-block-nodes 
supports the 'flat' option */
-QEMU_CAPS_BLOCKDEV_SNAPSHOT_ALLOW_WRITE_ONLY, /* blockdev-snapshot has the 
'allow-write-only-overlay' feature */
+X_QEMU_CAPS_BLOCKDEV_SNAPSHOT_ALLOW_WRITE_ONLY, /* blockdev-snapshot has 
the 'allow-write-only-overlay' feature */
 X_QEMU_CAPS_BLOCKDEV_REOPEN, /* 'blockdev-reopen' qmp command is supported 
*/
 X_QEMU_CAPS_STORAGE_WERROR, /* virtio-blk,scsi-hd.werror */

diff --git a/tests/qemucapabilitiesdata/caps_10.0.0_s390x.xml 
b/tests/qemucapabilitiesdata/caps_10.0.0_s390x.xml
index 3f34d4853c..2dede610ef 100644
--- a/tests/qemucapabilitiesdata/caps_10.0.0_s390x.xml
+++ b/tests/qemucapabilitiesdata/caps_10.0.0_s390x.xml
@@ -67,7 +67,6 @@
   
   
   
-  
   
   
   
diff --git a/tests/qemucapabilitiesdata/caps_10.0.0_x86_64+amdsev.xml 
b/tests/qemucapabilitiesdata/caps_10.0.0_x86_64+amdsev.xml
index db

[PATCH 14/17] qemu: capabilites: Retire QEMU_CAPS_INCREMENTAL_BACKUP

2025-03-12 Thread Peter Krempa
All supported qemu versions now support this. Retire the capability.

Signed-off-by: Peter Krempa 
---
 src/qemu/qemu_capabilities.c | 6 ++
 src/qemu/qemu_capabilities.h | 2 +-
 tests/qemucapabilitiesdata/caps_10.0.0_s390x.xml | 1 -
 tests/qemucapabilitiesdata/caps_10.0.0_x86_64+amdsev.xml | 1 -
 tests/qemucapabilitiesdata/caps_10.0.0_x86_64.xml| 1 -
 tests/qemucapabilitiesdata/caps_6.2.0_ppc64.xml  | 1 -
 tests/qemucapabilitiesdata/caps_6.2.0_x86_64.xml | 1 -
 tests/qemucapabilitiesdata/caps_7.0.0_ppc64.xml  | 1 -
 tests/qemucapabilitiesdata/caps_7.0.0_x86_64.xml | 1 -
 tests/qemucapabilitiesdata/caps_7.1.0_ppc64.xml  | 1 -
 tests/qemucapabilitiesdata/caps_7.1.0_x86_64.xml | 1 -
 tests/qemucapabilitiesdata/caps_7.2.0_ppc.xml| 1 -
 tests/qemucapabilitiesdata/caps_7.2.0_x86_64+hvf.xml | 1 -
 tests/qemucapabilitiesdata/caps_7.2.0_x86_64.xml | 1 -
 tests/qemucapabilitiesdata/caps_8.0.0_x86_64.xml | 1 -
 tests/qemucapabilitiesdata/caps_8.1.0_s390x.xml  | 1 -
 tests/qemucapabilitiesdata/caps_8.1.0_x86_64.xml | 1 -
 tests/qemucapabilitiesdata/caps_8.2.0_aarch64.xml| 1 -
 tests/qemucapabilitiesdata/caps_8.2.0_armv7l.xml | 1 -
 tests/qemucapabilitiesdata/caps_8.2.0_loongarch64.xml| 1 -
 tests/qemucapabilitiesdata/caps_8.2.0_s390x.xml  | 1 -
 tests/qemucapabilitiesdata/caps_8.2.0_x86_64.xml | 1 -
 tests/qemucapabilitiesdata/caps_9.0.0_sparc.xml  | 1 -
 tests/qemucapabilitiesdata/caps_9.0.0_x86_64.xml | 1 -
 tests/qemucapabilitiesdata/caps_9.1.0_riscv64.xml| 1 -
 tests/qemucapabilitiesdata/caps_9.1.0_s390x.xml  | 1 -
 tests/qemucapabilitiesdata/caps_9.1.0_x86_64.xml | 1 -
 tests/qemucapabilitiesdata/caps_9.2.0_aarch64+hvf.xml| 1 -
 tests/qemucapabilitiesdata/caps_9.2.0_s390x.xml  | 1 -
 tests/qemucapabilitiesdata/caps_9.2.0_x86_64+amdsev.xml  | 1 -
 tests/qemucapabilitiesdata/caps_9.2.0_x86_64.xml | 1 -
 31 files changed, 3 insertions(+), 34 deletions(-)

diff --git a/src/qemu/qemu_capabilities.c b/src/qemu/qemu_capabilities.c
index 117648f03f..4e292053c9 100644
--- a/src/qemu/qemu_capabilities.c
+++ b/src/qemu/qemu_capabilities.c
@@ -542,7 +542,7 @@ VIR_ENUM_IMPL(virQEMUCaps,
   "vhost-user-vga", /* QEMU_CAPS_DEVICE_VHOST_USER_VGA */

   /* 340 */
-  "incremental-backup", /* QEMU_CAPS_INCREMENTAL_BACKUP */
+  "incremental-backup", /* X_QEMU_CAPS_INCREMENTAL_BACKUP */
   "query-cpu-model-baseline", /* 
QEMU_CAPS_QUERY_CPU_MODEL_BASELINE */
   "query-cpu-model-comparison", /* 
QEMU_CAPS_QUERY_CPU_MODEL_COMPARISON */
   "ramfb", /* QEMU_CAPS_DEVICE_RAMFB */
@@ -5590,10 +5590,8 @@ virQEMUCapsInitQMPVersionCaps(virQEMUCaps *qemuCaps 
G_GNUC_UNUSED)
  * for libvirt to be able to drive it properly should be processed here.
  */
 void
-virQEMUCapsInitProcessCapsInterlock(virQEMUCaps *qemuCaps)
+virQEMUCapsInitProcessCapsInterlock(virQEMUCaps *qemuCaps G_GNUC_UNUSED)
 {
-if (virQEMUCapsGet(qemuCaps, 
QEMU_CAPS_MIGRATION_PARAM_BLOCK_BITMAP_MAPPING))
-virQEMUCapsSet(qemuCaps, QEMU_CAPS_INCREMENTAL_BACKUP);
 }


diff --git a/src/qemu/qemu_capabilities.h b/src/qemu/qemu_capabilities.h
index 4a865b13de..f775735c29 100644
--- a/src/qemu/qemu_capabilities.h
+++ b/src/qemu/qemu_capabilities.h
@@ -521,7 +521,7 @@ typedef enum { /* virQEMUCapsFlags grouping marker for 
syntax-check */
 QEMU_CAPS_DEVICE_VHOST_USER_VGA, /* -device vhost-user-vga */

 /* 340 */
-QEMU_CAPS_INCREMENTAL_BACKUP, /* incremental backup is supported */
+X_QEMU_CAPS_INCREMENTAL_BACKUP, /* incremental backup is supported */
 QEMU_CAPS_QUERY_CPU_MODEL_BASELINE, /* qmp query-cpu-model-baseline */
 QEMU_CAPS_QUERY_CPU_MODEL_COMPARISON, /* qmp query-cpu-model-comparison */
 QEMU_CAPS_DEVICE_RAMFB, /* -device ramfb */
diff --git a/tests/qemucapabilitiesdata/caps_10.0.0_s390x.xml 
b/tests/qemucapabilitiesdata/caps_10.0.0_s390x.xml
index 2dede610ef..33be470dce 100644
--- a/tests/qemucapabilitiesdata/caps_10.0.0_s390x.xml
+++ b/tests/qemucapabilitiesdata/caps_10.0.0_s390x.xml
@@ -60,7 +60,6 @@
   
   
   
-  
   
   
   
diff --git a/tests/qemucapabilitiesdata/caps_10.0.0_x86_64+amdsev.xml 
b/tests/qemucapabilitiesdata/caps_10.0.0_x86_64+amdsev.xml
index 29f82b8a0c..8a2db4b408 100644
--- a/tests/qemucapabilitiesdata/caps_10.0.0_x86_64+amdsev.xml
+++ b/tests/qemucapabilitiesdata/caps_10.0.0_x86_64+amdsev.xml
@@ -117,7 +117,6 @@
   
   
   
-  
   
   
   
diff --git a/tests/qemucapabilitiesdata/caps_10.0.0_x86_64.xml 
b/tests/qemucapabilitiesdata/caps_10.0.0_x86_64.xml
index d26fd44571..4bc385a402 100644
--- a/tests/qemucapabilitiesdata/caps_10.0.0_x86_64.xml
+++ b/tests/qemucapabilitiesdata/caps_10.0.0_x86_64.xml
@@ -117,7 +117,6 @@
   
   
   
-  
   
   
   
diff --git a/tests/qemuca

[PATCH 08/17] qemu: capabilities: Retire QEMU_CAPS_QMP_QUERY_NAMED_BLOCK_NODES_FLAT

2025-03-12 Thread Peter Krempa
The capability is no longer used as all qemus already support the
feature.

Signed-off-by: Peter Krempa 
---
 src/qemu/qemu_capabilities.c | 3 +--
 src/qemu/qemu_capabilities.h | 2 +-
 tests/qemucapabilitiesdata/caps_10.0.0_s390x.xml | 1 -
 tests/qemucapabilitiesdata/caps_10.0.0_x86_64+amdsev.xml | 1 -
 tests/qemucapabilitiesdata/caps_10.0.0_x86_64.xml| 1 -
 tests/qemucapabilitiesdata/caps_6.2.0_ppc64.xml  | 1 -
 tests/qemucapabilitiesdata/caps_6.2.0_x86_64.xml | 1 -
 tests/qemucapabilitiesdata/caps_7.0.0_ppc64.xml  | 1 -
 tests/qemucapabilitiesdata/caps_7.0.0_x86_64.xml | 1 -
 tests/qemucapabilitiesdata/caps_7.1.0_ppc64.xml  | 1 -
 tests/qemucapabilitiesdata/caps_7.1.0_x86_64.xml | 1 -
 tests/qemucapabilitiesdata/caps_7.2.0_ppc.xml| 1 -
 tests/qemucapabilitiesdata/caps_7.2.0_x86_64+hvf.xml | 1 -
 tests/qemucapabilitiesdata/caps_7.2.0_x86_64.xml | 1 -
 tests/qemucapabilitiesdata/caps_8.0.0_x86_64.xml | 1 -
 tests/qemucapabilitiesdata/caps_8.1.0_s390x.xml  | 1 -
 tests/qemucapabilitiesdata/caps_8.1.0_x86_64.xml | 1 -
 tests/qemucapabilitiesdata/caps_8.2.0_aarch64.xml| 1 -
 tests/qemucapabilitiesdata/caps_8.2.0_armv7l.xml | 1 -
 tests/qemucapabilitiesdata/caps_8.2.0_loongarch64.xml| 1 -
 tests/qemucapabilitiesdata/caps_8.2.0_s390x.xml  | 1 -
 tests/qemucapabilitiesdata/caps_8.2.0_x86_64.xml | 1 -
 tests/qemucapabilitiesdata/caps_9.0.0_sparc.xml  | 1 -
 tests/qemucapabilitiesdata/caps_9.0.0_x86_64.xml | 1 -
 tests/qemucapabilitiesdata/caps_9.1.0_riscv64.xml| 1 -
 tests/qemucapabilitiesdata/caps_9.1.0_s390x.xml  | 1 -
 tests/qemucapabilitiesdata/caps_9.1.0_x86_64.xml | 1 -
 tests/qemucapabilitiesdata/caps_9.2.0_aarch64+hvf.xml| 1 -
 tests/qemucapabilitiesdata/caps_9.2.0_s390x.xml  | 1 -
 tests/qemucapabilitiesdata/caps_9.2.0_x86_64+amdsev.xml  | 1 -
 tests/qemucapabilitiesdata/caps_9.2.0_x86_64.xml | 1 -
 31 files changed, 2 insertions(+), 32 deletions(-)

diff --git a/src/qemu/qemu_capabilities.c b/src/qemu/qemu_capabilities.c
index b6eb923763..f0c48bb2be 100644
--- a/src/qemu/qemu_capabilities.c
+++ b/src/qemu/qemu_capabilities.c
@@ -564,7 +564,7 @@ VIR_ENUM_IMPL(virQEMUCaps,

   /* 355 */
   "vhost-user-fs", /* QEMU_CAPS_DEVICE_VHOST_USER_FS */
-  "query-named-block-nodes.flat", /* 
QEMU_CAPS_QMP_QUERY_NAMED_BLOCK_NODES_FLAT */
+  "query-named-block-nodes.flat", /* 
X_QEMU_CAPS_QMP_QUERY_NAMED_BLOCK_NODES_FLAT */
   "blockdev-snapshot.allow-write-only-overlay", /* 
QEMU_CAPS_BLOCKDEV_SNAPSHOT_ALLOW_WRITE_ONLY */
   "blockdev-reopen", /* QEMU_CAPS_BLOCKDEV_REOPEN */
   "storage.werror", /* X_QEMU_CAPS_STORAGE_WERROR */
@@ -1599,7 +1599,6 @@ static struct virQEMUCapsStringFlags 
virQEMUCapsQMPSchemaQueries[] = {
 { "query-hotpluggable-cpus/ret-type/props/die-id", QEMU_CAPS_SMP_DIES },
 { "query-hotpluggable-cpus/ret-type/props/cluster-id", 
QEMU_CAPS_SMP_CLUSTERS },
 { "query-migrate/ret-type/blocked-reasons", 
QEMU_CAPS_MIGRATION_BLOCKED_REASONS },
-{ "query-named-block-nodes/arg-type/flat", 
QEMU_CAPS_QMP_QUERY_NAMED_BLOCK_NODES_FLAT },
 { "screendump/arg-type/device", QEMU_CAPS_SCREENDUMP_DEVICE },
 { "screendump/arg-type/format/^png", QEMU_CAPS_SCREENSHOT_FORMAT_PNG },
 { "set-numa-node/arg-type/+hmat-lb", QEMU_CAPS_NUMA_HMAT },
diff --git a/src/qemu/qemu_capabilities.h b/src/qemu/qemu_capabilities.h
index 4e247193e3..1fedfdaeb1 100644
--- a/src/qemu/qemu_capabilities.h
+++ b/src/qemu/qemu_capabilities.h
@@ -543,7 +543,7 @@ typedef enum { /* virQEMUCapsFlags grouping marker for 
syntax-check */

 /* 355 */
 QEMU_CAPS_DEVICE_VHOST_USER_FS, /* -device vhost-user-fs */
-QEMU_CAPS_QMP_QUERY_NAMED_BLOCK_NODES_FLAT, /* query-named-block-nodes 
supports the 'flat' option */
+X_QEMU_CAPS_QMP_QUERY_NAMED_BLOCK_NODES_FLAT, /* query-named-block-nodes 
supports the 'flat' option */
 QEMU_CAPS_BLOCKDEV_SNAPSHOT_ALLOW_WRITE_ONLY, /* blockdev-snapshot has the 
'allow-write-only-overlay' feature */
 QEMU_CAPS_BLOCKDEV_REOPEN, /* 'blockdev-reopen' qmp command is supported */
 X_QEMU_CAPS_STORAGE_WERROR, /* virtio-blk,scsi-hd.werror */
diff --git a/tests/qemucapabilitiesdata/caps_10.0.0_s390x.xml 
b/tests/qemucapabilitiesdata/caps_10.0.0_s390x.xml
index 041b480b21..a35578fab4 100644
--- a/tests/qemucapabilitiesdata/caps_10.0.0_s390x.xml
+++ b/tests/qemucapabilitiesdata/caps_10.0.0_s390x.xml
@@ -67,7 +67,6 @@
   
   
   
-  
   
   
   
diff --git a/tests/qemucapabilitiesdata/caps_10.0.0_x86_64+amdsev.xml 
b/tests/qemucapabilitiesdata/caps_10.0.0_x86_64+amdsev.xml
index 3adef15db3..6be2a75358 100644
--- a/tests/qemucapabilitiesdata/caps_10.0.0_x86_64+amdsev.xml
+++ b/tests/qemucapabilitiesdata/caps_10.0.0_x86_64+amdsev.xml
@@ -

Re: [PATCH 04/17] qemu: monitor: Drop support for extra wrapper for 'object_add'

2025-03-12 Thread Daniel P . Berrangé
On Wed, Mar 12, 2025 at 05:31:14PM +0100, Peter Krempa wrote:
> The QAPIfication of objects removed the extra warapper object which we

s/warapper/wrapper/

> were adding in the monitor code to simplify the other callers.
> 
> Now that we support only qemus which don't require this we can drop the
> support code.

Reviewed-by: Daniel P. Berrangé 


With regards,
Daniel
-- 
|: https://berrange.com  -o-https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o-https://fstop138.berrange.com :|
|: https://entangle-photo.org-o-https://www.instagram.com/dberrange :|


[PATCH 03/17] qemuBuildObjectCommandlineFromJSON: Assume all qemus support QEMU_CAPS_OBJECT_JSON

2025-03-12 Thread Peter Krempa
'-object' was qapified (meaning it supports JSON props) in qemu-6.0,
thus now that we require qemu-6.2 we can drop the compatibility code.

Signed-off-by: Peter Krempa 
---
 src/qemu/qemu_command.c | 19 +++
 1 file changed, 3 insertions(+), 16 deletions(-)

diff --git a/src/qemu/qemu_command.c b/src/qemu/qemu_command.c
index 2b0e3dd53a..51e428e017 100644
--- a/src/qemu/qemu_command.c
+++ b/src/qemu/qemu_command.c
@@ -185,25 +185,12 @@ qemuOnOffAuto(virTristateSwitch s)
 static int
 qemuBuildObjectCommandlineFromJSON(virCommand *cmd,
virJSONValue *props,
-   virQEMUCaps *qemuCaps)
+   virQEMUCaps *qemuCaps G_GNUC_UNUSED)
 {
 g_autofree char *arg = NULL;

-if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_OBJECT_JSON)) {
-if (!(arg = virJSONValueToString(props, false)))
-return -1;
-} else {
-const char *type = virJSONValueObjectGetString(props, "qom-type");
-g_auto(virBuffer) buf = VIR_BUFFER_INITIALIZER;
-
-virBufferAsprintf(&buf, "%s,", type);
-
-if (virQEMUBuildCommandLineJSON(props, &buf, "qom-type",
-
virQEMUBuildCommandLineJSONArrayBitmap) < 0)
-return -1;
-
-arg = virBufferContentAndReset(&buf);
-}
+if (!(arg = virJSONValueToString(props, false)))
+return -1;

 virCommandAddArgList(cmd, "-object", arg, NULL);
 return 0;
-- 
2.48.1
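For context, the two '-object' syntaxes involved — the legacy comma-separated form the removed compatibility path produced, and the JSON form now emitted unconditionally. The backend type and values here are illustrative, not taken from the patch:

```text
# legacy comma-separated syntax (pre-QAPI):
-object memory-backend-ram,id=mem0,size=1073741824

# QAPIfied JSON syntax, accepted since qemu-6.0 (quote it for the shell):
-object {"qom-type":"memory-backend-ram","id":"mem0","size":1073741824}
```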


[PATCH 02/17] qemu: capabilities: Retire QEMU_CAPS_COMPAT_DEPRECATED

2025-03-12 Thread Peter Krempa
The capability always exists in qemu and is no longer checked.

Signed-off-by: Peter Krempa 
---
 src/qemu/qemu_capabilities.c | 9 +
 src/qemu/qemu_capabilities.h | 2 +-
 tests/qemucapabilitiesdata/caps_10.0.0_s390x.xml | 1 -
 tests/qemucapabilitiesdata/caps_10.0.0_x86_64+amdsev.xml | 1 -
 tests/qemucapabilitiesdata/caps_10.0.0_x86_64.xml| 1 -
 tests/qemucapabilitiesdata/caps_6.2.0_ppc64.xml  | 1 -
 tests/qemucapabilitiesdata/caps_6.2.0_x86_64.xml | 1 -
 tests/qemucapabilitiesdata/caps_7.0.0_ppc64.xml  | 1 -
 tests/qemucapabilitiesdata/caps_7.0.0_x86_64.xml | 1 -
 tests/qemucapabilitiesdata/caps_7.1.0_ppc64.xml  | 1 -
 tests/qemucapabilitiesdata/caps_7.1.0_x86_64.xml | 1 -
 tests/qemucapabilitiesdata/caps_7.2.0_ppc.xml| 1 -
 tests/qemucapabilitiesdata/caps_7.2.0_x86_64+hvf.xml | 1 -
 tests/qemucapabilitiesdata/caps_7.2.0_x86_64.xml | 1 -
 tests/qemucapabilitiesdata/caps_8.0.0_x86_64.xml | 1 -
 tests/qemucapabilitiesdata/caps_8.1.0_s390x.xml  | 1 -
 tests/qemucapabilitiesdata/caps_8.1.0_x86_64.xml | 1 -
 tests/qemucapabilitiesdata/caps_8.2.0_aarch64.xml| 1 -
 tests/qemucapabilitiesdata/caps_8.2.0_armv7l.xml | 1 -
 tests/qemucapabilitiesdata/caps_8.2.0_loongarch64.xml| 1 -
 tests/qemucapabilitiesdata/caps_8.2.0_s390x.xml  | 1 -
 tests/qemucapabilitiesdata/caps_8.2.0_x86_64.xml | 1 -
 tests/qemucapabilitiesdata/caps_9.0.0_sparc.xml  | 1 -
 tests/qemucapabilitiesdata/caps_9.0.0_x86_64.xml | 1 -
 tests/qemucapabilitiesdata/caps_9.1.0_riscv64.xml| 1 -
 tests/qemucapabilitiesdata/caps_9.1.0_s390x.xml  | 1 -
 tests/qemucapabilitiesdata/caps_9.1.0_x86_64.xml | 1 -
 tests/qemucapabilitiesdata/caps_9.2.0_aarch64+hvf.xml| 1 -
 tests/qemucapabilitiesdata/caps_9.2.0_s390x.xml  | 1 -
 tests/qemucapabilitiesdata/caps_9.2.0_x86_64+amdsev.xml  | 1 -
 tests/qemucapabilitiesdata/caps_9.2.0_x86_64.xml | 1 -
 31 files changed, 2 insertions(+), 38 deletions(-)

diff --git a/src/qemu/qemu_capabilities.c b/src/qemu/qemu_capabilities.c
index 1262d4b39d..df9e630fbf 100644
--- a/src/qemu/qemu_capabilities.c
+++ b/src/qemu/qemu_capabilities.c
@@ -626,7 +626,7 @@ VIR_ENUM_IMPL(virQEMUCaps,
   "rotation-rate", /* QEMU_CAPS_ROTATION_RATE */

   /* 400 */
-  "compat-deprecated", /* QEMU_CAPS_COMPAT_DEPRECATED */
+  "compat-deprecated", /* X_QEMU_CAPS_COMPAT_DEPRECATED */
   "acpi-index", /* QEMU_CAPS_ACPI_INDEX */
   "input-linux", /* QEMU_CAPS_INPUT_LINUX */
   "virtio-gpu-gl-pci", /* QEMU_CAPS_VIRTIO_GPU_GL_PCI */
@@ -5599,13 +5599,6 @@ virQEMUCapsInitProcessCapsInterlock(virQEMUCaps 
*qemuCaps)
 if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_BLOCKDEV_REOPEN) &&
 virQEMUCapsGet(qemuCaps, 
QEMU_CAPS_MIGRATION_PARAM_BLOCK_BITMAP_MAPPING))
 virQEMUCapsSet(qemuCaps, QEMU_CAPS_INCREMENTAL_BACKUP);
-
-/* The -compat qemu command line argument is implemented using a newer
- * method which doesn't show up in query-command-line-options. As we'll use
- * it only for development and testing purposes we can base the capability
- * on a not entirely related witness. */
-if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_OBJECT_JSON))
-virQEMUCapsSet(qemuCaps, QEMU_CAPS_COMPAT_DEPRECATED);
 }


diff --git a/src/qemu/qemu_capabilities.h b/src/qemu/qemu_capabilities.h
index 840cb97dbe..d4e5be6918 100644
--- a/src/qemu/qemu_capabilities.h
+++ b/src/qemu/qemu_capabilities.h
@@ -605,7 +605,7 @@ typedef enum { /* virQEMUCapsFlags grouping marker for 
syntax-check */
 QEMU_CAPS_ROTATION_RATE, /* scsi-disk / ide-drive rotation-rate prop */

 /* 400 */
-QEMU_CAPS_COMPAT_DEPRECATED, /* -compat deprecated-(input|output) is 
supported */
+X_QEMU_CAPS_COMPAT_DEPRECATED, /* -compat deprecated-(input|output) is 
supported */
 QEMU_CAPS_ACPI_INDEX, /* PCI device 'acpi-index' property */
 QEMU_CAPS_INPUT_LINUX, /* -object input-linux */
 QEMU_CAPS_VIRTIO_GPU_GL_PCI, /* -device virtio-gpu-gl-pci */
diff --git a/tests/qemucapabilitiesdata/caps_10.0.0_s390x.xml 
b/tests/qemucapabilitiesdata/caps_10.0.0_s390x.xml
index e0ad72d5d4..3d4c715396 100644
--- a/tests/qemucapabilitiesdata/caps_10.0.0_s390x.xml
+++ b/tests/qemucapabilitiesdata/caps_10.0.0_s390x.xml
@@ -86,7 +86,6 @@
   
   
   
-  
   
   
   
diff --git a/tests/qemucapabilitiesdata/caps_10.0.0_x86_64+amdsev.xml 
b/tests/qemucapabilitiesdata/caps_10.0.0_x86_64+amdsev.xml
index f8c3bbab2d..951e16dffb 100644
--- a/tests/qemucapabilitiesdata/caps_10.0.0_x86_64+amdsev.xml
+++ b/tests/qemucapabilitiesdata/caps_10.0.0_x86_64+amdsev.xml
@@ -150,7 +150,6 @@
   
   
   
-  
   
   
   
diff --git a/tests/qemucapabilitiesdata/caps_10.0.0_x86_64.xml 
b/tests/qemucapabilitiesdata/caps_10.0.0_x86_64.xml
index 3585d4bd8b..

[PATCH v2 04/22] hypervisor: custom shutdown actions for transient vs persistent VMs

2025-03-12 Thread Daniel P . Berrangé
It may be desirable to treat transient VMs differently from persistent
VMs. For example, while performing managed save on persistent VMs makes
sense, the same is not usually true of transient VMs, since by their
nature they will have no config to restore from.

This also lets us fix a long-standing problem with incorrectly
attempting to perform managed save on transient VMs.
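The per-VM filtering this introduces can be summarized as a small predicate. This is a sketch of the semantics only, using hypothetical names rather than the series' actual code:

```c
#include <assert.h>

/* Hypothetical mirror of virDomainDriverAutoShutdownScope: decides
 * whether a shutdown action configured with a given scope applies
 * to one particular VM. */
typedef enum {
    SCOPE_NONE,        /* action disabled */
    SCOPE_PERSISTENT,  /* only VMs with a persistent config */
    SCOPE_TRANSIENT,   /* only transient VMs */
    SCOPE_ALL,         /* every running VM */
} AutoShutdownScope;

int
scope_applies(AutoShutdownScope scope, int vm_is_transient)
{
    switch (scope) {
    case SCOPE_NONE:
        return 0;
    case SCOPE_PERSISTENT:
        return !vm_is_transient;
    case SCOPE_TRANSIENT:
        return vm_is_transient;
    case SCOPE_ALL:
    default:
        return 1;
    }
}
```

This mirrors the skip conditions in the diff below: a VM is skipped when it is transient and the scope is "persistent", or persistent and the scope is "transient".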

Signed-off-by: Daniel P. Berrangé 
---
 src/hypervisor/domain_driver.c | 62 +++---
 src/hypervisor/domain_driver.h | 18 --
 src/libvirt_private.syms   |  2 ++
 src/qemu/qemu_driver.c |  6 ++--
 4 files changed, 77 insertions(+), 11 deletions(-)

diff --git a/src/hypervisor/domain_driver.c b/src/hypervisor/domain_driver.c
index e625726c07..c510e1d2ae 100644
--- a/src/hypervisor/domain_driver.c
+++ b/src/hypervisor/domain_driver.c
@@ -35,6 +35,13 @@
 
 VIR_LOG_INIT("hypervisor.domain_driver");
 
+VIR_ENUM_IMPL(virDomainDriverAutoShutdownScope,
+  VIR_DOMAIN_DRIVER_AUTO_SHUTDOWN_SCOPE_LAST,
+  "none",
+  "persistent",
+  "transient",
+  "all");
+
 char *
 virDomainDriverGenerateRootHash(const char *drivername,
 const char *root)
@@ -721,9 +728,13 @@ 
virDomainDriverAutoShutdown(virDomainDriverAutoShutdownConfig *cfg)
 int numDomains = 0;
 size_t i;
 virDomainPtr *domains = NULL;
+g_autofree bool *transient = NULL;
 
-VIR_DEBUG("Run autoshutdown uri=%s trySave=%d tryShutdown=%d poweroff=%d 
waitShutdownSecs=%u",
-  cfg->uri, cfg->trySave, cfg->tryShutdown, cfg->poweroff,
+VIR_DEBUG("Run autoshutdown uri=%s trySave=%s tryShutdown=%s poweroff=%s 
waitShutdownSecs=%u",
+  cfg->uri,
+  virDomainDriverAutoShutdownScopeTypeToString(cfg->trySave),
+  virDomainDriverAutoShutdownScopeTypeToString(cfg->tryShutdown),
+  virDomainDriverAutoShutdownScopeTypeToString(cfg->poweroff),
   cfg->waitShutdownSecs);
 
 /*
@@ -740,6 +751,21 @@ 
virDomainDriverAutoShutdown(virDomainDriverAutoShutdownConfig *cfg)
 if (cfg->waitShutdownSecs <= 0)
 cfg->waitShutdownSecs = 30;
 
+/* Short-circuit if all actions are disabled */
+if (cfg->trySave == VIR_DOMAIN_DRIVER_AUTO_SHUTDOWN_SCOPE_NONE &&
+cfg->tryShutdown == VIR_DOMAIN_DRIVER_AUTO_SHUTDOWN_SCOPE_NONE &&
+cfg->poweroff == VIR_DOMAIN_DRIVER_AUTO_SHUTDOWN_SCOPE_NONE)
+return;
+
+if (cfg->trySave == VIR_DOMAIN_DRIVER_AUTO_SHUTDOWN_SCOPE_ALL ||
+cfg->trySave == VIR_DOMAIN_DRIVER_AUTO_SHUTDOWN_SCOPE_TRANSIENT) {
+/* virDomainManagedSave will return an error. We'll let
+ * the code run, since it'll just fall through to the next
+ * actions, but give a clear warning upfront */
+VIR_WARN("Managed save not supported for transient VMs");
+return;
+}
+
 if (!(conn = virConnectOpen(cfg->uri)))
 goto cleanup;
 
@@ -749,10 +775,22 @@ 
virDomainDriverAutoShutdown(virDomainDriverAutoShutdownConfig *cfg)
 goto cleanup;
 
 VIR_DEBUG("Auto shutdown with %d running domains", numDomains);
-if (cfg->trySave) {
+
+transient = g_new0(bool, numDomains);
+for (i = 0; i < numDomains; i++) {
+if (virDomainIsPersistent(domains[i]) == 0)
+transient[i] = true;
+}
+
+if (cfg->trySave != VIR_DOMAIN_DRIVER_AUTO_SHUTDOWN_SCOPE_NONE) {
 g_autofree unsigned int *flags = g_new0(unsigned int, numDomains);
 for (i = 0; i < numDomains; i++) {
 int state;
+
+if ((transient[i] && cfg->trySave == 
VIR_DOMAIN_DRIVER_AUTO_SHUTDOWN_SCOPE_PERSISTENT) ||
+(!transient[i] && cfg->trySave == 
VIR_DOMAIN_DRIVER_AUTO_SHUTDOWN_SCOPE_TRANSIENT))
+continue;
+
 /*
  * Pause all VMs to make them stop dirtying pages,
  * so save is quicker. We remember if any VMs were
@@ -781,11 +819,16 @@ 
virDomainDriverAutoShutdown(virDomainDriverAutoShutdownConfig *cfg)
 }
 }
 
-if (cfg->tryShutdown) {
+if (cfg->tryShutdown != VIR_DOMAIN_DRIVER_AUTO_SHUTDOWN_SCOPE_NONE) {
 GTimer *timer = NULL;
 for (i = 0; i < numDomains; i++) {
 if (domains[i] == NULL)
 continue;
+
+if ((transient[i] && cfg->tryShutdown == 
VIR_DOMAIN_DRIVER_AUTO_SHUTDOWN_SCOPE_PERSISTENT) ||
+(!transient[i] && cfg->tryShutdown == 
VIR_DOMAIN_DRIVER_AUTO_SHUTDOWN_SCOPE_TRANSIENT))
+continue;
+
 if (virDomainShutdown(domains[i]) < 0) {
 VIR_WARN("Unable to request graceful shutdown of '%s': %s",
  virDomainGetName(domains[i]),
@@ -801,6 +844,10 @@ 
virDomainDriverAutoShutdown(virDomainDriverAutoShutdownConfig *cfg)
 if (!domains[i])
 continue;
 
+if ((transient[i] && cfg->tryShutdown == 
VIR_DOMAIN_DRIVER_AU

[PATCH 09/17] qemu: Always assume support for QEMU_CAPS_BLOCKDEV_REOPEN

2025-03-12 Thread Peter Krempa
'blockdev-reopen' is supported since qemu-6.1, thus we can now remove
the interlocks.

Document the change to 'mirror' as this patch removes the last clue why
we overwrite the mirror's readonly state to false unconditionally.

Signed-off-by: Peter Krempa 
---
 src/qemu/qemu_blockjob.c   | 11 +--
 src/qemu/qemu_checkpoint.c |  6 ++
 src/qemu/qemu_driver.c | 11 +++
 3 files changed, 6 insertions(+), 22 deletions(-)

diff --git a/src/qemu/qemu_blockjob.c b/src/qemu/qemu_blockjob.c
index 3aef1ec285..be18f7b273 100644
--- a/src/qemu/qemu_blockjob.c
+++ b/src/qemu/qemu_blockjob.c
@@ -968,10 +968,6 @@ 
qemuBlockJobProcessEventCompletedCommitBitmaps(virDomainObj *vm,
 g_autoptr(virJSONValue) actions = NULL;
 bool active = job->type == QEMU_BLOCKJOB_TYPE_ACTIVE_COMMIT;

-if (!active &&
-!virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_BLOCKDEV_REOPEN))
-return 0;
-
 if (!(blockNamedNodeData = qemuBlockGetNamedNodeData(vm, asyncJob)))
 return -1;

@@ -1205,9 +1201,6 @@ qemuBlockJobProcessEventCompletedCopyBitmaps(virDomainObj 
*vm,
 g_autoptr(virJSONValue) actions = NULL;
 bool shallow = job->jobflags & VIR_DOMAIN_BLOCK_COPY_SHALLOW;

-if (!virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_BLOCKDEV_REOPEN))
-return 0;
-
 if (!(blockNamedNodeData = qemuBlockGetNamedNodeData(vm, asyncJob)))
 return -1;

@@ -1237,7 +1230,6 @@ qemuBlockJobProcessEventConcludedCopyPivot(virQEMUDriver 
*driver,
qemuBlockJobData *job,
virDomainAsyncJob asyncJob)
 {
-qemuDomainObjPrivate *priv = vm->privateData;
 g_autoptr(virStorageSource) src = NULL;

 VIR_DEBUG("copy job '%s' on VM '%s' pivoted", job->name, vm->def->name);
@@ -1257,8 +1249,7 @@ qemuBlockJobProcessEventConcludedCopyPivot(virQEMUDriver 
*driver,
 !virStorageSourceIsBacking(job->disk->mirror->backingStore))
 job->disk->mirror->backingStore = 
g_steal_pointer(&job->disk->src->backingStore);

-if (job->disk->src->readonly &&
-virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_BLOCKDEV_REOPEN))
+if (job->disk->src->readonly)
 ignore_value(qemuBlockReopenReadOnly(vm, job->disk->mirror, asyncJob));

 qemuBlockJobRewriteConfigDiskSource(vm, job->disk, job->disk->mirror);
diff --git a/src/qemu/qemu_checkpoint.c b/src/qemu/qemu_checkpoint.c
index ca58da8fcb..b05aaa246e 100644
--- a/src/qemu/qemu_checkpoint.c
+++ b/src/qemu/qemu_checkpoint.c
@@ -225,8 +225,7 @@ qemuCheckpointDiscardBitmaps(virDomainObj *vm,
false, false, false) < 0)
 goto relabel;

-if (virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_BLOCKDEV_REOPEN) &&
-qemuBlockReopenReadWrite(vm, src, VIR_ASYNC_JOB_NONE) < 0)
+if (qemuBlockReopenReadWrite(vm, src, VIR_ASYNC_JOB_NONE) < 0)
 goto relabel;

 relabelimages = g_slist_prepend(relabelimages, src);
@@ -240,8 +239,7 @@ qemuCheckpointDiscardBitmaps(virDomainObj *vm,
 for (next = relabelimages; next; next = next->next) {
 virStorageSource *src = next->data;

-if (virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_BLOCKDEV_REOPEN))
-ignore_value(qemuBlockReopenReadOnly(vm, src, VIR_ASYNC_JOB_NONE));
+ignore_value(qemuBlockReopenReadOnly(vm, src, VIR_ASYNC_JOB_NONE));

 ignore_value(qemuDomainStorageSourceAccessAllow(driver, vm, src,
 true, false, false));
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 2364b4d312..ef731cb072 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -14260,14 +14260,9 @@ qemuDomainBlockCopyCommon(virDomainObj *vm,
  keepParentLabel) < 0)
 goto endjob;

-if (mirror->readonly) {
-if (!virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_BLOCKDEV_REOPEN)) {
-virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s",
-   _("copy of read-only disks is not supported"));
-goto endjob;
-}
-mirror->readonly = false;
-}
+/* In case we're copying a read-only disk we need to open the mirror image
+ * as read-write for the duration of the copy job */
+mirror->readonly = false;

 /* we must initialize XML-provided chain prior to detecting to keep 
semantics
  * with VM startup */
-- 
2.48.1


[PATCH v2 05/22] qemu: support automatic VM managed save in system daemon

2025-03-12 Thread Daniel P . Berrangé
Currently automatic VM managed save is only performed in session
daemons, on desktop session close, or host OS shutdown request.

With this change it is possible to control shutdown behaviour for
all daemons. A recommended setup might be:

  auto_shutdown_try_save = "persistent"
  auto_shutdown_try_shutdown = "all"
  auto_shutdown_poweroff = "all"

Each setting accepts 'none', 'persistent', 'transient', and 'all'
to control what types of guest it applies to.

For historical compatibility, for the system daemon, the settings
currently default to:

  auto_shutdown_try_save = "none"
  auto_shutdown_try_shutdown = "none"
  auto_shutdown_poweroff = "none"

while for the session daemon they currently default to

  auto_shutdown_try_save = "persistent"
  auto_shutdown_try_shutdown = "none"
  auto_shutdown_poweroff = "none"

The system daemon settings should NOT be enabled if the traditional
libvirt-guests.service is already enabled.

Signed-off-by: Daniel P. Berrangé 
---
 src/qemu/libvirtd_qemu.aug |  3 ++
 src/qemu/qemu.conf.in  | 42 +++
 src/qemu/qemu_conf.c   | 65 ++
 src/qemu/qemu_conf.h   |  4 ++
 src/qemu/qemu_driver.c |  9 ++---
 src/qemu/test_libvirtd_qemu.aug.in |  3 ++
 6 files changed, 121 insertions(+), 5 deletions(-)

diff --git a/src/qemu/libvirtd_qemu.aug b/src/qemu/libvirtd_qemu.aug
index 642093c40b..605604a01a 100644
--- a/src/qemu/libvirtd_qemu.aug
+++ b/src/qemu/libvirtd_qemu.aug
@@ -98,6 +98,9 @@ module Libvirtd_qemu =
  | bool_entry "auto_dump_bypass_cache"
  | bool_entry "auto_start_bypass_cache"
  | int_entry "auto_start_delay"
+ | str_entry "auto_shutdown_try_save"
+ | str_entry "auto_shutdown_try_shutdown"
+ | str_entry "auto_shutdown_poweroff"
 
let process_entry = str_entry "hugetlbfs_mount"
  | str_entry "bridge_helper"
diff --git a/src/qemu/qemu.conf.in b/src/qemu/qemu.conf.in
index 31172303dc..a674e11388 100644
--- a/src/qemu/qemu.conf.in
+++ b/src/qemu/qemu.conf.in
@@ -639,6 +639,48 @@
 #
 #auto_start_delay = 0
 
+# The settings for auto shutdown actions accept one of
+# four possible options:
+#
+# * "none" - do not try to save any running VMs
+# * "persistent" - only try to save persistent running VMs
+# * "transient" - only try to save transient running VMs
+# * "all" - try to save all running VMs
+#
+# Whether to perform managed save of running VMs if a host OS
+# shutdown is requested (system/session daemons), or the desktop
+# session terminates (session daemon only).
+#
+# Defaults to "persistent" for session daemons and "none"
+# for system daemons. The values "all" and "transient" are
+# not permitted for this setting, since managed save is not
+# implemented for transient VMs.
+#
+# If 'libvirt-guests.service' is enabled, then this must be
+# set to 'none' for system daemons to avoid dueling actions
+#auto_shutdown_try_save = "persistent"
+
+# As above, but with a graceful shutdown action instead of
+# managed save. If managed save is enabled, shutdown will
+# be tried only on failure to perform managed save.
+#
+# Defaults to "none"
+#
+# If 'libvirt-guests.service' is enabled, then this must be
+# set to 'none' for system daemons to avoid dueling actions
+#auto_shutdown_try_shutdown = "none"
+
+# As above, but with a forced poweroff instead of managed
+# save. If managed save or graceful shutdown are enabled,
+# forced poweroff will be tried only on failure of the
+# other options.
+#
+# Defaults to "none"
+#
+# If 'libvirt-guests.service' is enabled, then this must be
+# set to 'none' for system daemons to avoid dueling actions
+#auto_shutdown_poweroff = "none"
+
 # If provided by the host and a hugetlbfs mount point is configured,
 # a guest may request huge page backing.  When this mount point is
 # unspecified here, determination of a host mount point in /proc/mounts
diff --git a/src/qemu/qemu_conf.c b/src/qemu/qemu_conf.c
index b376841388..8554558201 100644
--- a/src/qemu/qemu_conf.c
+++ b/src/qemu/qemu_conf.c
@@ -304,6 +304,21 @@ virQEMUDriverConfig *virQEMUDriverConfigNew(bool 
privileged,
 cfg->dumpGuestCore = true;
 #endif
 
+if (privileged) {
+/*
+ * Defer to libvirt-guests.service.
+ *
+ * XXX, or query if libvirt-guests.service is enabled perhaps ?
+ */
+cfg->autoShutdownTrySave = VIR_DOMAIN_DRIVER_AUTO_SHUTDOWN_SCOPE_NONE;
+cfg->autoShutdownTryShutdown = 
VIR_DOMAIN_DRIVER_AUTO_SHUTDOWN_SCOPE_NONE;
+cfg->autoShutdownPoweroff = VIR_DOMAIN_DRIVER_AUTO_SHUTDOWN_SCOPE_NONE;
+} else {
+cfg->autoShutdownTrySave = 
VIR_DOMAIN_DRIVER_AUTO_SHUTDOWN_SCOPE_PERSISTENT;
+cfg->autoShutdownTryShutdown = 
VIR_DOMAIN_DRIVER_AUTO_SHUTDOWN_SCOPE_NONE;
+cfg->autoShutdownPoweroff = VIR_DOMAIN_DRIVER_AUTO_SHUTDOWN_SCOPE_NONE;
+}
+
 return g_steal_pointer(&cfg);

[PATCH v2 00/22] integrate auto-shutdown of VMs with daemons

2025-03-12 Thread Daniel P . Berrangé
This series starts the work needed to obsolete the libvirt-guests.sh
script which has grown a surprisingly large amount of functionality.

Currently the virt daemons will acquire inhibitors to delay OS shutdown
when VMs are running. The libvirt-guests.service unit can be used to
call libvirt-guests.sh to shutdown running VMs on system shutdown.

This split is a bad architecture because libvirt-guests.service will
only run once the system has decided to initiate the shutdown sequence.

When the user requests a shutdown while inhibitors are present, logind
will emit a "PrepareForShutdown" signal over dbus. Applications are
supposed to respond to this by preserving state & releasing their
inhibitors, which in turn allows shutdown to be initiated.
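For reference, the logind signal in question is emitted on the system bus and can be matched with a rule like the following (match-rule fragment, assuming systemd-logind is in use):

```text
type='signal',interface='org.freedesktop.login1.Manager',member='PrepareForShutdown'
```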

The remote daemon already has support for listening for the
"PrepareForShutdown" signal, but only does this for session instances,
not system instances.

This series essentially takes that logic and expands it to run in the
system instances too, thus conceptually making libvirt-guests.service
obsolete.

It is slightly more complicated than that though for many reasons...

Saving running VMs can take a very long time. The inhibitor delay
can be as low as 5 seconds, and when killing a service, systemd may
not wait very long for it to terminate. libvirt-guests.service deals
with this by setting TimeoutStopSec=0 to make systemd wait forever.

This is undesirable to set in libvirtd.service though, as we would
like systemd to kill the daemon aggressively if it hangs. The series
thus uses the notification protocol to request systemd give it more
time to shutdown, as long as we're in the phase of saving running
VMs. A bug in this code will still result in systemd waiting forever,
which is no different from libvirt-guests.service, but a bug in any
other part of the libvirt daemon shutdown code will result in systemd
killing us.
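As an illustration of the mechanism being referred to — the sd_notify(3) datagram protocol — a daemon can ask systemd for more stop time roughly like this. This is a minimal sketch of the wire protocol only, not the helper used by the libvirt daemons:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

/*
 * Send one notification datagram to the socket named in $NOTIFY_SOCKET.
 * A message such as "EXTEND_TIMEOUT_USEC=30000000" asks systemd to
 * postpone killing the service by that many microseconds.
 * Returns 0 on success, -1 on failure or if no socket is configured.
 */
int
notify_systemd(const char *msg)
{
    const char *addr = getenv("NOTIFY_SOCKET");
    struct sockaddr_un sa;
    size_t len;
    int fd;
    int ret = -1;

    if (!addr || strlen(addr) >= sizeof(sa.sun_path))
        return -1;

    memset(&sa, 0, sizeof(sa));
    sa.sun_family = AF_UNIX;
    strcpy(sa.sun_path, addr);
    if (sa.sun_path[0] == '@')          /* Linux abstract namespace */
        sa.sun_path[0] = '\0';

    if ((fd = socket(AF_UNIX, SOCK_DGRAM, 0)) < 0)
        return -1;

    len = strlen(msg);
    if (sendto(fd, msg, len, 0,
               (struct sockaddr *)&sa, sizeof(sa)) == (ssize_t)len)
        ret = 0;

    close(fd);
    return ret;
}
```

While the save phase is in progress the daemon would repeat such a message periodically; once saving finishes, normal stop-timeout handling resumes.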

The existing logic for saving VMs in the session daemons had many
feature gaps compared to libvirt-guests.sh. Thus there is code to
add support

 * Requesting graceful OS shutdown if managed save failed
 * Force poweroff of VMs if no other action worked
 * Optionally enabling/disabling use of managed save,
   graceful shutdown and force poweroff, which is more flexible
   than ON_SHUTDOWN=nnn, as we can try the whole sequence of
   options
 * Ability to bypass cache in managed save
 * Support for one-time autostart of VMs as an official API

To aid in testing this logic, virt-admin gains a new command

 'virt-admin daemon-shutdown --preserve'

All this new functionality is wired up into the QEMU driver, and is
made easily accessible to other hypervisor drivers, so would easily
be extendable to Xen, CH, LXC drivers, but this is not done in this
series. IOW, libvirt-guests.service is not yet fully obsolete.

The new functionality is also not enabled by default for the system
daemon, it requires explicit admin changes to /etc/libvirt/qemu.conf
to enable it. This is because it would clash with execution of the
libvirt-guests.service if both were enabled.

It is highly desirable that we enable this by default though, so we
need to figure out an upgrade story wrt libvirt-guests.service.

The only libvirt-guests.sh features not implemented are:

 * PARALLEL_SHUTDOWN=nn.

   When doing a graceful shutdown we initiate it on every single VM
   at once, and then monitor progress of all of them in parallel.

 * SYNC_TIME=nn

   We make no attempt to sync guest time when restoring from
   managed save. This ought to be fixed.

Daniel P. Berrangé (22):
  hypervisor: move support for auto-shutdown out of QEMU driver
  remote: always invoke virStateStop for all daemons
  hypervisor: expand available shutdown actions
  hypervisor: custom shutdown actions for transient vs persistent VMs
  qemu: support automatic VM managed save in system daemon
  qemu: improve shutdown defaults for session daemon
  qemu: configurable delay for shutdown before poweroff
  hypervisor: support bypassing cache for managed save
  qemu: add config parameter to control auto-save bypass cache
  src: add new APIs for marking a domain to autostart once
  conf: implement support for autostart once feature
  hypervisor: wire up support for auto restore of running domains
  qemu: wire up support for once only autostart
  qemu: add config to control if auto-shutdown VMs are restored
  src: clarify semantics of the various virStateNNN methods
  rpc: rename virNetDaemonSetShutdownCallbacks
  rpc: move state stop into virNetDaemon class
  rpc: don't unconditionally quit after preserving state
  rpc: fix shutdown sequence when preserving state
  admin: add 'daemon-shutdown' command
  rpc: don't let systemd shutdown daemon while saving VMs
  hypervisor: emit systemd status & log messages while saving

 docs/manpages/virt-admin.rst|  13 ++
 include/libvirt/libvirt-admin.h |  13 ++
 include/libvirt/libvirt-domain.h|   4 +
 src/admin/admin_protocol.x  |  11 +-
 src/admin/admin_se

[PATCH v2 06/22] qemu: improve shutdown defaults for session daemon

2025-03-12 Thread Daniel P . Berrangé
Currently the session daemon will try a managed save on all VMs,
leaving them running if that fails.

This limits the managed save just to persistent VMs, as there will
usually not be any way to restore transient VMs later.

It also enables graceful shutdown and then forced poweroff, should
save fail for some reason.

These new defaults can be overridden in the config file if needed.

Reviewed-by: Peter Krempa 
Signed-off-by: Daniel P. Berrangé 
---
 src/qemu/qemu.conf.in  | 10 ++
 src/qemu/qemu_conf.c   |  4 ++--
 src/qemu/test_libvirtd_qemu.aug.in |  4 ++--
 3 files changed, 10 insertions(+), 8 deletions(-)

diff --git a/src/qemu/qemu.conf.in b/src/qemu/qemu.conf.in
index a674e11388..c9bebf1e7d 100644
--- a/src/qemu/qemu.conf.in
+++ b/src/qemu/qemu.conf.in
@@ -664,22 +664,24 @@
 # managed save. If managed save is enabled, shutdown will
 # be tried only on failure to perform managed save.
 #
-# Defaults to "none"
+# Defaults to "all" for session daemons and "none" for
+# system daemons
 #
 # If 'libvirt-guests.service' is enabled, then this must be
 # set to 'none' for system daemons to avoid dueling actions
-#auto_shutdown_try_shutdown = "none"
+#auto_shutdown_try_shutdown = "all"
 
 # As above, but with a forced poweroff instead of managed
 # save. If managed save or graceful shutdown are enabled,
 # forced poweroff will be tried only on failure of the
 # other options.
 #
-# Defaults to "none"
+# Defaults to "all" for session daemons and "none" for
+# system daemons.
 #
 # If 'libvirt-guests.service' is enabled, then this must be
 # set to 'none' for system daemons to avoid dueling actions
-#auto_shutdown_poweroff = "none"
+#auto_shutdown_poweroff = "all"
 
 # If provided by the host and a hugetlbfs mount point is configured,
 # a guest may request huge page backing.  When this mount point is
diff --git a/src/qemu/qemu_conf.c b/src/qemu/qemu_conf.c
index 8554558201..f069ed00b1 100644
--- a/src/qemu/qemu_conf.c
+++ b/src/qemu/qemu_conf.c
@@ -315,8 +315,8 @@ virQEMUDriverConfig *virQEMUDriverConfigNew(bool privileged,
 cfg->autoShutdownPoweroff = VIR_DOMAIN_DRIVER_AUTO_SHUTDOWN_SCOPE_NONE;
 } else {
 cfg->autoShutdownTrySave = 
VIR_DOMAIN_DRIVER_AUTO_SHUTDOWN_SCOPE_PERSISTENT;
-cfg->autoShutdownTryShutdown = 
VIR_DOMAIN_DRIVER_AUTO_SHUTDOWN_SCOPE_NONE;
-cfg->autoShutdownPoweroff = VIR_DOMAIN_DRIVER_AUTO_SHUTDOWN_SCOPE_NONE;
+cfg->autoShutdownTryShutdown = 
VIR_DOMAIN_DRIVER_AUTO_SHUTDOWN_SCOPE_ALL;
+cfg->autoShutdownPoweroff = VIR_DOMAIN_DRIVER_AUTO_SHUTDOWN_SCOPE_ALL;
 }
 
 return g_steal_pointer(&cfg);
diff --git a/src/qemu/test_libvirtd_qemu.aug.in 
b/src/qemu/test_libvirtd_qemu.aug.in
index bb216483a7..3bc8104d7a 100644
--- a/src/qemu/test_libvirtd_qemu.aug.in
+++ b/src/qemu/test_libvirtd_qemu.aug.in
@@ -77,8 +77,8 @@ module Test_libvirtd_qemu =
 { "auto_start_bypass_cache" = "0" }
 { "auto_start_delay" = "0" }
 { "auto_shutdown_try_save" = "persistent" }
-{ "auto_shutdown_try_shutdown" = "none" }
-{ "auto_shutdown_poweroff" = "none" }
+{ "auto_shutdown_try_shutdown" = "all" }
+{ "auto_shutdown_poweroff" = "all" }
 { "hugetlbfs_mount" = "/dev/hugepages" }
 { "bridge_helper" = "qemu-bridge-helper" }
 { "set_process_name" = "1" }
-- 
2.48.1


[PATCH v2 12/22] hypervisor: wire up support for auto restore of running domains

2025-03-12 Thread Daniel P . Berrangé
When performing auto-shutdown of running domains, there is now the
option to mark them as "autostart once",  so that their state is
restored on next boot. This applies on top of the traditional
autostart flag.

Reviewed-by: Peter Krempa 
Signed-off-by: Daniel P. Berrangé 
---
 src/hypervisor/domain_driver.c | 19 +--
 src/hypervisor/domain_driver.h |  1 +
 2 files changed, 18 insertions(+), 2 deletions(-)

diff --git a/src/hypervisor/domain_driver.c b/src/hypervisor/domain_driver.c
index bfba435ee0..1105d36388 100644
--- a/src/hypervisor/domain_driver.c
+++ b/src/hypervisor/domain_driver.c
@@ -738,12 +738,12 @@ 
virDomainDriverAutoShutdown(virDomainDriverAutoShutdownConfig *cfg)
 virDomainPtr *domains = NULL;
 g_autofree bool *transient = NULL;
 
-VIR_DEBUG("Run autoshutdown uri=%s trySave=%s tryShutdown=%s poweroff=%s 
waitShutdownSecs=%u saveBypassCache=%d",
+VIR_DEBUG("Run autoshutdown uri=%s trySave=%s tryShutdown=%s poweroff=%s 
waitShutdownSecs=%u saveBypassCache=%d autoRestore=%d",
   cfg->uri,
   virDomainDriverAutoShutdownScopeTypeToString(cfg->trySave),
   virDomainDriverAutoShutdownScopeTypeToString(cfg->tryShutdown),
   virDomainDriverAutoShutdownScopeTypeToString(cfg->poweroff),
-  cfg->waitShutdownSecs, cfg->saveBypassCache);
+  cfg->waitShutdownSecs, cfg->saveBypassCache, cfg->autoRestore);
 
 /*
  * Ideally guests will shutdown in a few seconds, but it would
@@ -788,6 +788,21 @@ 
virDomainDriverAutoShutdown(virDomainDriverAutoShutdownConfig *cfg)
 for (i = 0; i < numDomains; i++) {
 if (virDomainIsPersistent(domains[i]) == 0)
 transient[i] = true;
+
+if (cfg->autoRestore) {
+if (transient[i]) {
+VIR_DEBUG("Cannot auto-restore transient VM %s",
+  virDomainGetName(domains[i]));
+} else {
+VIR_DEBUG("Mark %s for autostart on next boot",
+  virDomainGetName(domains[i]));
+if (virDomainSetAutostartOnce(domains[i], 1) < 0) {
+VIR_WARN("Unable to mark domain '%s' for auto restore: %s",
+ virDomainGetName(domains[i]),
+ virGetLastErrorMessage());
+}
+}
+}
 }
 
 if (cfg->trySave != VIR_DOMAIN_DRIVER_AUTO_SHUTDOWN_SCOPE_NONE) {
diff --git a/src/hypervisor/domain_driver.h b/src/hypervisor/domain_driver.h
index fae316ee2d..d90466b942 100644
--- a/src/hypervisor/domain_driver.h
+++ b/src/hypervisor/domain_driver.h
@@ -113,6 +113,7 @@ typedef struct _virDomainDriverAutoShutdownConfig {
 * If 0 a default is used (currently 30 
secs)
 */
 bool saveBypassCache;
+bool autoRestore;
 } virDomainDriverAutoShutdownConfig;
 
 void virDomainDriverAutoShutdown(virDomainDriverAutoShutdownConfig *cfg);
-- 
2.48.1


Re: [PATCH 0/5] qemu: Introduce nvme disk emulation support

2025-03-12 Thread honglei . wang
> On Thu, Feb 20, 2025 at 11:35:01AM -, honglei.wang(a)smartx.com wrote:
> 
> The first approach can be rewritten to the second one and is only an
> easier to write way.  I would suggest we use the second one and we can
> add a functionality which will rewrite the first one to the second one
> upon definition.  To make it easier to unplug we could, theoretically,
> see if the disk is the only namespace of the controller and unplug that,
> but that's just a nice-to-have, I guess.
Yes, the latter is more general, and I agree that unifying to the second 
approach makes sense. I will submit another patch that only includes the second 
approach. As for hot-unplugging behavior, we can optimize it in the future if 
needed, as Martin mentioned.


Re: [PATCH] domainbackupxml2xml: Add test case with unix socket server for pull mode backup

2025-03-12 Thread Ján Tomko

On a Wednesday in 2025, Peter Krempa wrote:

While we show the example in the docs we don't have an example XML for
exrecrcising the parser/formatter and schema.



exercising


Signed-off-by: Peter Krempa 
---
.../backup-pull-unix.xml  | 22 ++
.../backup-pull-unix.xml  | 23 +++
tests/genericxml2xmltest.c|  1 +
3 files changed, 46 insertions(+)
create mode 100644 tests/domainbackupxml2xmlin/backup-pull-unix.xml
create mode 100644 tests/domainbackupxml2xmlout/backup-pull-unix.xml



Reviewed-by: Ján Tomko 

Jano


signature.asc
Description: PGP signature


[PATCH 0/2] Remove libnl checks specific to Linux

2025-03-12 Thread Akihiko Odaki
Supersedes: <20250308-require-v1-1-fe6850bf3...@daynix.com>
("[PATCH] util: Require libnl for Linux")

Based on previous discussions, this implements yet another approach to
fix builds without libnl for Linux.

Signed-off-by: Akihiko Odaki 
---
Akihiko Odaki (2):
  Remove libnl checks specific to Linux
  virnetlink: Remove stub functions

 src/libvirt_libnl.syms |  23 ++
 src/libvirt_private.syms   |  19 -
 src/meson.build|   6 ++
 src/remote/remote_daemon.c |  10 ++-
 src/util/meson.build   |   5 +-
 src/util/virarptable.c |   2 +-
 src/util/virnetdev.c   |  44 +-
 src/util/virnetdevbridge.c |  33 ++--
 src/util/virnetlink.c  | 196 +++--
 src/util/virnetlink.h  |  11 +--
 tests/virnetdevtest.c  |  10 +--
 11 files changed, 65 insertions(+), 294 deletions(-)
---
base-commit: e5299ddf86121d3c792ca271ffcb54900eb19dc3
change-id: 20250312-linux-30e6e1af51a5

Best regards,
-- 
Akihiko Odaki 


[PATCH 2/2] virnetlink: Remove stub functions

2025-03-12 Thread Akihiko Odaki
virnetlink provides stub functions when libnl is not available, but this
approach caused a few problems:

- Commit 582f0966f9b9e2148d8887d072364e2a91aed000 broke builds without
  libnl for Linux because virNetlinkBridgeVlanFilterSet() lacked a stub.

- A call to virNetlinkEventServiceStopAll() stub in
  src/remote/remote_daemon.c resulted in extra debug logs.

Instead of providing stub functions, let callers check the availability
of libnl. Callers are also replaced with stub functions in most cases,
so this approach requires less code and is less error-prone.

Signed-off-by: Akihiko Odaki 
---
 src/libvirt_libnl.syms |  23 ++
 src/libvirt_private.syms   |  19 -
 src/meson.build|   6 ++
 src/remote/remote_daemon.c |  10 ++-
 src/util/meson.build   |   5 +-
 src/util/virarptable.c |   2 +-
 src/util/virnetlink.c  | 196 +++--
 src/util/virnetlink.h  |  11 +--
 8 files changed, 55 insertions(+), 217 deletions(-)

diff --git a/src/libvirt_libnl.syms b/src/libvirt_libnl.syms
new file mode 100644
index 00..091169416b
--- /dev/null
+++ b/src/libvirt_libnl.syms
@@ -0,0 +1,23 @@
+# util/virnetlink.h
+virNetlinkCommand;
+virNetlinkDelLink;
+virNetlinkDumpCommand;
+virNetlinkDumpLink;
+virNetlinkEventAddClient;
+virNetlinkEventRemoveClient;
+virNetlinkEventServiceIsRunning;
+virNetlinkEventServiceLocalPid;
+virNetlinkEventServiceStart;
+virNetlinkEventServiceStop;
+virNetlinkEventServiceStopAll;
+virNetlinkGetErrorCode;
+virNetlinkGetNeighbor;
+virNetlinkNewLink;
+virNetlinkShutdown;
+virNetlinkStartup;
+
+
+# Let emacs know we want case-insensitive sorting
+# Local Variables:
+# sort-fold-case: t
+# End:
diff --git a/src/libvirt_private.syms b/src/libvirt_private.syms
index 30a9f806f0..0ce20291df 100644
--- a/src/libvirt_private.syms
+++ b/src/libvirt_private.syms
@@ -3022,25 +3022,6 @@ virNetDevVPortProfileOpTypeFromString;
 virNetDevVPortProfileOpTypeToString;
 
 
-# util/virnetlink.h
-virNetlinkCommand;
-virNetlinkDelLink;
-virNetlinkDumpCommand;
-virNetlinkDumpLink;
-virNetlinkEventAddClient;
-virNetlinkEventRemoveClient;
-virNetlinkEventServiceIsRunning;
-virNetlinkEventServiceLocalPid;
-virNetlinkEventServiceStart;
-virNetlinkEventServiceStop;
-virNetlinkEventServiceStopAll;
-virNetlinkGetErrorCode;
-virNetlinkGetNeighbor;
-virNetlinkNewLink;
-virNetlinkShutdown;
-virNetlinkStartup;
-
-
 # util/virnodesuspend.h
 virNodeSuspend;
 virNodeSuspendGetTargetMask;
diff --git a/src/meson.build b/src/meson.build
index 9413192a55..989d3113d9 100644
--- a/src/meson.build
+++ b/src/meson.build
@@ -131,6 +131,12 @@ else
   sym_files += 'libvirt_hyperv.syms'
 endif
 
+if conf.has('WITH_LIBNL')
+  used_sym_files += 'libvirt_libnl.syms'
+else
+  sym_files += 'libvirt_libnl.syms'
+endif
+
 # variables filled by subdirectories
 
 libvirt_libs = []
diff --git a/src/remote/remote_daemon.c b/src/remote/remote_daemon.c
index d90355c0d2..f62dc0a837 100644
--- a/src/remote/remote_daemon.c
+++ b/src/remote/remote_daemon.c
@@ -1021,10 +1021,12 @@ int main(int argc, char **argv) {
 }
 umask(old_umask);
 
+#ifdef WITH_LIBNL
 if (virNetlinkStartup() < 0) {
 ret = VIR_DAEMON_ERR_INIT;
 goto cleanup;
 }
+#endif
 
 if (!(dmn = virNetDaemonNew())) {
 ret = VIR_DAEMON_ERR_DRIVER;
@@ -1188,7 +1190,7 @@ int main(int argc, char **argv) {
 goto cleanup;
 }
 
-#if defined(__linux__) && defined(NETLINK_ROUTE)
+#if defined(WITH_LIBNL) && defined(NETLINK_ROUTE)
 /* Register the netlink event service for NETLINK_ROUTE */
 if (virNetlinkEventServiceStart(NETLINK_ROUTE, 0) < 0) {
 ret = VIR_DAEMON_ERR_NETWORK;
@@ -1196,7 +1198,7 @@ int main(int argc, char **argv) {
 }
 #endif
 
-#if defined(__linux__) && defined(NETLINK_KOBJECT_UEVENT)
+#if defined(WITH_LIBNL) && defined(NETLINK_KOBJECT_UEVENT)
 /* Register the netlink event service for NETLINK_KOBJECT_UEVENT */
 if (virNetlinkEventServiceStart(NETLINK_KOBJECT_UEVENT, 1) < 0) {
 ret = VIR_DAEMON_ERR_NETWORK;
@@ -1213,7 +1215,9 @@ int main(int argc, char **argv) {
 0, "shutdown", NULL, NULL);
 
  cleanup:
+#ifdef WITH_LIBNL
 virNetlinkEventServiceStopAll();
+#endif
 
 if (g_atomic_int_compare_and_exchange(&driversInitialized, 1, 0)) {
 /* NB: Possible issue with timing window between driversInitialized
@@ -1229,7 +1233,9 @@ int main(int argc, char **argv) {
 virObjectUnref(srv);
 virObjectUnref(dmn);
 
+#ifdef WITH_LIBNL
 virNetlinkShutdown();
+#endif
 
 if (pid_file_fd != -1)
 virPidFileReleasePath(pid_file, pid_file_fd);
diff --git a/src/util/meson.build b/src/util/meson.build
index 69ef49139a..992b0441c9 100644
--- a/src/util/meson.build
+++ b/src/util/meson.build
@@ -69,7 +69,6 @@ util_sources = [
   'virnetdevveth.c',
   'virnetdevvlan.c',
   'virnetdevvportprofile.c',
-  'virnetlink.c',
   'virnodesuspend.c',
   'virnuma.c',
   'virnvme.c',
@@ -112,6 +111,10 @@ util_sourc

[PATCH] domainbackupxml2xml: Add test case with unix socket server for pull mode backup

2025-03-12 Thread Peter Krempa
While we show the example in the docs we don't have an example XML for
exrecrcising the parser/formatter and schema.

Signed-off-by: Peter Krempa 
---
 .../backup-pull-unix.xml  | 22 ++
 .../backup-pull-unix.xml  | 23 +++
 tests/genericxml2xmltest.c|  1 +
 3 files changed, 46 insertions(+)
 create mode 100644 tests/domainbackupxml2xmlin/backup-pull-unix.xml
 create mode 100644 tests/domainbackupxml2xmlout/backup-pull-unix.xml

diff --git a/tests/domainbackupxml2xmlin/backup-pull-unix.xml b/tests/domainbackupxml2xmlin/backup-pull-unix.xml
new file mode 100644
index 00..cde9ceafc9
--- /dev/null
+++ b/tests/domainbackupxml2xmlin/backup-pull-unix.xml
@@ -0,0 +1,22 @@
+
+  1525889631
+  
+  
+
+  
+
+
+
+  
+
+
+  
+
+
+  
+
+
+  
+
+  
+
diff --git a/tests/domainbackupxml2xmlout/backup-pull-unix.xml b/tests/domainbackupxml2xmlout/backup-pull-unix.xml
new file mode 100644
index 00..811edfdf39
--- /dev/null
+++ b/tests/domainbackupxml2xmlout/backup-pull-unix.xml
@@ -0,0 +1,23 @@
+
+  1525889631
+  
+  
+
+  
+
+
+
+  
+
+
+  
+
+
+  
+
+
+  
+
+
+  
+
diff --git a/tests/genericxml2xmltest.c b/tests/genericxml2xmltest.c
index 61613d21f9..b46b9515c3 100644
--- a/tests/genericxml2xmltest.c
+++ b/tests/genericxml2xmltest.c
@@ -244,6 +244,7 @@ mymain(void)

 DO_TEST_BACKUP("empty");
 DO_TEST_BACKUP("backup-pull");
+DO_TEST_BACKUP("backup-pull-unix");
 DO_TEST_BACKUP("backup-pull-seclabel");
 DO_TEST_BACKUP("backup-pull-encrypted");
 DO_TEST_BACKUP("backup-push");
-- 
2.48.1


[PATCH v2 10/22] src: add new APIs for marking a domain to autostart once

2025-03-12 Thread Daniel P . Berrangé
When a domain is marked for autostart, it will be started on every
subsequent host OS boot. There may be times when it is desirable to
mark a domain to be autostarted, on the next boot only.

Thus we add virDomainSetAutostartOnce / virDomainGetAutostartOnce.

An alternative would have been to overload the existing
virDomainSetAutostart method, to accept values '1' or '2' for
the autostart flag. This was not done because it is expected
that language bindings will have mapped the current autostart
flag to a boolean, and thus turning it into an enum would create
a compatibility problem.

A further alternative would have been to create a new method
virDomainSetAutostartFlags, with a VIR_DOMAIN_AUTOSTART_ONCE
flag defined. This was not done because it is felt desirable
to clearly separate the two flags. Setting the "once" flag
should not interfere with existing autostart setting, whether
it is enabled or disabled currently.

The 'virsh autostart' command, however, is still overloaded
by just adding a --once flag, while current state is added
to 'virsh dominfo'.

No ability to filter by 'autostart once' status is added to
the domain list APIs. The most common use of autostart once
will be to automatically set it on host shutdown, and it be
cleared on host startup. Thus there would rarely be scenarios
in which a running app will need to filter on this new flag.

Signed-off-by: Daniel P. Berrangé 
---
 include/libvirt/libvirt-domain.h |  4 ++
 src/driver-hypervisor.h  | 10 
 src/libvirt-domain.c | 87 
 src/libvirt_public.syms  |  6 +++
 src/remote/remote_driver.c   |  2 +
 src/remote/remote_protocol.x | 30 ++-
 src/remote_protocol-structs  | 12 +
 src/rpc/gendispatch.pl   |  4 +-
 tools/virsh-domain-monitor.c |  7 +++
 tools/virsh-domain.c | 39 ++
 10 files changed, 189 insertions(+), 12 deletions(-)

diff --git a/include/libvirt/libvirt-domain.h b/include/libvirt/libvirt-domain.h
index 8c86bd8f94..c4e768f26f 100644
--- a/include/libvirt/libvirt-domain.h
+++ b/include/libvirt/libvirt-domain.h
@@ -2429,6 +2429,10 @@ int virDomainGetAutostart   (virDomainPtr domain,
  int *autostart);
 int virDomainSetAutostart   (virDomainPtr domain,
  int autostart);
+int virDomainGetAutostartOnce(virDomainPtr domain,
+  int *autostart);
+int virDomainSetAutostartOnce(virDomainPtr domain,
+  int autostart);
 
 /**
  * virVcpuState:
diff --git a/src/driver-hypervisor.h b/src/driver-hypervisor.h
index 4ce8da078d..c05c71b9fe 100644
--- a/src/driver-hypervisor.h
+++ b/src/driver-hypervisor.h
@@ -478,6 +478,14 @@ typedef int
 (*virDrvDomainSetAutostart)(virDomainPtr domain,
 int autostart);
 
+typedef int
+(*virDrvDomainGetAutostartOnce)(virDomainPtr domain,
+int *autostart);
+
+typedef int
+(*virDrvDomainSetAutostartOnce)(virDomainPtr domain,
+int autostart);
+
 typedef char *
 (*virDrvDomainGetSchedulerType)(virDomainPtr domain,
 int *nparams);
@@ -1564,6 +1572,8 @@ struct _virHypervisorDriver {
 virDrvDomainDetachDeviceAlias domainDetachDeviceAlias;
 virDrvDomainGetAutostart domainGetAutostart;
 virDrvDomainSetAutostart domainSetAutostart;
+virDrvDomainGetAutostartOnce domainGetAutostartOnce;
+virDrvDomainSetAutostartOnce domainSetAutostartOnce;
 virDrvDomainGetSchedulerType domainGetSchedulerType;
 virDrvDomainGetSchedulerParameters domainGetSchedulerParameters;
 virDrvDomainGetSchedulerParametersFlags domainGetSchedulerParametersFlags;
diff --git a/src/libvirt-domain.c b/src/libvirt-domain.c
index b23a3489c5..4e78c687d5 100644
--- a/src/libvirt-domain.c
+++ b/src/libvirt-domain.c
@@ -7344,6 +7344,93 @@ virDomainSetAutostart(virDomainPtr domain,
 }
 
 
+/**
+ * virDomainGetAutostartOnce:
+ * @domain: a domain object
+ * @autostart: the value returned
+ *
+ * Provides a boolean value indicating whether the domain
+ * is configured to be automatically started the next time
+ * the host machine boots only.
+ *
+ * Returns -1 in case of error, 0 in case of success
+ *
+ * Since: 11.2.0
+ */
+int
+virDomainGetAutostartOnce(virDomainPtr domain,
+  int *autostart)
+{
+virConnectPtr conn;
+
+VIR_DOMAIN_DEBUG(domain, "autostart=%p", autostart);
+
+virResetLastError();
+
+virCheckDomainReturn(domain, -1);
+virCheckNonNullArgGoto(autostart, error);
+
+conn = domain->conn;
+
+if (conn->driver->domainGetAutostartOnce) {
+int ret;
+ret = conn->driver->domainGetAutostartOnce(domain, autostart);
+if (ret < 0)
+goto err

[PATCH v2 07/22] qemu: configurable delay for shutdown before poweroff

2025-03-12 Thread Daniel P . Berrangé
Allow users to control how many seconds libvirt waits for QEMU
shutdown before force powering off a guest.

Reviewed-by: Peter Krempa 
Signed-off-by: Daniel P. Berrangé 
---
 src/qemu/libvirtd_qemu.aug | 1 +
 src/qemu/qemu.conf.in  | 6 ++
 src/qemu/qemu_conf.c   | 4 
 src/qemu/qemu_conf.h   | 1 +
 src/qemu/qemu_driver.c | 1 +
 src/qemu/test_libvirtd_qemu.aug.in | 1 +
 6 files changed, 14 insertions(+)

diff --git a/src/qemu/libvirtd_qemu.aug b/src/qemu/libvirtd_qemu.aug
index 605604a01a..8cb1b144b9 100644
--- a/src/qemu/libvirtd_qemu.aug
+++ b/src/qemu/libvirtd_qemu.aug
@@ -101,6 +101,7 @@ module Libvirtd_qemu =
  | str_entry "auto_shutdown_try_save"
  | str_entry "auto_shutdown_try_shutdown"
  | str_entry "auto_shutdown_poweroff"
+ | int_entry "auto_shutdown_wait"
 
let process_entry = str_entry "hugetlbfs_mount"
  | str_entry "bridge_helper"
diff --git a/src/qemu/qemu.conf.in b/src/qemu/qemu.conf.in
index c9bebf1e7d..5a9bfd90e8 100644
--- a/src/qemu/qemu.conf.in
+++ b/src/qemu/qemu.conf.in
@@ -683,6 +683,12 @@
 # set to 'none' for system daemons to avoid dueling actions
 #auto_shutdown_poweroff = "all"
 
+# How may seconds to wait for running VMs to gracefully shutdown
+# when 'auto_shutdown_try_shutdown' is enabled. If set to 0
+# then an arbitrary built-in default value will be used (which
+# is currently 30 secs)
+#auto_shutdown_wait = 30
+
 # If provided by the host and a hugetlbfs mount point is configured,
 # a guest may request huge page backing.  When this mount point is
 # unspecified here, determination of a host mount point in /proc/mounts
diff --git a/src/qemu/qemu_conf.c b/src/qemu/qemu_conf.c
index f069ed00b1..d1ae5fa308 100644
--- a/src/qemu/qemu_conf.c
+++ b/src/qemu/qemu_conf.c
@@ -729,6 +729,10 @@ virQEMUDriverConfigLoadSaveEntry(virQEMUDriverConfig *cfg,
 cfg->autoShutdownPoweroff = autoShutdownScopeVal;
 }
 
+if (virConfGetValueUInt(conf, "auto_shutdown_wait",
+&cfg->autoShutdownWait) < 0)
+return -1;
+
 return 0;
 }
 
diff --git a/src/qemu/qemu_conf.h b/src/qemu/qemu_conf.h
index 3450f277f0..f6fcfe444d 100644
--- a/src/qemu/qemu_conf.h
+++ b/src/qemu/qemu_conf.h
@@ -205,6 +205,7 @@ struct _virQEMUDriverConfig {
 virDomainDriverAutoShutdownScope autoShutdownTrySave;
 virDomainDriverAutoShutdownScope autoShutdownTryShutdown;
 virDomainDriverAutoShutdownScope autoShutdownPoweroff;
+unsigned int autoShutdownWait;
 
 char *lockManagerName;
 
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index cb7b03e391..08e2248852 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -950,6 +950,7 @@ qemuStateStop(void)
 .trySave = cfg->autoShutdownTrySave,
 .tryShutdown = cfg->autoShutdownTryShutdown,
 .poweroff = cfg->autoShutdownPoweroff,
+.waitShutdownSecs = cfg->autoShutdownWait,
 };
 
 virDomainDriverAutoShutdown(&ascfg);
diff --git a/src/qemu/test_libvirtd_qemu.aug.in b/src/qemu/test_libvirtd_qemu.aug.in
index 3bc8104d7a..4c6de31199 100644
--- a/src/qemu/test_libvirtd_qemu.aug.in
+++ b/src/qemu/test_libvirtd_qemu.aug.in
@@ -79,6 +79,7 @@ module Test_libvirtd_qemu =
 { "auto_shutdown_try_save" = "persistent" }
 { "auto_shutdown_try_shutdown" = "all" }
 { "auto_shutdown_poweroff" = "all" }
+{ "auto_shutdown_wait" = "30" }
 { "hugetlbfs_mount" = "/dev/hugepages" }
 { "bridge_helper" = "qemu-bridge-helper" }
 { "set_process_name" = "1" }
-- 
2.48.1


[PATCH v2 01/22] hypervisor: move support for auto-shutdown out of QEMU driver

2025-03-12 Thread Daniel P . Berrangé
This is a move of the code that currently exists in the QEMU
driver, into the common layer that can be used by multiple
drivers.

The code currently supports performing managed save of all
running guests, ignoring any failures.

Reviewed-by: Peter Krempa 
Signed-off-by: Daniel P. Berrangé 
---
 src/hypervisor/domain_driver.c | 48 ++
 src/hypervisor/domain_driver.h |  6 +
 src/libvirt_private.syms   |  1 +
 src/qemu/qemu_driver.c | 47 -
 4 files changed, 60 insertions(+), 42 deletions(-)

diff --git a/src/hypervisor/domain_driver.c b/src/hypervisor/domain_driver.c
index b7499a376f..0ca482275f 100644
--- a/src/hypervisor/domain_driver.c
+++ b/src/hypervisor/domain_driver.c
@@ -712,3 +712,51 @@ virDomainDriverAutoStart(virDomainObjList *domains,
 
 virDomainObjListForEach(domains, false, virDomainDriverAutoStartOne, &state);
 }
+
+
+void
+virDomainDriverAutoShutdown(virDomainDriverAutoShutdownConfig *cfg)
+{
+g_autoptr(virConnect) conn = NULL;
+int numDomains = 0;
+size_t i;
+int state;
+virDomainPtr *domains = NULL;
+g_autofree unsigned int *flags = NULL;
+
+if (!(conn = virConnectOpen(cfg->uri)))
+goto cleanup;
+
+if ((numDomains = virConnectListAllDomains(conn,
+   &domains,
+   VIR_CONNECT_LIST_DOMAINS_ACTIVE)) < 0)
+goto cleanup;
+
+flags = g_new0(unsigned int, numDomains);
+
+/* First we pause all VMs to make them stop dirtying
+   pages, etc. We remember if any VMs were paused so
+   we can restore that on resume. */
+for (i = 0; i < numDomains; i++) {
+flags[i] = VIR_DOMAIN_SAVE_RUNNING;
+if (virDomainGetState(domains[i], &state, NULL, 0) == 0) {
+if (state == VIR_DOMAIN_PAUSED)
+flags[i] = VIR_DOMAIN_SAVE_PAUSED;
+}
+virDomainSuspend(domains[i]);
+}
+
+/* Then we save the VMs to disk */
+for (i = 0; i < numDomains; i++)
+if (virDomainManagedSave(domains[i], flags[i]) < 0)
+VIR_WARN("Unable to perform managed save of '%s': %s",
+ virDomainGetName(domains[i]),
+ virGetLastErrorMessage());
+
+ cleanup:
+if (domains) {
+for (i = 0; i < numDomains; i++)
+virObjectUnref(domains[i]);
+VIR_FREE(domains);
+}
+}
diff --git a/src/hypervisor/domain_driver.h b/src/hypervisor/domain_driver.h
index f81d436c2c..f36db5c6f0 100644
--- a/src/hypervisor/domain_driver.h
+++ b/src/hypervisor/domain_driver.h
@@ -90,3 +90,9 @@ typedef struct _virDomainDriverAutoStartConfig {
 
 void virDomainDriverAutoStart(virDomainObjList *domains,
   virDomainDriverAutoStartConfig *cfg);
+
+typedef struct _virDomainDriverAutoShutdownConfig {
+const char *uri;
+} virDomainDriverAutoShutdownConfig;
+
+void virDomainDriverAutoShutdown(virDomainDriverAutoShutdownConfig *cfg);
diff --git a/src/libvirt_private.syms b/src/libvirt_private.syms
index e63939e2b5..e6fec0a151 100644
--- a/src/libvirt_private.syms
+++ b/src/libvirt_private.syms
@@ -1649,6 +1649,7 @@ virDomainCgroupSetupVcpuBW;
 
 # hypervisor/domain_driver.h
 virDomainDriverAddIOThreadCheck;
+virDomainDriverAutoShutdown;
 virDomainDriverAutoStart;
 virDomainDriverDelIOThreadCheck;
 virDomainDriverGenerateMachineName;
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 2364b4d312..14b128d268 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -944,51 +944,14 @@ qemuStateReload(void)
 static int
 qemuStateStop(void)
 {
-int ret = -1;
-g_autoptr(virConnect) conn = NULL;
-int numDomains = 0;
-size_t i;
-int state;
-virDomainPtr *domains = NULL;
-g_autofree unsigned int *flags = NULL;
 g_autoptr(virQEMUDriverConfig) cfg = virQEMUDriverGetConfig(qemu_driver);
+virDomainDriverAutoShutdownConfig ascfg = {
+.uri = cfg->uri,
+};
 
-if (!(conn = virConnectOpen(cfg->uri)))
-goto cleanup;
-
-if ((numDomains = virConnectListAllDomains(conn,
-   &domains,
-   VIR_CONNECT_LIST_DOMAINS_ACTIVE)) < 0)
-goto cleanup;
-
-flags = g_new0(unsigned int, numDomains);
-
-/* First we pause all VMs to make them stop dirtying
-   pages, etc. We remember if any VMs were paused so
-   we can restore that on resume. */
-for (i = 0; i < numDomains; i++) {
-flags[i] = VIR_DOMAIN_SAVE_RUNNING;
-if (virDomainGetState(domains[i], &state, NULL, 0) == 0) {
-if (state == VIR_DOMAIN_PAUSED)
-flags[i] = VIR_DOMAIN_SAVE_PAUSED;
-}
-virDomainSuspend(domains[i]);
-}
-
-ret = 0;
-/* Then we save the VMs to disk */
-for (i = 0; i < numDomains; i++)
-if (virDomainManagedSave(domains[i], flags[i]) < 0)
- 

[PATCH v2 08/22] hypervisor: support bypassing cache for managed save

2025-03-12 Thread Daniel P . Berrangé
Bypassing cache can make save performance more predictable and avoids
trashing the OS cache with data that will not be read again.

Reviewed-by: Peter Krempa 
Signed-off-by: Daniel P. Berrangé 
---
 src/hypervisor/domain_driver.c | 7 +--
 src/hypervisor/domain_driver.h | 1 +
 2 files changed, 6 insertions(+), 2 deletions(-)

diff --git a/src/hypervisor/domain_driver.c b/src/hypervisor/domain_driver.c
index c510e1d2ae..d4cf09174b 100644
--- a/src/hypervisor/domain_driver.c
+++ b/src/hypervisor/domain_driver.c
@@ -730,12 +730,12 @@ virDomainDriverAutoShutdown(virDomainDriverAutoShutdownConfig *cfg)
 virDomainPtr *domains = NULL;
 g_autofree bool *transient = NULL;
 
-VIR_DEBUG("Run autoshutdown uri=%s trySave=%s tryShutdown=%s poweroff=%s waitShutdownSecs=%u",
+VIR_DEBUG("Run autoshutdown uri=%s trySave=%s tryShutdown=%s poweroff=%s waitShutdownSecs=%u saveBypassCache=%d",
   cfg->uri,
   virDomainDriverAutoShutdownScopeTypeToString(cfg->trySave),
   virDomainDriverAutoShutdownScopeTypeToString(cfg->tryShutdown),
   virDomainDriverAutoShutdownScopeTypeToString(cfg->poweroff),
-  cfg->waitShutdownSecs);
+  cfg->waitShutdownSecs, cfg->saveBypassCache);
 
 /*
  * Ideally guests will shutdown in a few seconds, but it would
@@ -801,6 +801,9 @@ virDomainDriverAutoShutdown(virDomainDriverAutoShutdownConfig *cfg)
 if (state == VIR_DOMAIN_PAUSED)
 flags[i] = VIR_DOMAIN_SAVE_PAUSED;
 }
+if (cfg->saveBypassCache)
+flags[i] |= VIR_DOMAIN_SAVE_BYPASS_CACHE;
+
 if (flags[i] & VIR_DOMAIN_SAVE_RUNNING)
 virDomainSuspend(domains[i]);
 }
diff --git a/src/hypervisor/domain_driver.h b/src/hypervisor/domain_driver.h
index 6e535ca444..fae316ee2d 100644
--- a/src/hypervisor/domain_driver.h
+++ b/src/hypervisor/domain_driver.h
@@ -112,6 +112,7 @@ typedef struct _virDomainDriverAutoShutdownConfig {
 * before moving onto next action.
 * If 0 a default is used (currently 30 secs)
 */
+bool saveBypassCache;
 } virDomainDriverAutoShutdownConfig;
 
 void virDomainDriverAutoShutdown(virDomainDriverAutoShutdownConfig *cfg);
-- 
2.48.1
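For readers following the flag handling in the hunk above, here is a rough model of how the autoshutdown path picks the managed-save flags. This is an illustrative sketch, not libvirt code: the flag values below are made up for the example; the real `VIR_DOMAIN_SAVE_*` constants are defined in `libvirt-domain.h`.

```python
# Sketch of the flag selection in virDomainDriverAutoShutdown:
# a paused guest is saved as paused, everything else resumes on
# restore, and cfg->saveBypassCache ORs in the cache-bypass flag.
SAVE_RUNNING = 1        # illustrative value, not the real constant
SAVE_PAUSED = 2         # illustrative value, not the real constant
SAVE_BYPASS_CACHE = 4   # illustrative value, not the real constant

def managed_save_flags(state, save_bypass_cache):
    flags = SAVE_PAUSED if state == "paused" else SAVE_RUNNING
    if save_bypass_cache:
        flags |= SAVE_BYPASS_CACHE
    return flags
```

Note that only guests saved with the "running" flag are suspended first, which is why the bypass-cache bit is ORed in before the `flags[i] & VIR_DOMAIN_SAVE_RUNNING` check in the real code.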


[PATCH v2 16/22] rpc: rename virNetDaemonSetShutdownCallbacks

2025-03-12 Thread Daniel P . Berrangé
The next patch will be introducing a new callback, so rename the method
to virNetDaemonSetLifecycleCallbacks to reflect the more general usage.

Signed-off-by: Daniel P. Berrangé 
---
 src/libvirt_remote.syms|  2 +-
 src/remote/remote_daemon.c |  6 +++---
 src/rpc/virnetdaemon.c | 10 +-
 src/rpc/virnetdaemon.h |  8 
 4 files changed, 13 insertions(+), 13 deletions(-)

diff --git a/src/libvirt_remote.syms b/src/libvirt_remote.syms
index f0f90815cf..c4053fdcb2 100644
--- a/src/libvirt_remote.syms
+++ b/src/libvirt_remote.syms
@@ -94,7 +94,7 @@ virNetDaemonQuit;
 virNetDaemonQuitExecRestart;
 virNetDaemonRemoveShutdownInhibition;
 virNetDaemonRun;
-virNetDaemonSetShutdownCallbacks;
+virNetDaemonSetLifecycleCallbacks;
 virNetDaemonSetStateStopWorkerThread;
 virNetDaemonUpdateServices;
 
diff --git a/src/remote/remote_daemon.c b/src/remote/remote_daemon.c
index 2f6cef1828..9c0025a3bc 100644
--- a/src/remote/remote_daemon.c
+++ b/src/remote/remote_daemon.c
@@ -624,9 +624,9 @@ static void daemonRunStateInit(void *opaque)
 
 g_atomic_int_set(&driversInitialized, 1);
 
-virNetDaemonSetShutdownCallbacks(dmn,
- virStateShutdownPrepare,
- virStateShutdownWait);
+virNetDaemonSetLifecycleCallbacks(dmn,
+  virStateShutdownPrepare,
+  virStateShutdownWait);
 
 /* Signal for VM shutdown when desktop session is terminated, in
  * unprivileged daemons */
diff --git a/src/rpc/virnetdaemon.c b/src/rpc/virnetdaemon.c
index e6bdfe0f88..6d20edf28b 100644
--- a/src/rpc/virnetdaemon.c
+++ b/src/rpc/virnetdaemon.c
@@ -65,8 +65,8 @@ struct _virNetDaemon {
 GHashTable *servers;
 virJSONValue *srvObject;
 
-virNetDaemonShutdownCallback shutdownPrepareCb;
-virNetDaemonShutdownCallback shutdownWaitCb;
+virNetDaemonLifecycleCallback shutdownPrepareCb;
+virNetDaemonLifecycleCallback shutdownWaitCb;
 virThread *stateStopThread;
 int finishTimer;
 bool quit;
@@ -873,9 +873,9 @@ virNetDaemonHasClients(virNetDaemon *dmn)
 }
 
 void
-virNetDaemonSetShutdownCallbacks(virNetDaemon *dmn,
- virNetDaemonShutdownCallback prepareCb,
- virNetDaemonShutdownCallback waitCb)
+virNetDaemonSetLifecycleCallbacks(virNetDaemon *dmn,
+  virNetDaemonLifecycleCallback prepareCb,
+  virNetDaemonLifecycleCallback waitCb)
 {
 VIR_LOCK_GUARD lock = virObjectLockGuard(dmn);
 
diff --git a/src/rpc/virnetdaemon.h b/src/rpc/virnetdaemon.h
index 31a355adb4..0f4ebb6df7 100644
--- a/src/rpc/virnetdaemon.h
+++ b/src/rpc/virnetdaemon.h
@@ -82,8 +82,8 @@ ssize_t virNetDaemonGetServers(virNetDaemon *dmn, virNetServer ***servers);
 bool virNetDaemonHasServer(virNetDaemon *dmn,
const char *serverName);
 
-typedef int (*virNetDaemonShutdownCallback)(void);
+typedef int (*virNetDaemonLifecycleCallback)(void);
 
-void virNetDaemonSetShutdownCallbacks(virNetDaemon *dmn,
-  virNetDaemonShutdownCallback prepareCb,
-  virNetDaemonShutdownCallback waitCb);
+void virNetDaemonSetLifecycleCallbacks(virNetDaemon *dmn,
+   virNetDaemonLifecycleCallback prepareCb,
+   virNetDaemonLifecycleCallback waitCb);
-- 
2.48.1


[PATCH v2 14/22] qemu: add config to control if auto-shutdown VMs are restored

2025-03-12 Thread Daniel P . Berrangé
If shutting down running VMs at host shutdown, it can be useful to
automatically start them again on next boot. This adds a config
parameter 'auto_shutdown_restore', which defaults to enabled, which
leverages the autostart once feature.

Reviewed-by: Peter Krempa 
Signed-off-by: Daniel P. Berrangé 
---
 src/qemu/libvirtd_qemu.aug | 1 +
 src/qemu/qemu.conf.in  | 4 
 src/qemu/qemu_conf.c   | 3 +++
 src/qemu/qemu_conf.h   | 1 +
 src/qemu/qemu_driver.c | 1 +
 src/qemu/test_libvirtd_qemu.aug.in | 1 +
 6 files changed, 11 insertions(+)

diff --git a/src/qemu/libvirtd_qemu.aug b/src/qemu/libvirtd_qemu.aug
index 9fa6398d8d..ee39da73b9 100644
--- a/src/qemu/libvirtd_qemu.aug
+++ b/src/qemu/libvirtd_qemu.aug
@@ -102,6 +102,7 @@ module Libvirtd_qemu =
  | str_entry "auto_shutdown_try_shutdown"
  | str_entry "auto_shutdown_poweroff"
  | int_entry "auto_shutdown_wait"
+ | bool_entry "auto_shutdown_restore"
  | bool_entry "auto_save_bypass_cache"
 
let process_entry = str_entry "hugetlbfs_mount"
diff --git a/src/qemu/qemu.conf.in b/src/qemu/qemu.conf.in
index 5643e849ad..794b8cf31f 100644
--- a/src/qemu/qemu.conf.in
+++ b/src/qemu/qemu.conf.in
@@ -689,6 +689,10 @@
 # is currently 30 secs)
 #auto_shutdown_wait = 30
 
+# Whether VMs that are automatically powered off or saved during
+# host shutdown, should be set to restore on next boot
+#auto_shutdown_restore = 1
+
 # When a domain is configured to be auto-saved on shutdown, enabling
 # this flag has the same effect as using the VIR_DOMAIN_SAVE_BYPASS_CACHE
 # flag with the virDomainManagedSave API.  That is, the system will
diff --git a/src/qemu/qemu_conf.c b/src/qemu/qemu_conf.c
index 6161a3c8d6..fa6a9941b4 100644
--- a/src/qemu/qemu_conf.c
+++ b/src/qemu/qemu_conf.c
@@ -318,6 +318,7 @@ virQEMUDriverConfig *virQEMUDriverConfigNew(bool privileged,
 cfg->autoShutdownTryShutdown = VIR_DOMAIN_DRIVER_AUTO_SHUTDOWN_SCOPE_ALL;
 cfg->autoShutdownPoweroff = VIR_DOMAIN_DRIVER_AUTO_SHUTDOWN_SCOPE_ALL;
 }
+cfg->autoShutdownRestore = true;
 
 return g_steal_pointer(&cfg);
 }
@@ -732,6 +733,8 @@ virQEMUDriverConfigLoadSaveEntry(virQEMUDriverConfig *cfg,
 if (virConfGetValueUInt(conf, "auto_shutdown_wait",
 &cfg->autoShutdownWait) < 0)
 return -1;
+if (virConfGetValueBool(conf, "auto_shutdown_restore", &cfg->autoShutdownRestore) < 0)
+return -1;
 if (virConfGetValueBool(conf, "auto_save_bypass_cache",
 &cfg->autoSaveBypassCache) < 0)
 return -1;
diff --git a/src/qemu/qemu_conf.h b/src/qemu/qemu_conf.h
index 06c917ba3e..d064c7bb9c 100644
--- a/src/qemu/qemu_conf.h
+++ b/src/qemu/qemu_conf.h
@@ -206,6 +206,7 @@ struct _virQEMUDriverConfig {
 virDomainDriverAutoShutdownScope autoShutdownTryShutdown;
 virDomainDriverAutoShutdownScope autoShutdownPoweroff;
 unsigned int autoShutdownWait;
+bool autoShutdownRestore;
 bool autoSaveBypassCache;
 
 char *lockManagerName;
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 3b74a98685..ccac2a1259 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -952,6 +952,7 @@ qemuStateStop(void)
 .poweroff = cfg->autoShutdownPoweroff,
 .waitShutdownSecs = cfg->autoShutdownWait,
 .saveBypassCache = cfg->autoSaveBypassCache,
+.autoRestore = cfg->autoShutdownRestore,
 };
 
 virDomainDriverAutoShutdown(&ascfg);
diff --git a/src/qemu/test_libvirtd_qemu.aug.in 
b/src/qemu/test_libvirtd_qemu.aug.in
index 65d0c20fe1..922cb35db7 100644
--- a/src/qemu/test_libvirtd_qemu.aug.in
+++ b/src/qemu/test_libvirtd_qemu.aug.in
@@ -80,6 +80,7 @@ module Test_libvirtd_qemu =
 { "auto_shutdown_try_shutdown" = "all" }
 { "auto_shutdown_poweroff" = "all" }
 { "auto_shutdown_wait" = "30" }
+{ "auto_shutdown_restore" = "1" }
 { "auto_save_bypass_cache" = "0" }
 { "hugetlbfs_mount" = "/dev/hugepages" }
 { "bridge_helper" = "qemu-bridge-helper" }
-- 
2.48.1


[PATCH v2 13/22] qemu: wire up support for once only autostart

2025-03-12 Thread Daniel P . Berrangé
Reviewed-by: Peter Krempa 
Signed-off-by: Daniel P. Berrangé 
---
 src/qemu/qemu_driver.c | 97 ++
 1 file changed, 97 insertions(+)

diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 3932640b79..3b74a98685 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -7775,6 +7775,101 @@ static int qemuDomainSetAutostart(virDomainPtr dom,
 }
 
 
+static int
+qemuDomainGetAutostartOnce(virDomainPtr dom,
+   int *autostart)
+{
+virDomainObj *vm;
+int ret = -1;
+
+if (!(vm = qemuDomainObjFromDomain(dom)))
+goto cleanup;
+
+if (virDomainGetAutostartOnceEnsureACL(dom->conn, vm->def) < 0)
+goto cleanup;
+
+*autostart = vm->autostartOnce;
+ret = 0;
+
+ cleanup:
+virDomainObjEndAPI(&vm);
+return ret;
+}
+
+static int
+qemuDomainSetAutostartOnce(virDomainPtr dom,
+   int autostart)
+{
+virQEMUDriver *driver = dom->conn->privateData;
+virDomainObj *vm;
+g_autofree char *configFile = NULL;
+g_autofree char *autostartLink = NULL;
+g_autofree char *autostartOnceLink = NULL;
+int ret = -1;
+g_autoptr(virQEMUDriverConfig) cfg = NULL;
+
+if (!(vm = qemuDomainObjFromDomain(dom)))
+return -1;
+
+cfg = virQEMUDriverGetConfig(driver);
+
+if (virDomainSetAutostartOnceEnsureACL(dom->conn, vm->def) < 0)
+goto cleanup;
+
+if (!vm->persistent) {
+virReportError(VIR_ERR_OPERATION_INVALID,
+   "%s", _("cannot set autostart for transient domain"));
+goto cleanup;
+}
+
+autostart = (autostart != 0);
+
+if (vm->autostartOnce != autostart) {
+if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
+goto cleanup;
+
+configFile = virDomainConfigFile(cfg->configDir, vm->def->name);
+autostartLink = virDomainConfigFile(cfg->autostartDir, vm->def->name);
+autostartOnceLink = g_strdup_printf("%s.once", autostartLink);
+
+if (autostart) {
+if (g_mkdir_with_parents(cfg->autostartDir, 0777) < 0) {
+virReportSystemError(errno,
+ _("cannot create autostart directory 
%1$s"),
+ cfg->autostartDir);
+goto endjob;
+}
+
+if (symlink(configFile, autostartOnceLink) < 0) {
+virReportSystemError(errno,
+ _("Failed to create symlink '%1$s' to 
'%2$s'"),
+ autostartOnceLink, configFile);
+goto endjob;
+}
+} else {
+if (unlink(autostartOnceLink) < 0 &&
+errno != ENOENT &&
+errno != ENOTDIR) {
+virReportSystemError(errno,
+ _("Failed to delete symlink '%1$s'"),
+ autostartOnceLink);
+goto endjob;
+}
+}
+
+vm->autostartOnce = autostart;
+
+ endjob:
+virDomainObjEndJob(vm);
+}
+ret = 0;
+
+ cleanup:
+virDomainObjEndAPI(&vm);
+return ret;
+}
+
+
 static char *qemuDomainGetSchedulerType(virDomainPtr dom,
 int *nparams)
 {
@@ -20186,6 +20281,8 @@ static virHypervisorDriver qemuHypervisorDriver = {
 .domainSetLaunchSecurityState = qemuDomainSetLaunchSecurityState, /* 8.0.0 
*/
 .domainFDAssociate = qemuDomainFDAssociate, /* 9.0.0 */
 .domainGraphicsReload = qemuDomainGraphicsReload, /* 10.2.0 */
+.domainGetAutostartOnce = qemuDomainGetAutostartOnce, /* 11.2.0 */
+.domainSetAutostartOnce = qemuDomainSetAutostartOnce, /* 11.2.0 */
 };
 
 
-- 
2.48.1


Re: [PATCH v8 09/18] qemu: Implement qemu driver for throttle API

2025-03-12 Thread Peter Krempa
On Wed, Feb 19, 2025 at 22:27:13 +0530, Harikumar Rajkumar wrote:
> From: Chun Feng Wu 
> 
> ThrottleGroup lifecycle implementation, note, in QOM, throttlegroup name is 
> prefixed with
> "throttle-" to clearly separate throttle group objects into their own 
> namespace.
> * "qemuDomainSetThrottleGroup", this method is to add("object-add") or 
> update("qom-set")
>   throttlegroup in QOM and update corresponding objects in DOM
> * "qemuDomainGetThrottleGroup", this method queries throttlegroup info by 
> groupname
> * "qemuDomainDelThrottleGroup", this method checks if group is referenced by 
> any throttle
>   in disks and delete it if it's not used anymore
> * Check flag "QEMU_CAPS_OBJECT_JSON" during qemuDomainSetThrottleGroup when 
> vm is active,
>   throttle group feature requires such flag
> * "objectAddNoWrap"("props") check is done by reusing qemuMonitorAddObject in
>   qemuDomainSetThrottleGroup
> 
> Signed-off-by: Chun Feng Wu 
> 
> * Apply suggested coding style changes.
> * cleanup qemu Get ThrottleGroup.
> * Update the version to 11.1.0.
> 
> Signed-off-by: Harikumar Rajkumar 
> ---
>  src/qemu/qemu_driver.c | 243 +
>  1 file changed, 243 insertions(+)

[...]

> +priv = vm->privateData;
> +qemuCaps = priv->qemuCaps;
> +if (virDomainObjIsActive(vm)) {
> +if (!virQEMUCapsGet(qemuCaps, QEMU_CAPS_OBJECT_JSON)) {
> +virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s",
> +   _("This QEMU doesn't support throttle group 
> creation"));
> +return -1;
> +}

While looking at this I noticed that QEMU_CAPS_OBJECT_JSON is no longer
needed as it's always present with qemu versions we currently support:

https://lists.libvirt.org/archives/list/devel@lists.libvirt.org/message/7ROU6X7QLFZ2UOHY23ZUZHDVKHZST6IV/

Once the patches are reviewed I'll rebase this on top and delete this.


[PATCH v2 19/22] rpc: fix shutdown sequence when preserving state

2025-03-12 Thread Daniel P . Berrangé
Preserving state (ie running VMs) requires a fully functional
daemon and hypervisor driver. If any part has started shutting down
then saving state may fail, or worse, hang.

The current shutdown sequence does not guarantee safe ordering, as
we synchronize with the state saving thread only after the hypervisor
driver has had its 'shutdownPrepare' callback invoked. In the case of
QEMU this means that worker threads processing monitor events may well
have been stopped.

This implements a full state machine that has a well defined ordering
that an earlier commit documented as the desired semantics.

With this change, nothing will start shutting down if the state saving
thread is still running.

Signed-off-by: Daniel P. Berrangé 
---
 src/rpc/virnetdaemon.c | 107 ++---
 1 file changed, 80 insertions(+), 27 deletions(-)

diff --git a/src/rpc/virnetdaemon.c b/src/rpc/virnetdaemon.c
index 469a1d3ae2..53dee60703 100644
--- a/src/rpc/virnetdaemon.c
+++ b/src/rpc/virnetdaemon.c
@@ -39,6 +39,21 @@
 
 VIR_LOG_INIT("rpc.netdaemon");
 
+/* The daemon shutdown process will step through
+ * each state listed below in the order they are
+ * declared. The 'STOPPING' state may be skipped
+ * over if the stateStopThread is not required
+ * for the particular shutdown scenari
+ */
+typedef enum {
+VIR_NET_DAEMON_QUIT_NONE,  /* Running normally */
+VIR_NET_DAEMON_QUIT_REQUESTED, /* Daemon shutdown requested */
+VIR_NET_DAEMON_QUIT_STOPPING, /* stateStopThread is running, so the shutdown 
request must be delayed */
+VIR_NET_DAEMON_QUIT_READY, /* Ready to initiate shutdown request by 
calling shutdownPrepareCb */
+VIR_NET_DAEMON_QUIT_WAITING, /* shutdownWaitCb is running, waiting for it 
to finish */
+VIR_NET_DAEMON_QUIT_COMPLETED, /* shutdownWaitCb is finished, event loop 
will now terminate */
+} virNetDaemonQuitPhase;
+
 #ifndef WIN32
 typedef struct _virNetDaemonSignal virNetDaemonSignal;
 struct _virNetDaemonSignal {
@@ -69,9 +84,8 @@ struct _virNetDaemon {
 virNetDaemonLifecycleCallback shutdownPrepareCb;
 virNetDaemonLifecycleCallback shutdownWaitCb;
 virThread *stateStopThread;
-int finishTimer;
-bool quit;
-bool finished;
+int quitTimer;
+virNetDaemonQuitPhase quit;
 bool graceful;
 bool execRestart;
 bool running; /* the daemon has reached the running phase */
@@ -414,7 +428,10 @@ virNetDaemonAutoShutdownTimer(int timerid G_GNUC_UNUSED,
 
 if (!dmn->autoShutdownInhibitions) {
 VIR_DEBUG("Automatic shutdown triggered");
-dmn->quit = true;
+if (dmn->quit == VIR_NET_DAEMON_QUIT_NONE) {
+VIR_DEBUG("Requesting daemon shutdown");
+dmn->quit = VIR_NET_DAEMON_QUIT_REQUESTED;
+}
 }
 }
 
@@ -709,27 +726,26 @@ daemonShutdownWait(void *opaque)
 bool graceful = false;
 
 virHashForEach(dmn->servers, daemonServerShutdownWait, NULL);
-if (!dmn->shutdownWaitCb || dmn->shutdownWaitCb() >= 0) {
-if (dmn->stateStopThread)
-virThreadJoin(dmn->stateStopThread);
-
+if (!dmn->shutdownWaitCb || dmn->shutdownWaitCb() >= 0)
 graceful = true;
-}
 
 VIR_WITH_OBJECT_LOCK_GUARD(dmn) {
 dmn->graceful = graceful;
-virEventUpdateTimeout(dmn->finishTimer, 0);
+dmn->quit = VIR_NET_DAEMON_QUIT_COMPLETED;
+virEventUpdateTimeout(dmn->quitTimer, 0);
+VIR_DEBUG("Shutdown wait completed graceful=%d", graceful);
 }
 }
 
 static void
-virNetDaemonFinishTimer(int timerid G_GNUC_UNUSED,
-void *opaque)
+virNetDaemonQuitTimer(int timerid G_GNUC_UNUSED,
+  void *opaque)
 {
 virNetDaemon *dmn = opaque;
 VIR_LOCK_GUARD lock = virObjectLockGuard(dmn);
 
-dmn->finished = true;
+dmn->quit = VIR_NET_DAEMON_QUIT_COMPLETED;
+VIR_DEBUG("Shutdown wait timed out");
 }
 
 
@@ -746,9 +762,8 @@ virNetDaemonRun(virNetDaemon *dmn)
 goto cleanup;
 }
 
-dmn->quit = false;
-dmn->finishTimer = -1;
-dmn->finished = false;
+dmn->quit = VIR_NET_DAEMON_QUIT_NONE;
+dmn->quitTimer = -1;
 dmn->graceful = false;
 dmn->running = true;
 
@@ -757,7 +772,7 @@ virNetDaemonRun(virNetDaemon *dmn)
 virSystemdNotifyReady();
 
 VIR_DEBUG("dmn=%p quit=%d", dmn, dmn->quit);
-while (!dmn->finished) {
+while (dmn->quit != VIR_NET_DAEMON_QUIT_COMPLETED) {
 virNetDaemonShutdownTimerUpdate(dmn);
 
 virObjectUnlock(dmn);
@@ -771,17 +786,30 @@ virNetDaemonRun(virNetDaemon *dmn)
 virHashForEach(dmn->servers, daemonServerProcessClients, NULL);
 
 /* don't shutdown services when performing an exec-restart */
-if (dmn->quit && dmn->execRestart)
+if (dmn->quit == VIR_NET_DAEMON_QUIT_REQUESTED && dmn->execRestart)
 goto cleanup;
 
-if (dmn->quit && dmn->finishTimer == -1) {
+if (dmn->quit == VIR_NET_DAEMON_QUIT_REQUESTED) {
+VIR_DEBUG("Process 

[PATCH v2 22/22] hypervisor: emit systemd status & log messages while saving

2025-03-12 Thread Daniel P . Berrangé
Since processing running VMs on OS shutdown can take a while, it is
beneficial to send systemd status messages about the progress.

The systemd status is a point-in-time message, with no ability to
look at the history of received messages. So in the systemd status
we include the progress information. For the same reason there is
no benefit in sending failure messages, as they'll disappear as soon
as a status is sent for the subsequent VM to be processed.

The libvirt log statements can be viewed as a complete log record
so don't need progress info, but do include warnings about failures
(present from earlier commits).

Signed-off-by: Daniel P. Berrangé 
---
 src/hypervisor/domain_driver.c | 22 ++
 1 file changed, 22 insertions(+)

diff --git a/src/hypervisor/domain_driver.c b/src/hypervisor/domain_driver.c
index 1105d36388..c7ffd21f05 100644
--- a/src/hypervisor/domain_driver.c
+++ b/src/hypervisor/domain_driver.c
@@ -30,6 +30,7 @@
 #include "datatypes.h"
 #include "driver.h"
 #include "virlog.h"
+#include "virsystemd.h"
 
 #define VIR_FROM_THIS VIR_FROM_DOMAIN
 
@@ -814,6 +815,10 @@ 
virDomainDriverAutoShutdown(virDomainDriverAutoShutdownConfig *cfg)
 (!transient[i] && cfg->trySave == 
VIR_DOMAIN_DRIVER_AUTO_SHUTDOWN_SCOPE_TRANSIENT))
 continue;
 
+virSystemdNotifyStatus("Suspending '%s' (%zu of %d)",
+   virDomainGetName(domains[i]), i + 1, 
numDomains);
+VIR_INFO("Suspending '%s'", virDomainGetName(domains[i]));
+
 /*
  * Pause all VMs to make them stop dirtying pages,
  * so save is quicker. We remember if any VMs were
@@ -832,6 +837,10 @@ 
virDomainDriverAutoShutdown(virDomainDriverAutoShutdownConfig *cfg)
 }
 
 for (i = 0; i < numDomains; i++) {
+virSystemdNotifyStatus("Saving '%s' (%zu of %d)",
+   virDomainGetName(domains[i]), i + 1, 
numDomains);
+VIR_INFO("Saving '%s'", virDomainGetName(domains[i]));
+
 if (virDomainManagedSave(domains[i], flags[i]) < 0) {
 VIR_WARN("Unable to perform managed save of '%s': %s",
  virDomainGetName(domains[i]),
@@ -855,6 +864,10 @@ 
virDomainDriverAutoShutdown(virDomainDriverAutoShutdownConfig *cfg)
 (!transient[i] && cfg->tryShutdown == 
VIR_DOMAIN_DRIVER_AUTO_SHUTDOWN_SCOPE_TRANSIENT))
 continue;
 
+virSystemdNotifyStatus("Shutting down '%s' (%zu of %d)",
+   virDomainGetName(domains[i]), i + 1, 
numDomains);
+VIR_INFO("Shutting down '%s'", virDomainGetName(domains[i]));
+
 if (virDomainShutdown(domains[i]) < 0) {
 VIR_WARN("Unable to request graceful shutdown of '%s': %s",
  virDomainGetName(domains[i]),
@@ -864,6 +877,9 @@ 
virDomainDriverAutoShutdown(virDomainDriverAutoShutdownConfig *cfg)
 }
 
 timer = g_timer_new();
+virSystemdNotifyStatus("Waiting %d secs for VM shutdown completion",
+   cfg->waitShutdownSecs);
+VIR_INFO("Waiting %d secs for VM shutdown completion", 
cfg->waitShutdownSecs);
 while (1) {
 bool anyRunning = false;
 for (i = 0; i < numDomains; i++) {
@@ -900,6 +916,9 @@ 
virDomainDriverAutoShutdown(virDomainDriverAutoShutdownConfig *cfg)
 (!transient[i] && cfg->poweroff == 
VIR_DOMAIN_DRIVER_AUTO_SHUTDOWN_SCOPE_TRANSIENT))
 continue;
 
+virSystemdNotifyStatus("Destroying '%s' (%zu of %d)",
+   virDomainGetName(domains[i]), i + 1, 
numDomains);
+VIR_INFO("Destroying '%s'", virDomainGetName(domains[i]));
 /*
  * NB might fail if we gave up on waiting for
  * virDomainShutdown, but it then completed anyway,
@@ -912,6 +931,9 @@ 
virDomainDriverAutoShutdown(virDomainDriverAutoShutdownConfig *cfg)
 }
 }
 
+virSystemdNotifyStatus("Processed %d domains", numDomains);
+VIR_INFO("Processed %d domains", numDomains);
+
  cleanup:
 if (domains) {
 /* Anything non-NULL in this list indicates none of
-- 
2.48.1


[PATCH v2 11/22] conf: implement support for autostart once feature

2025-03-12 Thread Daniel P . Berrangé
This is maintained in the same way as the autostart flag, using a
symlink. The difference is that instead of '.xml', the symlink
suffix is '.xml.once'. The link is also deleted immediately after
it has been read.

Signed-off-by: Daniel P. Berrangé 
---
 src/conf/domain_conf.c |  7 ++-
 src/conf/domain_conf.h |  2 ++
 src/conf/virdomainobjlist.c|  8 
 src/hypervisor/domain_driver.c | 14 +++---
 4 files changed, 27 insertions(+), 4 deletions(-)

diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c
index b94cf99236..cf8374ab18 100644
--- a/src/conf/domain_conf.c
+++ b/src/conf/domain_conf.c
@@ -4166,6 +4166,7 @@ static void virDomainObjDispose(void *obj)
 virDomainCheckpointObjListFree(dom->checkpoints);
 virDomainJobObjFree(dom->job);
 virObjectUnref(dom->closecallbacks);
+g_free(dom->autostartOnceLink);
 }
 
 virDomainObj *
@@ -29135,13 +29136,17 @@ virDomainDeleteConfig(const char *configDir,
 {
 g_autofree char *configFile = NULL;
 g_autofree char *autostartLink = NULL;
+g_autofree char *autostartOnceLink = NULL;
 
 configFile = virDomainConfigFile(configDir, dom->def->name);
 autostartLink = virDomainConfigFile(autostartDir, dom->def->name);
+autostartOnceLink = g_strdup_printf("%s.once", autostartLink);
 
-/* Not fatal if this doesn't work */
+/* Not fatal if these don't work */
 unlink(autostartLink);
+unlink(autostartOnceLink);
 dom->autostart = 0;
+dom->autostartOnce = 0;
 
 if (unlink(configFile) < 0 &&
 errno != ENOENT) {
diff --git a/src/conf/domain_conf.h b/src/conf/domain_conf.h
index 3a97fd866c..c648864083 100644
--- a/src/conf/domain_conf.h
+++ b/src/conf/domain_conf.h
@@ -3324,9 +3324,11 @@ struct _virDomainObj {
 virDomainStateReason state;
 
 unsigned int autostart : 1;
+unsigned int autostartOnce : 1;
 unsigned int persistent : 1;
 unsigned int updated : 1;
 unsigned int removing : 1;
+char *autostartOnceLink;
 
 virDomainDef *def; /* The current definition */
 virDomainDef *newDef; /* New definition to activate at shutdown */
diff --git a/src/conf/virdomainobjlist.c b/src/conf/virdomainobjlist.c
index 72207450c5..90efb3465c 100644
--- a/src/conf/virdomainobjlist.c
+++ b/src/conf/virdomainobjlist.c
@@ -487,9 +487,11 @@ virDomainObjListLoadConfig(virDomainObjList *doms,
 {
 g_autofree char *configFile = NULL;
 g_autofree char *autostartLink = NULL;
+g_autofree char *autostartOnceLink = NULL;
 g_autoptr(virDomainDef) def = NULL;
 virDomainObj *dom;
 int autostart;
+int autostartOnce;
 g_autoptr(virDomainDef) oldDef = NULL;
 
 configFile = virDomainConfigFile(configDir, name);
@@ -500,13 +502,19 @@ virDomainObjListLoadConfig(virDomainObjList *doms,
 return NULL;
 
 autostartLink = virDomainConfigFile(autostartDir, name);
+autostartOnceLink = g_strdup_printf("%s.once", autostartLink);
 
 autostart = virFileLinkPointsTo(autostartLink, configFile);
+autostartOnce = virFileLinkPointsTo(autostartOnceLink, configFile);
 
 if (!(dom = virDomainObjListAddLocked(doms, &def, xmlopt, 0, &oldDef)))
 return NULL;
 
 dom->autostart = autostart;
+dom->autostartOnce = autostartOnce;
+
+if (autostartOnce)
+dom->autostartOnceLink = g_steal_pointer(&autostartOnceLink);
 
 if (notify)
 (*notify)(dom, oldDef == NULL, opaque);
diff --git a/src/hypervisor/domain_driver.c b/src/hypervisor/domain_driver.c
index d4cf09174b..bfba435ee0 100644
--- a/src/hypervisor/domain_driver.c
+++ b/src/hypervisor/domain_driver.c
@@ -682,10 +682,12 @@ virDomainDriverAutoStartOne(virDomainObj *vm,
 virObjectLock(vm);
 virObjectRef(vm);
 
-VIR_DEBUG("Autostart %s: autostart=%d",
-  vm->def->name, vm->autostart);
+VIR_DEBUG("Autostart %s: autostart=%d autostartOnce=%d 
autostartOnceLink=%s",
+  vm->def->name, vm->autostart, vm->autostartOnce,
+  NULLSTR(vm->autostartOnceLink));
 
-if (vm->autostart && !virDomainObjIsActive(vm)) {
+if ((vm->autostart || vm->autostartOnce) &&
+!virDomainObjIsActive(vm)) {
 virResetLastError();
 if (state->cfg->delayMS) {
 if (!state->first) {
@@ -696,6 +698,12 @@ virDomainDriverAutoStartOne(virDomainObj *vm,
 }
 
 state->cfg->callback(vm, state->cfg->opaque);
+vm->autostartOnce = 0;
+}
+
+if (vm->autostartOnceLink) {
+unlink(vm->autostartOnceLink);
+g_clear_pointer(&vm->autostartOnceLink, g_free);
 }
 
 virDomainObjEndAPI(&vm);
-- 
2.48.1


[PATCH v2 17/22] rpc: move state stop into virNetDaemon class

2025-03-12 Thread Daniel P . Berrangé
Currently the remote daemon code is responsible for calling virStateStop
in a background thread. The virNetDaemon code wants to synchronize with
this during shutdown, however, so the virThreadPtr must be passed over.

Even the limited synchronization done currently is flawed, however, and
fixing it requires the virNetDaemon code to take direct responsibility
for calling virStateStop in a thread.

Thus the logic is moved over into virStateStop via a further callback
to be registered by the remote daemon.

Signed-off-by: Daniel P. Berrangé 
---
 src/libvirt_remote.syms|  2 +-
 src/remote/remote_daemon.c | 40 ++
 src/rpc/virnetdaemon.c | 59 +++---
 src/rpc/virnetdaemon.h | 27 +++--
 4 files changed, 77 insertions(+), 51 deletions(-)

diff --git a/src/libvirt_remote.syms b/src/libvirt_remote.syms
index c4053fdcb2..9dcc2792c1 100644
--- a/src/libvirt_remote.syms
+++ b/src/libvirt_remote.syms
@@ -95,7 +95,7 @@ virNetDaemonQuitExecRestart;
 virNetDaemonRemoveShutdownInhibition;
 virNetDaemonRun;
 virNetDaemonSetLifecycleCallbacks;
-virNetDaemonSetStateStopWorkerThread;
+virNetDaemonStop;
 virNetDaemonUpdateServices;
 
 
diff --git a/src/remote/remote_daemon.c b/src/remote/remote_daemon.c
index 9c0025a3bc..16dc1d86f7 100644
--- a/src/remote/remote_daemon.c
+++ b/src/remote/remote_daemon.c
@@ -514,41 +514,6 @@ static void daemonInhibitCallback(bool inhibit, void 
*opaque)
 static GDBusConnection *sessionBus;
 static GDBusConnection *systemBus;
 
-static void daemonStopWorker(void *opaque)
-{
-virNetDaemon *dmn = opaque;
-
-VIR_DEBUG("Begin stop dmn=%p", dmn);
-
-ignore_value(virStateStop());
-
-VIR_DEBUG("Completed stop dmn=%p", dmn);
-
-/* Exit daemon cleanly */
-virNetDaemonQuit(dmn);
-}
-
-
-/* We do this in a thread to not block the main loop */
-static void daemonStop(virNetDaemon *dmn)
-{
-virThread *thr;
-virObjectRef(dmn);
-
-thr = g_new0(virThread, 1);
-
-if (virThreadCreateFull(thr, true,
-daemonStopWorker,
-"daemon-stop", false, dmn) < 0) {
-virObjectUnref(dmn);
-g_free(thr);
-return;
-}
-
-virNetDaemonSetStateStopWorkerThread(dmn, &thr);
-}
-
-
 static GDBusMessage *
 handleSessionMessageFunc(GDBusConnection *connection G_GNUC_UNUSED,
  GDBusMessage *message,
@@ -562,7 +527,7 @@ handleSessionMessageFunc(GDBusConnection *connection 
G_GNUC_UNUSED,
 if (virGDBusMessageIsSignal(message,
 "org.freedesktop.DBus.Local",
 "Disconnected"))
-daemonStop(dmn);
+virNetDaemonStop(dmn);
 
 return message;
 }
@@ -581,7 +546,7 @@ handleSystemMessageFunc(GDBusConnection *connection 
G_GNUC_UNUSED,
 
 VIR_DEBUG("dmn=%p", dmn);
 
-daemonStop(dmn);
+virNetDaemonStop(dmn);
 }
 
 
@@ -625,6 +590,7 @@ static void daemonRunStateInit(void *opaque)
 g_atomic_int_set(&driversInitialized, 1);
 
 virNetDaemonSetLifecycleCallbacks(dmn,
+  virStateStop,
   virStateShutdownPrepare,
   virStateShutdownWait);
 
diff --git a/src/rpc/virnetdaemon.c b/src/rpc/virnetdaemon.c
index 6d20edf28b..43a75b330f 100644
--- a/src/rpc/virnetdaemon.c
+++ b/src/rpc/virnetdaemon.c
@@ -65,6 +65,7 @@ struct _virNetDaemon {
 GHashTable *servers;
 virJSONValue *srvObject;
 
+virNetDaemonLifecycleCallback stopCb;
 virNetDaemonLifecycleCallback shutdownPrepareCb;
 virNetDaemonLifecycleCallback shutdownWaitCb;
 virThread *stateStopThread;
@@ -793,8 +794,10 @@ virNetDaemonRun(virNetDaemon *dmn)
 }
 }
 
+VIR_DEBUG("Main loop exited");
 if (dmn->graceful) {
 virThreadJoin(&shutdownThread);
+VIR_DEBUG("Graceful shutdown complete");
 } else {
 VIR_WARN("Make forcefull daemon shutdown");
 exit(EXIT_FAILURE);
@@ -806,34 +809,61 @@ virNetDaemonRun(virNetDaemon *dmn)
 
 
 void
-virNetDaemonSetStateStopWorkerThread(virNetDaemon *dmn,
- virThread **thr)
+virNetDaemonQuit(virNetDaemon *dmn)
 {
 VIR_LOCK_GUARD lock = virObjectLockGuard(dmn);
 
-VIR_DEBUG("Setting state stop worker thread on dmn=%p to thr=%p", dmn, 
thr);
-dmn->stateStopThread = g_steal_pointer(thr);
+VIR_DEBUG("Quit requested %p", dmn);
+dmn->quit = true;
 }
 
 
 void
-virNetDaemonQuit(virNetDaemon *dmn)
+virNetDaemonQuitExecRestart(virNetDaemon *dmn)
 {
 VIR_LOCK_GUARD lock = virObjectLockGuard(dmn);
 
-VIR_DEBUG("Quit requested %p", dmn);
+VIR_DEBUG("Exec-restart requested %p", dmn);
 dmn->quit = true;
+dmn->execRestart = true;
+}
+
+
+static void
+virNetDaemonStopWorker(void *opaque)
+{
+virNetDaemon *dmn = opaque;
+
+VIR_DEBUG("Begin stop dmn=%p", dmn);
+
+dmn->stopCb();
+
+VIR_DEBUG("Comple

[PATCH v2 15/22] src: clarify semantics of the various virStateNNN methods

2025-03-12 Thread Daniel P . Berrangé
It is not documented what the various virStateNNN methods are each
responsible for doing and the names give little guidance either.
Provide some useful documentation comments to explain the intended
usage of each.

Signed-off-by: Daniel P. Berrangé 
---
 src/libvirt.c | 44 +++-
 1 file changed, 39 insertions(+), 5 deletions(-)

diff --git a/src/libvirt.c b/src/libvirt.c
index 1d37696d6f..581fc6deea 100644
--- a/src/libvirt.c
+++ b/src/libvirt.c
@@ -688,7 +688,13 @@ virStateInitialize(bool privileged,
 /**
  * virStateShutdownPrepare:
  *
- * Run each virtualization driver's shutdown prepare method.
+ * Tell each driver to prepare for shutdown of the daemon. This should
+ * trigger any actions required to stop processing background work.
+ *
+ * This method is called directly from the main event loop thread, so
+ * the event loop will not execute while this method is running. After
+ * this method returns, the event loop will continue running until
+ * the virStateShutdownWait method completes.
  *
  * Returns 0 if all succeed, -1 upon any failure.
  */
@@ -709,7 +715,12 @@ virStateShutdownPrepare(void)
 /**
  * virStateShutdownWait:
  *
- * Run each virtualization driver's shutdown wait method.
+ * Tell each driver to finalize any work required to enable
+ * graceful shutdown of the daemon. This method is invoked
+ * from a background thread, and when it completes, the event
+ * loop will terminate. As such drivers can register callbacks
+ * that will prevent the event loop terminating until actions
+ * initiated by virStateShutdownPrepare are complete.
  *
  * Returns 0 if all succeed, -1 upon any failure.
  */
@@ -730,7 +741,12 @@ virStateShutdownWait(void)
 /**
  * virStateCleanup:
  *
- * Run each virtualization driver's cleanup method.
+ * Tell each driver to release all resources it holds in
+ * order to fully shutdown the daemon. When this is called
+ * the event loop will no longer be running. Thus any
+ * cleanup that depends on execution of the event loop
+ * must have been triggered by the virStateShutdownPrepare
+ * method.
  *
  * Returns 0 if all succeed, -1 upon any failure.
  */
@@ -752,7 +768,8 @@ virStateCleanup(void)
 /**
  * virStateReload:
  *
- * Run each virtualization driver's reload method.
+ * Tell each driver to reload their global configuration
+ * file(s).
  *
  * Returns 0 if all succeed, -1 upon any failure.
  */
@@ -774,7 +791,24 @@ virStateReload(void)
 /**
  * virStateStop:
  *
- * Run each virtualization driver's "stop" method.
+ * Tell each driver to prepare for system/session stop.
+ *
+ * In an unprivileged daemon, this indicates that the
+ * current user's primary login session is about to
+ * be terminated.
+ *
+ * In a privileged daemon, this indicates that the host
+ * OS is about to shutdown.
+ *
+ * This is a signal that the driver should stop and/or
+ * preserve any resources affected by the system/session
+ * stop.
+ *
+ * On host OS stop there is a very short wait for the
+ * stop action to complete. For any prolonged tasks
+ * the driver must acquire inhibitor locks, or send
+ * a request to systemd to extend the shutdown wait
+ * timeout.
  *
  * Returns 0 if successful, -1 on failure
  */
-- 
2.48.1


[PATCH v2 21/22] rpc: don't let systemd shutdown daemon while saving VMs

2025-03-12 Thread Daniel P . Berrangé
The service unit "TimeoutStopSec" setting controls how long systemd
waits for a service to stop before aggressively killing it, defaulting
to 30 seconds if not set.

When we're processing shutdown of VMs in response to OS shutdown, we
very likely need more than 30 seconds to complete this job, and can
not stop the daemon during this time.

To avoid being prematurely killed, setup a timer that repeatedly
extends the "TimeoutStopSec" value while stop of running VMs is
arranged.

This does mean that if libvirt hangs while stopping VMs, systemd won't
get to kill the libvirt daemon, but this is considered less harmful
than forcefully killing running VMs.

Signed-off-by: Daniel P. Berrangé 
---
 src/rpc/virnetdaemon.c | 53 +-
 1 file changed, 52 insertions(+), 1 deletion(-)

diff --git a/src/rpc/virnetdaemon.c b/src/rpc/virnetdaemon.c
index 53dee60703..944a832ea8 100644
--- a/src/rpc/virnetdaemon.c
+++ b/src/rpc/virnetdaemon.c
@@ -84,6 +84,7 @@ struct _virNetDaemon {
 virNetDaemonLifecycleCallback shutdownPrepareCb;
 virNetDaemonLifecycleCallback shutdownWaitCb;
 virThread *stateStopThread;
+int stopTimer;
 int quitTimer;
 virNetDaemonQuitPhase quit;
 bool graceful;
@@ -99,6 +100,25 @@ struct _virNetDaemon {
 
 static virClass *virNetDaemonClass;
 
+/*
+ * When running state stop operation which can be slow...
+ *
+ * How frequently we tell systemd to extend our stop time,
+ * and how much we ask for each time. The latter should
+ * exceed the former with a decent tolerance for high load
+ * scenarios
+ */
+#define VIR_NET_DAEMON_STOP_EXTEND_INTERVAL_MSEC (5 * 1000)
+#define VIR_NET_DAEMON_STOP_EXTRA_TIME_SEC 10
+
+/*
+ * When running daemon shutdown synchronization which
+ * ought to be moderately fast
+ */
+#define VIR_NET_DAEMON_SHUTDOWN_TIMEOUT_SEC 30
+#define VIR_NET_DAEMON_SHUTDOWN_TIMEOUT_MSEC 
(VIR_NET_DAEMON_SHUTDOWN_TIMEOUT_SEC * 1000)
+
+
 static int
 daemonServerClose(void *payload,
   const char *key G_GNUC_UNUSED,
@@ -168,6 +188,7 @@ virNetDaemonNew(void)
 if (virEventRegisterDefaultImpl() < 0)
 goto error;
 
+dmn->stopTimer = -1;
 dmn->autoShutdownTimerID = -1;
 
 #ifndef WIN32
@@ -737,6 +758,23 @@ daemonShutdownWait(void *opaque)
 }
 }
 
+static void
+virNetDaemonStopTimer(int timerid G_GNUC_UNUSED,
+  void *opaque)
+{
+virNetDaemon *dmn = opaque;
+VIR_LOCK_GUARD lock = virObjectLockGuard(dmn);
+
+if (dmn->quit != VIR_NET_DAEMON_QUIT_STOPPING)
+return;
+
+VIR_DEBUG("Extending stop timeout %u",
+  VIR_NET_DAEMON_STOP_EXTRA_TIME_SEC);
+
+virSystemdNotifyExtendTimeout(VIR_NET_DAEMON_STOP_EXTRA_TIME_SEC);
+}
+
+
 static void
 virNetDaemonQuitTimer(int timerid G_GNUC_UNUSED,
   void *opaque)
@@ -791,11 +829,19 @@ virNetDaemonRun(virNetDaemon *dmn)
 
 if (dmn->quit == VIR_NET_DAEMON_QUIT_REQUESTED) {
 VIR_DEBUG("Process quit request");
+virSystemdNotifyStopping();
 virHashForEach(dmn->servers, daemonServerClose, NULL);
 
 if (dmn->stateStopThread) {
 VIR_DEBUG("State stop thread running");
 dmn->quit = VIR_NET_DAEMON_QUIT_STOPPING;
+
virSystemdNotifyExtendTimeout(VIR_NET_DAEMON_STOP_EXTRA_TIME_SEC);
+if ((dmn->stopTimer = 
virEventAddTimeout(VIR_NET_DAEMON_STOP_EXTEND_INTERVAL_MSEC,
+ virNetDaemonStopTimer,
+ dmn, NULL)) < 0) {
+VIR_WARN("Failed to register stop timer");
+/* hope for the best */
+}
 } else {
 VIR_DEBUG("Ready to shutdown");
 dmn->quit = VIR_NET_DAEMON_QUIT_READY;
@@ -807,7 +853,8 @@ virNetDaemonRun(virNetDaemon *dmn)
 if (dmn->shutdownPrepareCb && dmn->shutdownPrepareCb() < 0)
 break;
 
-if ((dmn->quitTimer = virEventAddTimeout(30 * 1000,
+virSystemdNotifyExtendTimeout(VIR_NET_DAEMON_SHUTDOWN_TIMEOUT_SEC);
+if ((dmn->quitTimer = 
virEventAddTimeout(VIR_NET_DAEMON_SHUTDOWN_TIMEOUT_MSEC,
  virNetDaemonQuitTimer,
  dmn, NULL)) < 0) {
 VIR_WARN("Failed to register finish timer.");
@@ -879,6 +926,10 @@ virNetDaemonStopWorker(void *opaque)
 dmn->quit = VIR_NET_DAEMON_QUIT_READY;
 }
 g_clear_pointer(&dmn->stateStopThread, g_free);
+if (dmn->stopTimer != -1) {
+virEventRemoveTimeout(dmn->stopTimer);
+dmn->stopTimer = -1;
+}
 }
 
 VIR_DEBUG("End stop dmn=%p", dmn);
-- 
2.48.1


[PATCH v2 18/22] rpc: don't unconditionally quit after preserving state

2025-03-12 Thread Daniel P . Berrangé
The call to preserve state (ie running VMs) is triggered in response to
the desktop session dbus terminating (session daemon), or logind sending
a "PrepareForShutdown" signal. In the case of the latter, daemons
should only save their state, not actually exit yet. Other things on the
system may still expect the daemon to be running at this stage.

Reviewed-by: Peter Krempa 
Signed-off-by: Daniel P. Berrangé 
---
 src/remote/remote_daemon.c | 4 +++-
 src/rpc/virnetdaemon.c | 1 -
 2 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/src/remote/remote_daemon.c b/src/remote/remote_daemon.c
index 16dc1d86f7..1424d4cf5e 100644
--- a/src/remote/remote_daemon.c
+++ b/src/remote/remote_daemon.c
@@ -526,8 +526,10 @@ handleSessionMessageFunc(GDBusConnection *connection 
G_GNUC_UNUSED,
 
 if (virGDBusMessageIsSignal(message,
 "org.freedesktop.DBus.Local",
-"Disconnected"))
+"Disconnected")) {
 virNetDaemonStop(dmn);
+virNetDaemonQuit(dmn);
+}
 
 return message;
 }
diff --git a/src/rpc/virnetdaemon.c b/src/rpc/virnetdaemon.c
index 43a75b330f..469a1d3ae2 100644
--- a/src/rpc/virnetdaemon.c
+++ b/src/rpc/virnetdaemon.c
@@ -840,7 +840,6 @@ virNetDaemonStopWorker(void *opaque)
 
 VIR_DEBUG("Completed stop dmn=%p", dmn);
 
-virNetDaemonQuit(dmn);
 virObjectUnref(dmn);
 }
 
-- 
2.48.1


[PATCH v2 20/22] admin: add 'daemon-shutdown' command

2025-03-12 Thread Daniel P . Berrangé
The daemons are wired up to shut down in response to UNIX process
signals, as well as to login1 dbus signals or loss of the
desktop session. The latter two options can optionally preserve state
(ie running VMs).

In non-systemd environments, as well as for testing, it would be useful
to have a way to trigger shutdown with state preservation more directly.

Thus a new admin protocol API is introduced

  virAdmConnectDaemonShutdown

which will trigger a daemon shutdown, and preserve running VMs if the
VIR_DAEMON_SHUTDOWN_PRESERVE flag is set.

It has a corresponding 'virt-admin daemon-shutdown [--preserve]' command
binding.

Signed-off-by: Daniel P. Berrangé 
---
 docs/manpages/virt-admin.rst| 13 +
 include/libvirt/libvirt-admin.h | 13 +
 src/admin/admin_protocol.x  | 11 +++-
 src/admin/admin_server_dispatch.c   | 13 +
 src/admin/libvirt-admin.c   | 33 +++
 src/admin/libvirt_admin_public.syms |  5 
 tools/virt-admin.c  | 41 +
 7 files changed, 128 insertions(+), 1 deletion(-)

diff --git a/docs/manpages/virt-admin.rst b/docs/manpages/virt-admin.rst
index 54a6512ef7..82e9594ba6 100644
--- a/docs/manpages/virt-admin.rst
+++ b/docs/manpages/virt-admin.rst
@@ -326,6 +326,19 @@ Sets the daemon timeout to the value of '--timeout' 
argument. Use ``--timeout 0`
 to disable auto-shutdown of the daemon.
 
 
+daemon-shutdown
+---
+
+**Syntax:**
+
+::
+
+   daemon-shutdown [--preserve]
+
+Instruct the daemon to exit gracefully. If the ``--preserve`` flag is given,
+it will save state in the same manner that would be done on a host OS shutdown
+(privileged daemons) or a login session quit (unprivileged daemons).
+
 SERVER COMMANDS
 ===
 
diff --git a/include/libvirt/libvirt-admin.h b/include/libvirt/libvirt-admin.h
index ae4703f89b..d5c0a5bc8f 100644
--- a/include/libvirt/libvirt-admin.h
+++ b/include/libvirt/libvirt-admin.h
@@ -484,6 +484,19 @@ int virAdmConnectSetDaemonTimeout(virAdmConnectPtr conn,
   unsigned int timeout,
   unsigned int flags);
 
+/**
+ * virAdmConnectDaemonShutdownFlags:
+ *
+ * Since: 11.2.0
+ */
+typedef enum {
+/* Preserve state before shutting down daemon (Since: 11.2.0) */
+VIR_DAEMON_SHUTDOWN_PRESERVE = (1 << 0),
+} virAdmConnectDaemonShutdownFlags;
+
+int virAdmConnectDaemonShutdown(virAdmConnectPtr conn,
+unsigned int flags);
+
 # ifdef __cplusplus
 }
 # endif
diff --git a/src/admin/admin_protocol.x b/src/admin/admin_protocol.x
index f3130efd2d..cf5707e62e 100644
--- a/src/admin/admin_protocol.x
+++ b/src/admin/admin_protocol.x
@@ -219,6 +219,10 @@ struct admin_connect_set_daemon_timeout_args {
 unsigned int flags;
 };
 
+struct admin_connect_daemon_shutdown_args {
+unsigned int flags;
+};
+
 /* Define the program number, protocol version and procedure numbers here. */
 const ADMIN_PROGRAM = 0x06900690;
 const ADMIN_PROTOCOL_VERSION = 1;
@@ -334,5 +338,10 @@ enum admin_procedure {
 /**
  * @generate: both
  */
-ADMIN_PROC_CONNECT_SET_DAEMON_TIMEOUT = 19
+ADMIN_PROC_CONNECT_SET_DAEMON_TIMEOUT = 19,
+
+/**
+ * @generate: both
+ */
+ADMIN_PROC_CONNECT_DAEMON_SHUTDOWN = 20
 };
diff --git a/src/admin/admin_server_dispatch.c 
b/src/admin/admin_server_dispatch.c
index ae8a8d4fa6..0eee80f6ac 100644
--- a/src/admin/admin_server_dispatch.c
+++ b/src/admin/admin_server_dispatch.c
@@ -474,6 +474,19 @@ adminConnectSetDaemonTimeout(virNetDaemon *dmn,
 return virNetDaemonAutoShutdown(dmn, timeout);
 }
 
+static int
+adminConnectDaemonShutdown(virNetDaemon *dmn,
+   unsigned int flags)
+{
+virCheckFlags(VIR_DAEMON_SHUTDOWN_PRESERVE, -1);
+
+if (flags & VIR_DAEMON_SHUTDOWN_PRESERVE)
+virNetDaemonStop(dmn);
+
+virNetDaemonQuit(dmn);
+
+return 0;
+}
 
 static int
 adminDispatchConnectGetLoggingOutputs(virNetServer *server G_GNUC_UNUSED,
diff --git a/src/admin/libvirt-admin.c b/src/admin/libvirt-admin.c
index 3c756eb376..d7efac025f 100644
--- a/src/admin/libvirt-admin.c
+++ b/src/admin/libvirt-admin.c
@@ -1357,3 +1357,36 @@ virAdmConnectSetDaemonTimeout(virAdmConnectPtr conn,
 
 return ret;
 }
+
+
+/**
+ * virAdmConnectDaemonShutdown:
+ * @conn: pointer to an active admin connection
+ * @flags: optional extra flags
+ *
+ * Trigger shutdown of the daemon. If @flags includes
+ * VIR_DAEMON_SHUTDOWN_PRESERVE, state will be
+ * preserved before shutting down.
+ *
+ * Returns 0 on success, -1 on error.
+ *
+ * Since: 11.2.0
+ */
+int
+virAdmConnectDaemonShutdown(virAdmConnectPtr conn,
+unsigned int flags)
+{
+int ret;
+
+VIR_DEBUG("conn=%p, flags=0x%x", conn, flags);
+
+virResetLastError();
+virCheckAdmConnectReturn(conn, -1);
+
+if ((ret = remoteAdminConnectDaemonShutdown(conn, flags)) < 0) {
+virDis

[PATCH v2 02/22] remote: always invoke virStateStop for all daemons

2025-03-12 Thread Daniel P . Berrangé
Currently the virStateStop method is only wired up to run for
the unprivileged daemons, so there is no functional change.

IOW, session exit, or host OS shutdown will trigger VM managed saved
for QEMU session daemon, but not the system daemon.

This changes the daemon code to always run virStateStop for all
daemons. Instead, the QEMU driver is responsible for skipping its
own logic when running privileged... for now.

This means that virStateStop will now be triggered by logind's
PrepareForShutdown signal.

Reviewed-by: Peter Krempa 
Signed-off-by: Daniel P. Berrangé 
---
 src/qemu/qemu_driver.c |  3 ++-
 src/remote/remote_daemon.c | 34 ++
 2 files changed, 20 insertions(+), 17 deletions(-)

diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 14b128d268..7e6ce5f69b 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -949,7 +949,8 @@ qemuStateStop(void)
 .uri = cfg->uri,
 };
 
-virDomainDriverAutoShutdown(&ascfg);
+if (!qemu_driver->privileged)
+virDomainDriverAutoShutdown(&ascfg);
 
 return 0;
 }
diff --git a/src/remote/remote_daemon.c b/src/remote/remote_daemon.c
index d90355c0d2..2f6cef1828 100644
--- a/src/remote/remote_daemon.c
+++ b/src/remote/remote_daemon.c
@@ -628,30 +628,32 @@ static void daemonRunStateInit(void *opaque)
  virStateShutdownPrepare,
  virStateShutdownWait);
 
-/* Tie the non-privileged daemons to the session/shutdown lifecycle */
+/* Signal for VM shutdown when desktop session is terminated, in
+ * unprivileged daemons */
 if (!virNetDaemonIsPrivileged(dmn)) {
-
 if (virGDBusHasSessionBus()) {
 sessionBus = virGDBusGetSessionBus();
 if (sessionBus != NULL)
 g_dbus_connection_add_filter(sessionBus,
                                             handleSessionMessageFunc, dmn, NULL);
 }
+}
 
-if (virGDBusHasSystemBus()) {
-systemBus = virGDBusGetSystemBus();
-if (systemBus != NULL)
-g_dbus_connection_signal_subscribe(systemBus,
-   "org.freedesktop.login1",
-                                                   "org.freedesktop.login1.Manager",
-   "PrepareForShutdown",
-   NULL,
-   NULL,
-   G_DBUS_SIGNAL_FLAGS_NONE,
-   handleSystemMessageFunc,
-   dmn,
-   NULL);
-}
+if (virGDBusHasSystemBus()) {
+/* Signal for VM shutdown when host OS shutdown is requested, in
+ * both privileged and unprivileged daemons */
+systemBus = virGDBusGetSystemBus();
+if (systemBus != NULL)
+g_dbus_connection_signal_subscribe(systemBus,
+   "org.freedesktop.login1",
+                                                   "org.freedesktop.login1.Manager",
+   "PrepareForShutdown",
+   NULL,
+   NULL,
+   G_DBUS_SIGNAL_FLAGS_NONE,
+   handleSystemMessageFunc,
+   dmn,
+   NULL);
 }
 
 /* Only now accept clients from network */
-- 
2.48.1


[PATCH 00/17] qemu: Clean up various unneeded capabilities

2025-03-12 Thread Peter Krempa
The recent bump of the minimum qemu version resulted in some capabilities
always being present. I noticed one while doing a review and from
there removed a few others, mostly storage-related ones.

Peter Krempa (17):
  qemuBuildCompatDeprecatedCommandLine: Assume that
QEMU_CAPS_COMPAT_DEPRECATED is supported
  qemu: capabilities: Retire QEMU_CAPS_COMPAT_DEPRECATED
  qemuBuildObjectCommandlineFromJSON: Assume all qemus support
QEMU_CAPS_OBJECT_JSON
  qemu: monitor: Drop support for extra wrapper for 'object_add'
  util: Drop 'virQEMUBuildCommandLineJSONArrayBitmap'
  qemu: capabilities: Retire QEMU_CAPS_OBJECT_JSON
  qemu: monitor: Always assume support for
QEMU_CAPS_QMP_QUERY_NAMED_BLOCK_NODES_FLAT
  qemu: capabilities: Retire QEMU_CAPS_QMP_QUERY_NAMED_BLOCK_NODES_FLAT
  qemu: Always assume support for QEMU_CAPS_BLOCKDEV_REOPEN
  qemu: capabilities: Retire QEMU_CAPS_BLOCKDEV_REOPEN
  qemu: Always assume support for
QEMU_CAPS_BLOCKDEV_SNAPSHOT_ALLOW_WRITE_ONLY
  qemu: capabilities: Retire
QEMU_CAPS_BLOCKDEV_SNAPSHOT_ALLOW_WRITE_ONLY
  qemu: Always assume support for QEMU_CAPS_INCREMENTAL_BACKUP
  qemu: capabilities: Retire QEMU_CAPS_INCREMENTAL_BACKUP
  qemu: domain: Remove qemuDomainSupportsCheckpointsBlockjobs
  qemu: migration: Always assume support for
QEMU_CAPS_MIGRATION_PARAM_BLOCK_BITMAP_MAPPING
  qemu: capabilities: Retire
QEMU_CAPS_MIGRATION_PARAM_BLOCK_BITMAP_MAPPING

 src/libvirt_private.syms  |  1 -
 src/qemu/qemu_backup.c| 18 -
 src/qemu/qemu_block.c |  4 --
 src/qemu/qemu_blockjob.c  | 20 ++
 src/qemu/qemu_capabilities.c  | 33 +
 src/qemu/qemu_capabilities.h  | 14 +++
 src/qemu/qemu_checkpoint.c| 18 +
 src/qemu/qemu_command.c   | 31 +++-
 src/qemu/qemu_domain.c| 23 
 src/qemu/qemu_domain.h|  4 --
 src/qemu/qemu_driver.c| 37 +--
 src/qemu/qemu_migration.c | 10 +
 src/qemu/qemu_monitor.c   | 28 +-
 src/qemu/qemu_monitor_json.c  |  2 +-
 src/qemu/qemu_monitor_priv.h  |  4 --
 src/qemu/qemu_snapshot.c  |  4 --
 src/util/virqemu.c| 36 --
 src/util/virqemu.h|  3 --
 .../caps_10.0.0_s390x.xml |  7 
 .../caps_10.0.0_x86_64+amdsev.xml |  7 
 .../caps_10.0.0_x86_64.xml|  7 
 .../qemucapabilitiesdata/caps_6.2.0_ppc64.xml |  7 
 .../caps_6.2.0_x86_64.xml |  7 
 .../qemucapabilitiesdata/caps_7.0.0_ppc64.xml |  7 
 .../caps_7.0.0_x86_64.xml |  7 
 .../qemucapabilitiesdata/caps_7.1.0_ppc64.xml |  7 
 .../caps_7.1.0_x86_64.xml |  7 
 tests/qemucapabilitiesdata/caps_7.2.0_ppc.xml |  7 
 .../caps_7.2.0_x86_64+hvf.xml |  7 
 .../caps_7.2.0_x86_64.xml |  7 
 .../caps_8.0.0_x86_64.xml |  7 
 .../qemucapabilitiesdata/caps_8.1.0_s390x.xml |  7 
 .../caps_8.1.0_x86_64.xml |  7 
 .../caps_8.2.0_aarch64.xml|  7 
 .../caps_8.2.0_armv7l.xml |  7 
 .../caps_8.2.0_loongarch64.xml|  7 
 .../qemucapabilitiesdata/caps_8.2.0_s390x.xml |  7 
 .../caps_8.2.0_x86_64.xml |  7 
 .../qemucapabilitiesdata/caps_9.0.0_sparc.xml |  7 
 .../caps_9.0.0_x86_64.xml |  7 
 .../caps_9.1.0_riscv64.xml|  7 
 .../qemucapabilitiesdata/caps_9.1.0_s390x.xml |  7 
 .../caps_9.1.0_x86_64.xml |  7 
 .../caps_9.2.0_aarch64+hvf.xml|  7 
 .../qemucapabilitiesdata/caps_9.2.0_s390x.xml |  7 
 .../caps_9.2.0_x86_64+amdsev.xml  |  7 
 .../caps_9.2.0_x86_64.xml |  7 
 tests/qemucommandutiltest.c   | 30 ---
 48 files changed, 40 insertions(+), 483 deletions(-)

-- 
2.48.1


[PATCH 01/17] qemuBuildCompatDeprecatedCommandLine: Assume that QEMU_CAPS_COMPAT_DEPRECATED is supported

2025-03-12 Thread Peter Krempa
Bumping the minimum version of qemu to 6.2 means that the '-compat'
option is now always supported.

As we were unable to detect it in any other way we based this capability
on QEMU_CAPS_OBJECT_JSON.

Signed-off-by: Peter Krempa 
---
 src/qemu/qemu_command.c | 12 ++--
 1 file changed, 2 insertions(+), 10 deletions(-)

diff --git a/src/qemu/qemu_command.c b/src/qemu/qemu_command.c
index 07cdadfd73..2b0e3dd53a 100644
--- a/src/qemu/qemu_command.c
+++ b/src/qemu/qemu_command.c
@@ -10339,8 +10339,7 @@ VIR_ENUM_IMPL(qemuCommandDeprecationBehavior,
 static void
 qemuBuildCompatDeprecatedCommandLine(virCommand *cmd,
  virQEMUDriverConfig *cfg,
- virDomainDef *def,
- virQEMUCaps *qemuCaps)
+ virDomainDef *def)
 {
 g_autoptr(virJSONValue) props = NULL;
 g_autofree char *propsstr = NULL;
@@ -10365,13 +10364,6 @@ qemuBuildCompatDeprecatedCommandLine(virCommand *cmd,
 if (behavior == QEMU_COMMAND_DEPRECATION_BEHAVIOR_NONE)
 return;

-/* we don't try to enable this feature at all if qemu doesn't support it,
- * so that a downgrade of qemu version doesn't impact startup of the VM */
-if (!virQEMUCapsGet(qemuCaps, QEMU_CAPS_COMPAT_DEPRECATED)) {
-VIR_DEBUG("-compat not supported for VM '%s'", def->name);
-return;
-}
-
 switch (behavior) {
 case QEMU_COMMAND_DEPRECATION_BEHAVIOR_OMIT:
 case QEMU_COMMAND_DEPRECATION_BEHAVIOR_NONE:
@@ -10455,7 +10447,7 @@ qemuBuildCommandLine(virDomainObj *vm,
 if (qemuBuildNameCommandLine(cmd, cfg, def) < 0)
 return NULL;

-qemuBuildCompatDeprecatedCommandLine(cmd, cfg, def, qemuCaps);
+qemuBuildCompatDeprecatedCommandLine(cmd, cfg, def);

 virCommandAddArg(cmd, "-S"); /* freeze CPUs during startup */

-- 
2.48.1

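For context on the patch above, here is a rough, illustrative sketch of how a deprecation-behavior setting could translate into the properties qemu's documented ``-compat`` option accepts (``deprecated-input=accept|reject|crash``, ``deprecated-output=accept|hide``). This is an assumption-laden sketch for the reader, not a copy of libvirt's implementation; in particular, the choice of ``hide`` for the output policy in both modes is an assumption.

```python
# Illustrative only: map a libvirt-style deprecation behavior name to the
# property dict that would back a '-compat' argument. Returning None means
# no '-compat' argument would be emitted at all.
def compat_props(behavior):
    if behavior in ("none", "omit"):
        return None  # '-compat' is not emitted for these modes
    if behavior not in ("reject", "crash"):
        raise ValueError(f"unknown behavior: {behavior}")
    # qemu accepts 'reject' and 'crash' directly as deprecated-input policies
    return {"deprecated-input": behavior, "deprecated-output": "hide"}
```

E.g. ``compat_props("crash")`` yields ``{"deprecated-input": "crash", "deprecated-output": "hide"}``, which matches the shape of props that qemuBuildCompatDeprecatedCommandLine serializes onto the command line.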

[PATCH 05/17] util: Drop 'virQEMUBuildCommandLineJSONArrayBitmap'

2025-03-12 Thread Peter Krempa
It was used to convert JSON arrays in legacy -object commandline
conversion. Since we now exclusively use JSON with -object, this
infrastructure is no longer needed.

Signed-off-by: Peter Krempa 
---
 src/libvirt_private.syms|  1 -
 src/util/virqemu.c  | 36 
 src/util/virqemu.h  |  3 ---
 tests/qemucommandutiltest.c | 30 --
 4 files changed, 70 deletions(-)

diff --git a/src/libvirt_private.syms b/src/libvirt_private.syms
index e63939e2b5..e78abdad15 100644
--- a/src/libvirt_private.syms
+++ b/src/libvirt_private.syms
@@ -3279,7 +3279,6 @@ virProcessWait;
 # util/virqemu.h
 virQEMUBuildBufferEscapeComma;
 virQEMUBuildCommandLineJSON;
-virQEMUBuildCommandLineJSONArrayBitmap;
 virQEMUBuildCommandLineJSONArrayNumbered;
 virQEMUBuildCommandLineJSONArrayObjectsStr;

diff --git a/src/util/virqemu.c b/src/util/virqemu.c
index c9fac5956a..d2a69026c8 100644
--- a/src/util/virqemu.c
+++ b/src/util/virqemu.c
@@ -50,42 +50,6 @@ virQEMUBuildCommandLineJSONRecurse(const char *key,



-int
-virQEMUBuildCommandLineJSONArrayBitmap(const char *key,
-   virJSONValue *array,
-   virBuffer *buf)
-{
-ssize_t pos = -1;
-ssize_t end;
-g_autoptr(virBitmap) bitmap = virBitmapNew(0);
-size_t i;
-
-for (i = 0; i < virJSONValueArraySize(array); i++) {
-virJSONValue *member = virJSONValueArrayGet(array, i);
-unsigned long long value;
-
-if (virJSONValueGetNumberUlong(member, &value) < 0)
-return -1;
-
-virBitmapSetBitExpand(bitmap, value);
-}
-
-while ((pos = virBitmapNextSetBit(bitmap, pos)) > -1) {
-if ((end = virBitmapNextClearBit(bitmap, pos)) < 0)
-end = virBitmapLastSetBit(bitmap) + 1;
-
-if (end - 1 > pos) {
-virBufferAsprintf(buf, "%s=%zd-%zd,", key, pos, end - 1);
-pos = end;
-} else {
-virBufferAsprintf(buf, "%s=%zd,", key, pos);
-}
-}
-
-return 0;
-}
-
-
 int
 virQEMUBuildCommandLineJSONArrayNumbered(const char *key,
  virJSONValue *array,
diff --git a/src/util/virqemu.h b/src/util/virqemu.h
index be083d7545..e5d36b95c4 100644
--- a/src/util/virqemu.h
+++ b/src/util/virqemu.h
@@ -32,9 +32,6 @@ typedef int (*virQEMUBuildCommandLineJSONArrayFormatFunc)(const char *key,
 int virQEMUBuildCommandLineJSONArrayObjectsStr(const char *key,
virJSONValue *array,
virBuffer *buf);
-int virQEMUBuildCommandLineJSONArrayBitmap(const char *key,
-   virJSONValue *array,
-   virBuffer *buf);
 int virQEMUBuildCommandLineJSONArrayNumbered(const char *key,
  virJSONValue *array,
  virBuffer *buf);
diff --git a/tests/qemucommandutiltest.c b/tests/qemucommandutiltest.c
index f92fb8f177..58493c703f 100644
--- a/tests/qemucommandutiltest.c
+++ b/tests/qemucommandutiltest.c
@@ -83,39 +83,9 @@ mymain(void)
 ret = -1; \
  } while (0)

-#define DO_TEST_COMMAND_OBJECT_FROM_JSON(PROPS, EXPECT) \
-DO_TEST_COMMAND_FROM_JSON(PROPS, virQEMUBuildCommandLineJSONArrayBitmap, EXPECT)
-
 #define DO_TEST_COMMAND_DRIVE_FROM_JSON(PROPS, EXPECT) \
 DO_TEST_COMMAND_FROM_JSON(PROPS, virQEMUBuildCommandLineJSONArrayNumbered, EXPECT)

-DO_TEST_COMMAND_OBJECT_FROM_JSON("{}", NULL);
-DO_TEST_COMMAND_OBJECT_FROM_JSON("{\"string\":\"qwer\"}", "string=qwer");
-DO_TEST_COMMAND_OBJECT_FROM_JSON("{\"string\":\"qw,e,r\"}", "string=qw,,e,,r");
-DO_TEST_COMMAND_OBJECT_FROM_JSON("{\"number\":1234}", "number=1234");
-DO_TEST_COMMAND_OBJECT_FROM_JSON("{\"boolean\":true}", "boolean=on");
-DO_TEST_COMMAND_OBJECT_FROM_JSON("{\"boolean\":false}", "boolean=off");
-DO_TEST_COMMAND_OBJECT_FROM_JSON("{\"bitmap\":[]}", NULL);
-DO_TEST_COMMAND_OBJECT_FROM_JSON("{\"bitmap\":[0]}", "bitmap=0");
-DO_TEST_COMMAND_OBJECT_FROM_JSON("{\"bitmap\":[1,3,5]}",
- "bitmap=1,bitmap=3,bitmap=5");
-DO_TEST_COMMAND_OBJECT_FROM_JSON("{\"bitmap\":[0,1,2,3]}", "bitmap=0-3");
-DO_TEST_COMMAND_OBJECT_FROM_JSON("{\"bitmap\":[1,2,3,5]}",
- "bitmap=1-3,bitmap=5");
-DO_TEST_COMMAND_OBJECT_FROM_JSON("{\"bitmap\":[1,2,3,5,7,8,9]}",
- "bitmap=1-3,bitmap=5,bitmap=7-9");
-DO_TEST_COMMAND_OBJECT_FROM_JSON("{\"array\":[\"bleah\",\"qwerty\",1]}",
- "array=bleah,array=qwerty,array=1");
-DO_TEST_COMMAND_OBJECT_FROM_JSON("{\"boolean\":true,\"hyphen-name\":1234,\"some_string\":\"bleah\"}",
-                                 "boolean=on,hyphen-name=1234,some_string=bleah");
-DO_TEST_COMMAND_OBJ

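For readers unfamiliar with the retired helper, the bitmap cases deleted above show its behavior: virQEMUBuildCommandLineJSONArrayBitmap collapsed a JSON array of set-bit values into qemu's legacy ``key=a-b,key=c`` range syntax. A minimal sketch reproducing that formatting (the original emitted a trailing comma that the caller stripped; this sketch simply joins without it):

```python
def bitmap_to_args(key, bits):
    """Collapse set-bit values into 'key=a-b,key=c' range syntax,
    mirroring what virQEMUBuildCommandLineJSONArrayBitmap produced."""
    runs = []
    for bit in sorted(set(bits)):
        if runs and runs[-1][1] == bit - 1:
            runs[-1] = (runs[-1][0], bit)   # extend the current run
        else:
            runs.append((bit, bit))         # start a new run
    return ",".join(f"{key}={a}-{b}" if b > a else f"{key}={a}"
                    for a, b in runs)
```

The expected strings match the deleted test cases, e.g. ``bitmap_to_args("bitmap", [1, 2, 3, 5])`` gives ``"bitmap=1-3,bitmap=5"``.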
[PATCH 07/17] qemu: monitor: Always assume support for QEMU_CAPS_QMP_QUERY_NAMED_BLOCK_NODES_FLAT

2025-03-12 Thread Peter Krempa
The flat mode of 'query-named-block-nodes' is supported since qemu-5.0.

Signed-off-by: Peter Krempa 
---
 src/qemu/qemu_monitor.c  | 1 -
 src/qemu/qemu_monitor_json.c | 2 +-
 src/qemu/qemu_monitor_priv.h | 2 --
 3 files changed, 1 insertion(+), 4 deletions(-)

diff --git a/src/qemu/qemu_monitor.c b/src/qemu/qemu_monitor.c
index 724e82e8a4..8d8e73d38d 100644
--- a/src/qemu/qemu_monitor.c
+++ b/src/qemu/qemu_monitor.c
@@ -600,7 +600,6 @@ qemuMonitorOpenInternal(virDomainObj *vm,
 mon->cb = cb;

 if (priv) {
-mon->queryNamedBlockNodesFlat = virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_QMP_QUERY_NAMED_BLOCK_NODES_FLAT);
 mon->blockjobMaskProtocol = virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_BLOCKJOB_BACKING_MASK_PROTOCOL);
 }

diff --git a/src/qemu/qemu_monitor_json.c b/src/qemu/qemu_monitor_json.c
index 74420b2ee7..3caeb39a1b 100644
--- a/src/qemu/qemu_monitor_json.c
+++ b/src/qemu/qemu_monitor_json.c
@@ -2085,7 +2085,7 @@ qemuMonitorJSONQueryNamedBlockNodes(qemuMonitor *mon)
 g_autoptr(virJSONValue) reply = NULL;

 if (!(cmd = qemuMonitorJSONMakeCommand("query-named-block-nodes",
-   "B:flat", mon->queryNamedBlockNodesFlat,
+   "b:flat", true,
NULL)))
 return NULL;

diff --git a/src/qemu/qemu_monitor_priv.h b/src/qemu/qemu_monitor_priv.h
index 8cb5e2c3a4..60a3cedb64 100644
--- a/src/qemu/qemu_monitor_priv.h
+++ b/src/qemu/qemu_monitor_priv.h
@@ -88,8 +88,6 @@ struct _qemuMonitor {
 void *logOpaque;
 virFreeCallback logDestroy;

-/* query-named-block-nodes supports the 'flat' option */
-bool queryNamedBlockNodesFlat;
 /* use the backing-mask-protocol flag of block-commit/stream */
 bool blockjobMaskProtocol;
 };
-- 
2.48.1

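To make the effect of this patch concrete: after the change, libvirt unconditionally sends ``flat: true`` with the command. A small sketch of the QMP command JSON that results (the helper below is a conceptual stand-in for qemuMonitorJSONMakeCommand, not libvirt's actual implementation):

```python
import json

def make_qmp_command(name, **arguments):
    """Build a QMP command dict: {'execute': ..., 'arguments': {...}}.
    Conceptual sketch of what the monitor code serializes."""
    cmd = {"execute": name}
    if arguments:
        cmd["arguments"] = arguments
    return cmd

cmd = make_qmp_command("query-named-block-nodes", flat=True)
print(json.dumps(cmd, sort_keys=True))
# {"arguments": {"flat": true}, "execute": "query-named-block-nodes"}
```

With the capability dropped, the ``"b:flat", true`` argument in the diff always produces the ``{"flat": true}`` arguments object above.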

[PATCH 15/17] qemu: domain: Remove qemuDomainSupportsCheckpointsBlockjobs

2025-03-12 Thread Peter Krempa
The function now serves no real purpose.

Signed-off-by: Peter Krempa 
---
 src/qemu/qemu_block.c|  3 ---
 src/qemu/qemu_domain.c   | 14 --
 src/qemu/qemu_domain.h   |  4 
 src/qemu/qemu_driver.c   |  6 --
 src/qemu/qemu_snapshot.c |  4 
 5 files changed, 31 deletions(-)

diff --git a/src/qemu/qemu_block.c b/src/qemu/qemu_block.c
index 76e7f4ca83..2468725bf7 100644
--- a/src/qemu/qemu_block.c
+++ b/src/qemu/qemu_block.c
@@ -3592,9 +3592,6 @@ qemuBlockCommit(virDomainObj *vm,
 if (qemuDomainDiskBlockJobIsActive(disk))
 return NULL;

-if (qemuDomainSupportsCheckpointsBlockjobs(vm) < 0)
-return NULL;
-
 if (topSource == disk->src) {
 /* XXX Should we auto-pivot when COMMIT_ACTIVE is not specified? */
 if (!(flags & VIR_DOMAIN_BLOCK_COMMIT_ACTIVE)) {
diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
index ea29765c61..29fac0034e 100644
--- a/src/qemu/qemu_domain.c
+++ b/src/qemu/qemu_domain.c
@@ -10169,20 +10169,6 @@ qemuDomainDefHasManagedPR(virDomainObj *vm)
 }


-/**
- * qemuDomainSupportsCheckpointsBlockjobs:
- * @vm: domain object
- *
- * Checks whether a block job is supported in possible combination with
- * checkpoints (qcow2 bitmaps). Returns -1 if unsupported and reports an error
- * 0 in case everything is supported.
- */
-int
-qemuDomainSupportsCheckpointsBlockjobs(virDomainObj *vm G_GNUC_UNUSED)
-{
-return 0;
-}
-
 /**
  * qemuDomainInitializePflashStorageSource:
  *
diff --git a/src/qemu/qemu_domain.h b/src/qemu/qemu_domain.h
index 04577f1297..8e53a270a7 100644
--- a/src/qemu/qemu_domain.h
+++ b/src/qemu/qemu_domain.h
@@ -1053,10 +1053,6 @@ int
 qemuDomainValidateActualNetDef(const virDomainNetDef *net,
virQEMUCaps *qemuCaps);

-int
-qemuDomainSupportsCheckpointsBlockjobs(virDomainObj *vm)
-G_GNUC_WARN_UNUSED_RESULT;
-
 int
 qemuDomainMakeCPUMigratable(virArch arch,
 virCPUDef *cpu,
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index f974d2fba8..af5445f78d 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -13624,9 +13624,6 @@ qemuDomainBlockPullCommon(virDomainObj *vm,
 if (virDomainObjCheckActive(vm) < 0)
 goto endjob;

-if (qemuDomainSupportsCheckpointsBlockjobs(vm) < 0)
-goto endjob;
-
 if (!(disk = qemuDomainDiskByName(vm->def, path)))
 goto endjob;

@@ -14150,9 +14147,6 @@ qemuDomainBlockCopyCommon(virDomainObj *vm,
 if (virDomainObjCheckActive(vm) < 0)
 goto endjob;

-if (qemuDomainSupportsCheckpointsBlockjobs(vm) < 0)
-goto endjob;
-
 if (!(disk = qemuDomainDiskByName(vm->def, path)))
 goto endjob;

diff --git a/src/qemu/qemu_snapshot.c b/src/qemu/qemu_snapshot.c
index 9556bd1216..9c2ab47c51 100644
--- a/src/qemu/qemu_snapshot.c
+++ b/src/qemu/qemu_snapshot.c
@@ -1082,7 +1082,6 @@ qemuSnapshotPrepare(virDomainObj *vm,
 }

 /* Handle interlocking with 'checkpoints':
- * - if the VM is online use qemuDomainSupportsCheckpointsBlockjobs
  * - if the VM is offline disallow external snapshots as the support for
 *   propagating bitmaps into the would-be-created overlay is not yet implemented
  */
@@ -1093,9 +1092,6 @@ qemuSnapshotPrepare(virDomainObj *vm,
 _("support for offline external snapshots while checkpoint exists was not yet implemented"));
 return -1;
 }
-} else {
-if (qemuDomainSupportsCheckpointsBlockjobs(vm) < 0)
-return -1;
 }

 /* Alter flags to let later users know what we learned.  */
-- 
2.48.1


[PATCH 11/17] qemu: Always assume support for QEMU_CAPS_BLOCKDEV_SNAPSHOT_ALLOW_WRITE_ONLY

2025-03-12 Thread Peter Krempa
qemu supports the @allow-write-only-overlay feature since qemu-5.0.
Remove the alternate code paths.

Signed-off-by: Peter Krempa 
---
 src/qemu/qemu_block.c|  1 -
 src/qemu/qemu_blockjob.c |  9 +++--
 src/qemu/qemu_driver.c   | 20 ++--
 3 files changed, 9 insertions(+), 21 deletions(-)

diff --git a/src/qemu/qemu_block.c b/src/qemu/qemu_block.c
index eb0621463e..76e7f4ca83 100644
--- a/src/qemu/qemu_block.c
+++ b/src/qemu/qemu_block.c
@@ -3826,7 +3826,6 @@ qemuBlockPivot(virDomainObj *vm,
  * to copy data into the backing chain while the top image is being
  * copied shallow */
 if (reuse && shallow &&
-virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_BLOCKDEV_SNAPSHOT_ALLOW_WRITE_ONLY) &&
 virStorageSourceHasBacking(disk->mirror)) {

 if (qemuProcessPrepareHostStorageSourceChain(vm, disk->mirror->backingStore) < 0)
diff --git a/src/qemu/qemu_blockjob.c b/src/qemu/qemu_blockjob.c
index be18f7b273..7b7d54fdca 100644
--- a/src/qemu/qemu_blockjob.c
+++ b/src/qemu/qemu_blockjob.c
@@ -1268,7 +1268,6 @@ qemuBlockJobProcessEventConcludedCopyAbort(virQEMUDriver *driver,
qemuBlockJobData *job,
virDomainAsyncJob asyncJob)
 {
-qemuDomainObjPrivate *priv = vm->privateData;
 g_autoptr(virStorageSource) mirror = NULL;

 VIR_DEBUG("copy job '%s' on VM '%s' aborted", job->name, vm->def->name);
@@ -1283,12 +1282,10 @@ qemuBlockJobProcessEventConcludedCopyAbort(virQEMUDriver *driver,
 bool reuse = job->jobflags & VIR_DOMAIN_BLOCK_COPY_REUSE_EXT;

 /* In the special case of a shallow copy with reused image we don't
- * hotplug the full chain when QEMU_CAPS_BLOCKDEV_SNAPSHOT_ALLOW_WRITE_ONLY
- * is supported. Attempting to delete it would thus result in spurious
- * errors as we'd attempt to blockdev-del images which were not added
- * yet */
+ * hotplug the full chain. Attempting to delete it would thus result in
+ * spurious errors as we'd attempt to blockdev-del images which were
+ * not added yet */
 if (reuse && shallow &&
-virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_BLOCKDEV_SNAPSHOT_ALLOW_WRITE_ONLY) &&
 virStorageSourceHasBacking(job->disk->mirror))
 g_clear_pointer(&job->disk->mirror->backingStore, virObjectUnref);
 }
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index ef731cb072..f974d2fba8 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -14299,22 +14299,14 @@ qemuDomainBlockCopyCommon(virDomainObj *vm,
  * level is being copied. To restore this semantics if
  * blockdev-reopen is supported defer opening of the backing chain
  * of 'mirror' to the pivot step */
-if (virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_BLOCKDEV_SNAPSHOT_ALLOW_WRITE_ONLY)) {
-g_autoptr(virStorageSource) terminator = virStorageSourceNew();
+g_autoptr(virStorageSource) terminator = virStorageSourceNew();

-if (qemuProcessPrepareHostStorageSource(vm, mirror) < 0)
-goto endjob;
-
-if (!(data = qemuBuildStorageSourceChainAttachPrepareBlockdevTop(mirror,
-                                                                 terminator)))
-goto endjob;
-} else {
-if (qemuProcessPrepareHostStorageSourceChain(vm, mirror) < 0)
-goto endjob;
+if (qemuProcessPrepareHostStorageSource(vm, mirror) < 0)
+goto endjob;

-if (!(data = qemuBuildStorageSourceChainAttachPrepareBlockdev(mirror)))
-goto endjob;
-}
+if (!(data = qemuBuildStorageSourceChainAttachPrepareBlockdevTop(mirror,
+                                                                 terminator)))
+goto endjob;
 } else {
 if (!(blockNamedNodeData = qemuBlockGetNamedNodeData(vm, VIR_ASYNC_JOB_NONE)))
 goto endjob;
-- 
2.48.1


[PATCH 13/17] qemu: Always assume support for QEMU_CAPS_INCREMENTAL_BACKUP

2025-03-12 Thread Peter Krempa
The support for incremental backup (not the backup API itself) was gated
on support for migrating bitmaps. As the ability to migrate bitmaps was
added in qemu-6.0, we can now assume that all supported qemu versions
support incremental backup.

Remove the interlocking.

Signed-off-by: Peter Krempa 
---
 src/qemu/qemu_backup.c   | 18 --
 src/qemu/qemu_capabilities.c |  2 +-
 src/qemu/qemu_checkpoint.c   | 12 
 src/qemu/qemu_domain.c   | 11 +--
 4 files changed, 2 insertions(+), 41 deletions(-)

diff --git a/src/qemu/qemu_backup.c b/src/qemu/qemu_backup.c
index f64639d501..43576d135b 100644
--- a/src/qemu/qemu_backup.c
+++ b/src/qemu/qemu_backup.c
@@ -806,24 +806,6 @@ qemuBackupBegin(virDomainObj *vm,
 if (virDomainBackupAlignDisks(def, vm->def, suffix) < 0)
 goto endjob;

-if (!virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_INCREMENTAL_BACKUP)) {
-size_t i;
-
-if (chkdef) {
-virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s",
-   _("creating checkpoint for incremental backup is not supported yet"));
-goto endjob;
-}
-
-for (i = 0; i < def->ndisks; i++) {
-if (def->disks[i].backupmode == VIR_DOMAIN_BACKUP_DISK_BACKUP_MODE_INCREMENTAL) {
-virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s",
-   _("incremental backup is not supported yet"));
-goto endjob;
-}
-}
-}
-
 if (priv->backup) {
 virReportError(VIR_ERR_OPERATION_INVALID, "%s",
_("another backup job is already running"));
diff --git a/src/qemu/qemu_capabilities.c b/src/qemu/qemu_capabilities.c
index 615efbf021..117648f03f 100644
--- a/src/qemu/qemu_capabilities.c
+++ b/src/qemu/qemu_capabilities.c
@@ -6392,7 +6392,7 @@ static const struct virQEMUCapsDomainFeatureCapabilityTuple domCapsTuples[] = {
 { VIR_DOMAIN_CAPS_FEATURE_VMCOREINFO, QEMU_CAPS_DEVICE_VMCOREINFO },
 { VIR_DOMAIN_CAPS_FEATURE_GENID, QEMU_CAPS_DEVICE_VMGENID },
 { VIR_DOMAIN_CAPS_FEATURE_BACKING_STORE_INPUT, QEMU_CAPS_LAST },
-{ VIR_DOMAIN_CAPS_FEATURE_BACKUP, QEMU_CAPS_INCREMENTAL_BACKUP },
+{ VIR_DOMAIN_CAPS_FEATURE_BACKUP, QEMU_CAPS_LAST },
 { VIR_DOMAIN_CAPS_FEATURE_ASYNC_TEARDOWN, QEMU_CAPS_RUN_WITH_ASYNC_TEARDOWN },
 };

diff --git a/src/qemu/qemu_checkpoint.c b/src/qemu/qemu_checkpoint.c
index b05aaa246e..cf44e45aa1 100644
--- a/src/qemu/qemu_checkpoint.c
+++ b/src/qemu/qemu_checkpoint.c
@@ -588,12 +588,6 @@ qemuCheckpointCreateXML(virDomainPtr domain,
_("cannot create checkpoint for inactive domain"));
 return NULL;
 }
-
-if (!virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_INCREMENTAL_BACKUP)) {
-virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s",
-   _("incremental backup is not supported yet"));
-return NULL;
-}
 }

 if (!(def = virDomainCheckpointDefParseString(xmlDesc, driver->xmlopt,
@@ -855,12 +849,6 @@ qemuCheckpointDelete(virDomainObj *vm,
_("cannot delete checkpoint for inactive domain"));
 goto endjob;
 }
-
-if (!virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_INCREMENTAL_BACKUP)) {
-virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s",
-   _("incremental backup is not supported yet"));
-goto endjob;
-}
 }

 if (!(chk = qemuCheckpointObjFromCheckpoint(vm, checkpoint)))
diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
index 8be2181156..ea29765c61 100644
--- a/src/qemu/qemu_domain.c
+++ b/src/qemu/qemu_domain.c
@@ -10178,17 +10178,8 @@ qemuDomainDefHasManagedPR(virDomainObj *vm)
  * 0 in case everything is supported.
  */
 int
-qemuDomainSupportsCheckpointsBlockjobs(virDomainObj *vm)
+qemuDomainSupportsCheckpointsBlockjobs(virDomainObj *vm G_GNUC_UNUSED)
 {
-qemuDomainObjPrivate *priv = vm->privateData;
-
-if (!virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_INCREMENTAL_BACKUP) &&
-virDomainListCheckpoints(vm->checkpoints, NULL, NULL, NULL, 0) > 0) {
-virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s",
-   _("cannot perform block operations while checkpoint exists"));
-return -1;
-}
-
 return 0;
 }

-- 
2.48.1


Re: [PATCH v2 20/21] qemu: add RDP support

2025-03-12 Thread Marc-André Lureau
Hi

On Fri, Mar 7, 2025 at 7:03 PM Martin Kletzander  wrote:
>
> On Tue, Feb 18, 2025 at 02:16:25PM +0400, marcandre.lur...@redhat.com wrote:
> >From: Marc-André Lureau 
> >
> >Wire the external server RDP support with QEMU.
> >
> >Check the configuration, allocate a port, start the process
> >and set the credentials.
> >
> >Signed-off-by: Marc-André Lureau 
> >---
> > docs/formatdomain.rst |  25 --
> > src/conf/domain_conf.h|   1 +
> > src/qemu/qemu_extdevice.c |  46 +--
> > src/qemu/qemu_hotplug.c   |  49 ++-
> > src/qemu/qemu_hotplug.h   |   1 +
> > src/qemu/qemu_process.c   | 167 ++
> > 6 files changed, 257 insertions(+), 32 deletions(-)
> >
> >diff --git a/src/qemu/qemu_hotplug.c b/src/qemu/qemu_hotplug.c
> >index 28ca321c5c..5dae70cf7f 100644
> >--- a/src/qemu/qemu_hotplug.c
> >+++ b/src/qemu/qemu_hotplug.c
> >@@ -4423,6 +4423,7 @@ int
> > qemuDomainChangeGraphicsPasswords(virDomainObj *vm,
> >   int type,
> >   virDomainGraphicsAuthDef *auth,
> >+  const char *defaultUsername,
> >   const char *defaultPasswd,
> >   int asyncJob)
> > {
> >@@ -4432,13 +4433,19 @@ qemuDomainChangeGraphicsPasswords(virDomainObj *vm,
> > g_autofree char *validTo = NULL;
> > const char *connected = NULL;
> > const char *password;
> >+const char *username;
> > int ret = -1;
> >
> > if (!auth->passwd && !defaultPasswd)
> > return 0;
> >
> >+username = auth->username ? auth->username : defaultUsername;
> > password = auth->passwd ? auth->passwd : defaultPasswd;
> >
> >+if (type == VIR_DOMAIN_GRAPHICS_TYPE_RDP) {
> >+return qemuRdpSetCredentials(vm, username, password, "");
> >+}
> >+
>
> It's not immediately visible that defaultPasswd must not be NULL if the
> graphics type is RDP.  The patch is semantically fine, but I worry about
> the future, would you mind adding a simple check here so that we do not
> end up calling g_variant_new() with NULL?

assert(password != NULL) ?

>
> >diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
> >index 30d6560d52..53572f7315 100644
> >--- a/src/qemu/qemu_process.c
> >+++ b/src/qemu/qemu_process.c
> >@@ -5988,6 +6077,42 @@ qemuProcessPrepareHostNetwork(virDomainObj *vm)
> > return 0;
> > }
> >
> >+#include "qemu_rdp.h"
> >+
>
> Any reason why you cannot put the include at the top with the other ones?

oops, that's a left over, thanks!

>
> >+static int
> >+qemuPrepareGraphicsRdp(virQEMUDriver *driver,
> >+   virDomainGraphicsDef *gfx)
> >+{
> >+g_autoptr(virQEMUDriverConfig) cfg = virQEMUDriverGetConfig(driver);
> >+g_autoptr(qemuRdp) rdp = NULL;
>
> No need for autoptr variable here, there is no place where this function
> might fail with this variable set to non-NULL;

yeah, I suppose the code evolved. It doesn't hurt though. ok

>
> >+
> >+if (!(rdp = qemuRdpNewForHelper(cfg->qemuRdpName)))
> >+return -1;
> >+
> >+QEMU_DOMAIN_GRAPHICS_PRIVATE(gfx)->rdp = g_steal_pointer(&rdp);
> >+
> >+return 0;
> >+}


Re: [PATCH v2 05/21] qemu: fall-through for unsupported graphics

2025-03-12 Thread Marc-André Lureau
Hi

On Fri, Mar 7, 2025 at 3:58 PM Martin Kletzander  wrote:
>
> On Tue, Feb 18, 2025 at 02:16:10PM +0400, marcandre.lur...@redhat.com wrote:
> >From: Marc-André Lureau 
> >
Without an error message, it can be tedious to figure out a failure to
start; just fall through to the generic range error.
> >
> >Signed-off-by: Marc-André Lureau 
> >---
> > src/qemu/qemu_command.c | 1 -
> > 1 file changed, 1 deletion(-)
> >
> >diff --git a/src/qemu/qemu_command.c b/src/qemu/qemu_command.c
> >index 54130ac4f0..772e98fbb4 100644
> >--- a/src/qemu/qemu_command.c
> >+++ b/src/qemu/qemu_command.c
> >@@ -8448,7 +8448,6 @@ qemuBuildGraphicsCommandLine(virQEMUDriverConfig *cfg,
> > break;
> > case VIR_DOMAIN_GRAPHICS_TYPE_RDP:
> > case VIR_DOMAIN_GRAPHICS_TYPE_DESKTOP:
> >-return -1;
>
> You're right that we're missing an error message here, but
>
> > case VIR_DOMAIN_GRAPHICS_TYPE_LAST:
> > default:
> > virReportEnumRangeError(virDomainGraphicsType, graphics->type);
>
> This reports an error that corresponds to a completely invalid value
> provided in the internal code as an error in the code.  I would be fine
> with it if the validation function qemuValidateDomainDeviceDefGraphics()
> threw an error for anyone requesting such an unsupported configuration,
> but that does not.  We should either add it there or just add a proper
> error message here.  It is not possible to trigger now unless someone is
> adding a previously unsupported graphics type, but still, would be nice
> to lead by an example.

I'll drop this patch.


[PATCH 10/17] qemu: capabilities: Retire QEMU_CAPS_BLOCKDEV_REOPEN

2025-03-12 Thread Peter Krempa
'blockdev-reopen' is supported since qemu-6.1. Since we now don't have
any code using this capability we can retire it.

Signed-off-by: Peter Krempa 
---
 src/qemu/qemu_capabilities.c | 6 ++
 src/qemu/qemu_capabilities.h | 2 +-
 tests/qemucapabilitiesdata/caps_10.0.0_s390x.xml | 1 -
 tests/qemucapabilitiesdata/caps_10.0.0_x86_64+amdsev.xml | 1 -
 tests/qemucapabilitiesdata/caps_10.0.0_x86_64.xml| 1 -
 tests/qemucapabilitiesdata/caps_6.2.0_ppc64.xml  | 1 -
 tests/qemucapabilitiesdata/caps_6.2.0_x86_64.xml | 1 -
 tests/qemucapabilitiesdata/caps_7.0.0_ppc64.xml  | 1 -
 tests/qemucapabilitiesdata/caps_7.0.0_x86_64.xml | 1 -
 tests/qemucapabilitiesdata/caps_7.1.0_ppc64.xml  | 1 -
 tests/qemucapabilitiesdata/caps_7.1.0_x86_64.xml | 1 -
 tests/qemucapabilitiesdata/caps_7.2.0_ppc.xml| 1 -
 tests/qemucapabilitiesdata/caps_7.2.0_x86_64+hvf.xml | 1 -
 tests/qemucapabilitiesdata/caps_7.2.0_x86_64.xml | 1 -
 tests/qemucapabilitiesdata/caps_8.0.0_x86_64.xml | 1 -
 tests/qemucapabilitiesdata/caps_8.1.0_s390x.xml  | 1 -
 tests/qemucapabilitiesdata/caps_8.1.0_x86_64.xml | 1 -
 tests/qemucapabilitiesdata/caps_8.2.0_aarch64.xml| 1 -
 tests/qemucapabilitiesdata/caps_8.2.0_armv7l.xml | 1 -
 tests/qemucapabilitiesdata/caps_8.2.0_loongarch64.xml| 1 -
 tests/qemucapabilitiesdata/caps_8.2.0_s390x.xml  | 1 -
 tests/qemucapabilitiesdata/caps_8.2.0_x86_64.xml | 1 -
 tests/qemucapabilitiesdata/caps_9.0.0_sparc.xml  | 1 -
 tests/qemucapabilitiesdata/caps_9.0.0_x86_64.xml | 1 -
 tests/qemucapabilitiesdata/caps_9.1.0_riscv64.xml| 1 -
 tests/qemucapabilitiesdata/caps_9.1.0_s390x.xml  | 1 -
 tests/qemucapabilitiesdata/caps_9.1.0_x86_64.xml | 1 -
 tests/qemucapabilitiesdata/caps_9.2.0_aarch64+hvf.xml| 1 -
 tests/qemucapabilitiesdata/caps_9.2.0_s390x.xml  | 1 -
 tests/qemucapabilitiesdata/caps_9.2.0_x86_64+amdsev.xml  | 1 -
 tests/qemucapabilitiesdata/caps_9.2.0_x86_64.xml | 1 -
 31 files changed, 3 insertions(+), 34 deletions(-)

diff --git a/src/qemu/qemu_capabilities.c b/src/qemu/qemu_capabilities.c
index f0c48bb2be..82e7dc5ccc 100644
--- a/src/qemu/qemu_capabilities.c
+++ b/src/qemu/qemu_capabilities.c
@@ -566,7 +566,7 @@ VIR_ENUM_IMPL(virQEMUCaps,
   "vhost-user-fs", /* QEMU_CAPS_DEVICE_VHOST_USER_FS */
  "query-named-block-nodes.flat", /* X_QEMU_CAPS_QMP_QUERY_NAMED_BLOCK_NODES_FLAT */
  "blockdev-snapshot.allow-write-only-overlay", /* QEMU_CAPS_BLOCKDEV_SNAPSHOT_ALLOW_WRITE_ONLY */
-  "blockdev-reopen", /* QEMU_CAPS_BLOCKDEV_REOPEN */
+  "blockdev-reopen", /* X_QEMU_CAPS_BLOCKDEV_REOPEN */
   "storage.werror", /* X_QEMU_CAPS_STORAGE_WERROR */

   /* 360 */
@@ -1241,7 +1241,6 @@ struct virQEMUCapsStringFlags virQEMUCapsCommands[] = {
 { "query-cpu-model-baseline", QEMU_CAPS_QUERY_CPU_MODEL_BASELINE },
 { "query-cpu-model-comparison", QEMU_CAPS_QUERY_CPU_MODEL_COMPARISON },
 { "block-export-add", QEMU_CAPS_BLOCK_EXPORT_ADD },
-{ "blockdev-reopen", QEMU_CAPS_BLOCKDEV_REOPEN },
 { "set-action", QEMU_CAPS_SET_ACTION },
 { "query-dirty-rate", QEMU_CAPS_QUERY_DIRTY_RATE },
 { "sev-inject-launch-secret", QEMU_CAPS_SEV_INJECT_LAUNCH_SECRET },
@@ -5594,8 +5593,7 @@ virQEMUCapsInitQMPVersionCaps(virQEMUCaps *qemuCaps G_GNUC_UNUSED)
 void
 virQEMUCapsInitProcessCapsInterlock(virQEMUCaps *qemuCaps)
 {
-if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_BLOCKDEV_REOPEN) &&
-virQEMUCapsGet(qemuCaps, QEMU_CAPS_MIGRATION_PARAM_BLOCK_BITMAP_MAPPING))
+if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_MIGRATION_PARAM_BLOCK_BITMAP_MAPPING))
 virQEMUCapsSet(qemuCaps, QEMU_CAPS_INCREMENTAL_BACKUP);
 }

diff --git a/src/qemu/qemu_capabilities.h b/src/qemu/qemu_capabilities.h
index 1fedfdaeb1..620289572e 100644
--- a/src/qemu/qemu_capabilities.h
+++ b/src/qemu/qemu_capabilities.h
@@ -545,7 +545,7 @@ typedef enum { /* virQEMUCapsFlags grouping marker for syntax-check */
 QEMU_CAPS_DEVICE_VHOST_USER_FS, /* -device vhost-user-fs */
 X_QEMU_CAPS_QMP_QUERY_NAMED_BLOCK_NODES_FLAT, /* query-named-block-nodes supports the 'flat' option */
 QEMU_CAPS_BLOCKDEV_SNAPSHOT_ALLOW_WRITE_ONLY, /* blockdev-snapshot has the 'allow-write-only-overlay' feature */
-QEMU_CAPS_BLOCKDEV_REOPEN, /* 'blockdev-reopen' qmp command is supported */
+X_QEMU_CAPS_BLOCKDEV_REOPEN, /* 'blockdev-reopen' qmp command is supported */
 X_QEMU_CAPS_STORAGE_WERROR, /* virtio-blk,scsi-hd.werror */

 /* 360 */
diff --git a/tests/qemucapabilitiesdata/caps_10.0.0_s390x.xml b/tests/qemucapabilitiesdata/caps_10.0.0_s390x.xml
index a35578fab4..3f34d4853c 100644
--- a/tests/qemucapabilitiesdata/caps_10.0.0_s390x.xml
+++ b/tests/qemucapabilitiesdata/caps_10.0.0_s390x.xml
@@ -68,7 +68,6 @@
   
  

Re: [PATCH v2 15/21] qemu/dbus: log daemon stdout/err, use domainLogContext

2025-03-12 Thread Marc-André Lureau
Hi

On Fri, Mar 7, 2025 at 6:11 PM Martin Kletzander  wrote:
>
> On Tue, Feb 18, 2025 at 02:16:20PM +0400, marcandre.lur...@redhat.com wrote:
> >From: Marc-André Lureau 
> >
> >Currently, if dbus-daemon writes on errfd, it will SIGPIPE.
> >
> >Signed-off-by: Marc-André Lureau 
> >---
> > src/qemu/qemu_dbus.c | 34 +++---
> > 1 file changed, 23 insertions(+), 11 deletions(-)
> >
> >diff --git a/src/qemu/qemu_dbus.c b/src/qemu/qemu_dbus.c
> >index c9e99ea27b..a9e2fb0fe2 100644
> >--- a/src/qemu/qemu_dbus.c
> >+++ b/src/qemu/qemu_dbus.c
> >@@ -24,6 +24,7 @@
> > #include "virlog.h"
> > #include "virtime.h"
> > #include "virpidfile.h"
> >+#include "virutil.h"
> >
> > #define VIR_FROM_THIS VIR_FROM_NONE
> >
> >@@ -212,9 +213,12 @@ qemuDBusStart(virQEMUDriver *driver,
> > g_autofree char *pidfile = NULL;
> > g_autofree char *configfile = NULL;
> > g_autofree char *sockpath = NULL;
> >+g_autofree char *logpath = NULL;
> > virTimeBackOffVar timebackoff;
> > const unsigned long long timeout = 500 * 1000; /* ms */
> >-VIR_AUTOCLOSE errfd = -1;
> >+int logfd = -1;
> >+g_autoptr(domainLogContext) logContext = NULL;
> >+
> > pid_t cpid = -1;
> > int ret = -1;
> >
> >@@ -246,10 +250,21 @@ qemuDBusStart(virQEMUDriver *driver,
> > if (qemuSecurityDomainSetPathLabel(driver, vm, configfile, false) < 0)
> > goto cleanup;
> >
> >+if (!(logContext = domainLogContextNew(cfg->stdioLogD, cfg->dbusStateDir,
> >+   QEMU_DRIVER_NAME,
> >+   vm, driver->privileged,
> >+   shortName))) {
> >+virLastErrorPrefixMessage("%s", _("can't open log context"));
> >+goto cleanup;
> >+}
> >+
> >+logfd = domainLogContextGetWriteFD(logContext);
> >+
> > cmd = virCommandNew(dbusDaemonPath);
> > virCommandClearCaps(cmd);
> > virCommandSetPidFile(cmd, pidfile);
> >-virCommandSetErrorFD(cmd, &errfd);
> >+virCommandSetOutputFD(cmd, &logfd);
> >+virCommandSetErrorFD(cmd, &logfd);
> > virCommandDaemonize(cmd);
> > virCommandAddArgFormat(cmd, "--config-file=%s", configfile);
> >
> >@@ -266,7 +281,7 @@ qemuDBusStart(virQEMUDriver *driver,
> > if (virTimeBackOffStart(&timebackoff, 1, timeout) < 0)
> > goto cleanup;
> > while (virTimeBackOffWait(&timebackoff)) {
> >-char errbuf[1024] = { 0 };
> >+g_autofree char *msg = NULL;
> >
> > if (virFileExists(sockpath))
> > break;
> >@@ -274,15 +289,12 @@ qemuDBusStart(virQEMUDriver *driver,
> > if (virProcessKill(cpid, 0) == 0)
> > continue;
> >
> >-if (saferead(errfd, errbuf, sizeof(errbuf) - 1) < 0) {
> >-virReportSystemError(errno,
> >- _("dbus-daemon %1$s died unexpectedly"),
> >- dbusDaemonPath);
> >-} else {
> >-virReportError(VIR_ERR_OPERATION_FAILED,
> >-   _("dbus-daemon died and reported: %1$s"), errbuf);
> >-}
> >+if (domainLogContextReadFiltered(logContext, &msg, 1024) < 0)
> >+VIR_WARN("Unable to read from dbus-daemon log");
> >
> >+virReportError(VIR_ERR_OPERATION_FAILED,
> >+   _("dbus-daemon died and reported:\n%1$s"),
> >+   NULLSTR(msg));
>
> Are the errors usually multiline?  If not, then I'd skip the newline in
> the translation string.
>

I don't remember. ok


[PATCH v2 03/22] hypervisor: expand available shutdown actions

2025-03-12 Thread Daniel P . Berrangé
The auto shutdown code can currently only perform managed save,
which may fail in some cases, for example when PCI devices are
assigned. On failure, shutdown inhibitors remain in place which
may be undesirable.

This expands the logic to try a sequence of operations

 * Managed save
 * Graceful shutdown
 * Forced poweroff

Each of these operations can be enabled or disabled, but they
are always applied in this order.

With the shutdown option, a configurable time is allowed for
shutdown to complete, defaulting to 30 seconds, before moving
onto the forced poweroff phase.

Signed-off-by: Daniel P. Berrangé 
---
 src/hypervisor/domain_driver.c | 130 -
 src/hypervisor/domain_driver.h |   7 ++
 src/qemu/qemu_driver.c |   3 +
 3 files changed, 121 insertions(+), 19 deletions(-)

diff --git a/src/hypervisor/domain_driver.c b/src/hypervisor/domain_driver.c
index 0ca482275f..e625726c07 100644
--- a/src/hypervisor/domain_driver.c
+++ b/src/hypervisor/domain_driver.c
@@ -720,9 +720,25 @@ virDomainDriverAutoShutdown(virDomainDriverAutoShutdownConfig *cfg)
 g_autoptr(virConnect) conn = NULL;
 int numDomains = 0;
 size_t i;
-int state;
 virDomainPtr *domains = NULL;
-g_autofree unsigned int *flags = NULL;
+
+VIR_DEBUG("Run autoshutdown uri=%s trySave=%d tryShutdown=%d poweroff=%d waitShutdownSecs=%u",
+  cfg->uri, cfg->trySave, cfg->tryShutdown, cfg->poweroff,
+  cfg->waitShutdownSecs);
+
+/*
+ * Ideally guests will shutdown in a few seconds, but it would
+ * not be unusual for it to take a while longer, especially under
+ * load, or if the guest OS has inhibitors slowing down shutdown.
+ *
+ * If we wait too long, then guests which ignore the shutdown
+ * request will significantly delay host shutdown.
+ *
+ * Pick 30 seconds as a moderately safe default, assuming that
+ * most guests are well behaved.
+ */
+if (cfg->waitShutdownSecs <= 0)
+cfg->waitShutdownSecs = 30;
 
 if (!(conn = virConnectOpen(cfg->uri)))
 goto cleanup;
@@ -732,31 +748,107 @@ virDomainDriverAutoShutdown(virDomainDriverAutoShutdownConfig *cfg)
 VIR_CONNECT_LIST_DOMAINS_ACTIVE)) < 0)
 goto cleanup;
 
-flags = g_new0(unsigned int, numDomains);
+VIR_DEBUG("Auto shutdown with %d running domains", numDomains);
+if (cfg->trySave) {
+g_autofree unsigned int *flags = g_new0(unsigned int, numDomains);
+for (i = 0; i < numDomains; i++) {
+int state;
+/*
+ * Pause all VMs to make them stop dirtying pages,
+ * so save is quicker. We remember if any VMs were
+ * paused so we can restore that on resume.
+ */
+flags[i] = VIR_DOMAIN_SAVE_RUNNING;
+if (virDomainGetState(domains[i], &state, NULL, 0) == 0) {
+if (state == VIR_DOMAIN_PAUSED)
+flags[i] = VIR_DOMAIN_SAVE_PAUSED;
+}
+if (flags[i] & VIR_DOMAIN_SAVE_RUNNING)
+virDomainSuspend(domains[i]);
+}
+
+for (i = 0; i < numDomains; i++) {
+if (virDomainManagedSave(domains[i], flags[i]) < 0) {
+VIR_WARN("Unable to perform managed save of '%s': %s",
+ virDomainGetName(domains[i]),
+ virGetLastErrorMessage());
+if (flags[i] & VIR_DOMAIN_SAVE_RUNNING)
+virDomainResume(domains[i]);
+continue;
+}
+virObjectUnref(domains[i]);
+domains[i] = NULL;
+}
+}
+
+if (cfg->tryShutdown) {
+GTimer *timer = NULL;
+for (i = 0; i < numDomains; i++) {
+if (domains[i] == NULL)
+continue;
+if (virDomainShutdown(domains[i]) < 0) {
+VIR_WARN("Unable to request graceful shutdown of '%s': %s",
+ virDomainGetName(domains[i]),
+ virGetLastErrorMessage());
+break;
+}
+}
+
+timer = g_timer_new();
+while (1) {
+bool anyRunning = false;
+for (i = 0; i < numDomains; i++) {
+if (!domains[i])
+continue;
 
-/* First we pause all VMs to make them stop dirtying
-   pages, etc. We remember if any VMs were paused so
-   we can restore that on resume. */
-for (i = 0; i < numDomains; i++) {
-flags[i] = VIR_DOMAIN_SAVE_RUNNING;
-if (virDomainGetState(domains[i], &state, NULL, 0) == 0) {
-if (state == VIR_DOMAIN_PAUSED)
-flags[i] = VIR_DOMAIN_SAVE_PAUSED;
+if (virDomainIsActive(domains[i]) == 1) {
+anyRunning = true;
+} else {
+virObjectUnref(domains[i]);
+domains[i] = NULL;
+   

[PATCH 06/17] qemu: capabilities: Retire QEMU_CAPS_OBJECT_JSON

2025-03-12 Thread Peter Krempa
Now that we dropped support for old qemus which didn't support JSON
props for -object we can retire the capability.

Signed-off-by: Peter Krempa 
---
 src/qemu/qemu_capabilities.c | 3 +--
 src/qemu/qemu_capabilities.h | 2 +-
 tests/qemucapabilitiesdata/caps_10.0.0_s390x.xml | 1 -
 tests/qemucapabilitiesdata/caps_10.0.0_x86_64+amdsev.xml | 1 -
 tests/qemucapabilitiesdata/caps_10.0.0_x86_64.xml| 1 -
 tests/qemucapabilitiesdata/caps_6.2.0_ppc64.xml  | 1 -
 tests/qemucapabilitiesdata/caps_6.2.0_x86_64.xml | 1 -
 tests/qemucapabilitiesdata/caps_7.0.0_ppc64.xml  | 1 -
 tests/qemucapabilitiesdata/caps_7.0.0_x86_64.xml | 1 -
 tests/qemucapabilitiesdata/caps_7.1.0_ppc64.xml  | 1 -
 tests/qemucapabilitiesdata/caps_7.1.0_x86_64.xml | 1 -
 tests/qemucapabilitiesdata/caps_7.2.0_ppc.xml| 1 -
 tests/qemucapabilitiesdata/caps_7.2.0_x86_64+hvf.xml | 1 -
 tests/qemucapabilitiesdata/caps_7.2.0_x86_64.xml | 1 -
 tests/qemucapabilitiesdata/caps_8.0.0_x86_64.xml | 1 -
 tests/qemucapabilitiesdata/caps_8.1.0_s390x.xml  | 1 -
 tests/qemucapabilitiesdata/caps_8.1.0_x86_64.xml | 1 -
 tests/qemucapabilitiesdata/caps_8.2.0_aarch64.xml| 1 -
 tests/qemucapabilitiesdata/caps_8.2.0_armv7l.xml | 1 -
 tests/qemucapabilitiesdata/caps_8.2.0_loongarch64.xml| 1 -
 tests/qemucapabilitiesdata/caps_8.2.0_s390x.xml  | 1 -
 tests/qemucapabilitiesdata/caps_8.2.0_x86_64.xml | 1 -
 tests/qemucapabilitiesdata/caps_9.0.0_sparc.xml  | 1 -
 tests/qemucapabilitiesdata/caps_9.0.0_x86_64.xml | 1 -
 tests/qemucapabilitiesdata/caps_9.1.0_riscv64.xml| 1 -
 tests/qemucapabilitiesdata/caps_9.1.0_s390x.xml  | 1 -
 tests/qemucapabilitiesdata/caps_9.1.0_x86_64.xml | 1 -
 tests/qemucapabilitiesdata/caps_9.2.0_aarch64+hvf.xml| 1 -
 tests/qemucapabilitiesdata/caps_9.2.0_s390x.xml  | 1 -
 tests/qemucapabilitiesdata/caps_9.2.0_x86_64+amdsev.xml  | 1 -
 tests/qemucapabilitiesdata/caps_9.2.0_x86_64.xml | 1 -
 31 files changed, 2 insertions(+), 32 deletions(-)

diff --git a/src/qemu/qemu_capabilities.c b/src/qemu/qemu_capabilities.c
index df9e630fbf..b6eb923763 100644
--- a/src/qemu/qemu_capabilities.c
+++ b/src/qemu/qemu_capabilities.c
@@ -622,7 +622,7 @@ VIR_ENUM_IMPL(virQEMUCaps,
   "vnc-power-control", /* QEMU_CAPS_VNC_POWER_CONTROL */
   "audiodev", /* X_QEMU_CAPS_AUDIODEV */
   "blockdev-backup", /* X_QEMU_CAPS_BLOCKDEV_BACKUP */
-  "object.qapified", /* QEMU_CAPS_OBJECT_JSON */
+  "object.qapified", /* X_QEMU_CAPS_OBJECT_JSON */
   "rotation-rate", /* QEMU_CAPS_ROTATION_RATE */

   /* 400 */
@@ -1590,7 +1590,6 @@ static struct virQEMUCapsStringFlags virQEMUCapsQMPSchemaQueries[] = {
 { "netdev_add/arg-type/type/^dgram", QEMU_CAPS_NETDEV_JSON },
 { "netdev_add/arg-type/+user", QEMU_CAPS_NETDEV_USER },
 { "netdev_add/arg-type/+stream/reconnect-ms", QEMU_CAPS_NETDEV_STREAM_RECONNECT_MILISECONDS },
-{ "object-add/arg-type/qom-type/^secret", QEMU_CAPS_OBJECT_JSON },
 { "object-add/arg-type/+sev-guest/kernel-hashes", QEMU_CAPS_SEV_GUEST_KERNEL_HASHES },
 { "object-add/arg-type/+iothread/thread-pool-max", QEMU_CAPS_IOTHREAD_THREAD_POOL_MAX },
 { "query-display-options/ret-type/+egl-headless/rendernode", QEMU_CAPS_EGL_HEADLESS_RENDERNODE },
diff --git a/src/qemu/qemu_capabilities.h b/src/qemu/qemu_capabilities.h
index d4e5be6918..4e247193e3 100644
--- a/src/qemu/qemu_capabilities.h
+++ b/src/qemu/qemu_capabilities.h
@@ -601,7 +601,7 @@ typedef enum { /* virQEMUCapsFlags grouping marker for syntax-check */
 QEMU_CAPS_VNC_POWER_CONTROL, /* -vnc power-control option */
 X_QEMU_CAPS_AUDIODEV, /* -audiodev instead of QEMU_AUDIO_DRV */
 X_QEMU_CAPS_BLOCKDEV_BACKUP, /* qemu supports the blockdev-backup job */
-QEMU_CAPS_OBJECT_JSON, /* parameters for object-add are formally described */
+X_QEMU_CAPS_OBJECT_JSON, /* parameters for object-add are formally described */
 QEMU_CAPS_ROTATION_RATE, /* scsi-disk / ide-drive rotation-rate prop */

 /* 400 */
diff --git a/tests/qemucapabilitiesdata/caps_10.0.0_s390x.xml b/tests/qemucapabilitiesdata/caps_10.0.0_s390x.xml
index 3d4c715396..041b480b21 100644
--- a/tests/qemucapabilitiesdata/caps_10.0.0_s390x.xml
+++ b/tests/qemucapabilitiesdata/caps_10.0.0_s390x.xml
@@ -84,7 +84,6 @@
   
   
   
-  
   
   
   
diff --git a/tests/qemucapabilitiesdata/caps_10.0.0_x86_64+amdsev.xml b/tests/qemucapabilitiesdata/caps_10.0.0_x86_64+amdsev.xml
index 951e16dffb..3adef15db3 100644
--- a/tests/qemucapabilitiesdata/caps_10.0.0_x86_64+amdsev.xml
+++ b/tests/qemucapabilitiesdata/caps_10.0.0_x86_64+amdsev.xml
@@ -148,7 +148,6 @@
   
   
   
-  
   
   
   
diff --git a/tests/qemucapabilitiesdata/caps_10.0.0_x86_64.xml b/tests/qemucapabilitiesdata/caps