[pve-devel] [PATCH docs v2] docs: fix typos

2024-07-25 Thread Maximiliano Sandoval
Signed-off-by: Maximiliano Sandoval 
---
Differences from v1:
 - Remove fixes in automatically generated files

 ha-manager.adoc|  2 +-
 notifications.adoc | 12 ++--
 pve-faq.adoc   |  2 +-
 pve-firewall.adoc  |  2 +-
 pve-network.adoc   |  4 ++--
 pvescheduler.adoc  |  2 +-
 pvesdn.adoc|  2 +-
 pveum.adoc |  2 +-
 qm.adoc|  2 +-
 vzdump.adoc|  2 +-
 10 files changed, 16 insertions(+), 16 deletions(-)

diff --git a/ha-manager.adoc b/ha-manager.adoc
index 66a3b8f..fc20365 100644
--- a/ha-manager.adoc
+++ b/ha-manager.adoc
@@ -1080,7 +1080,7 @@ The CRS is currently used at the following scheduling points:
   new target node for the HA services in that group, matching the adapted
   priority constraints.
 
-- HA service stopped -> start transtion (opt-in). Requesting that a stopped
+- HA service stopped -> start transition (opt-in). Requesting that a stopped
   service should be started is an good opportunity to check for the best suited
   node as per the CRS algorithm, as moving stopped services is  cheaper to do
   than moving them started, especially if their disk volumes reside on shared
diff --git a/notifications.adoc b/notifications.adoc
index 25a9391..ca6e70c 100644
--- a/notifications.adoc
+++ b/notifications.adoc
@@ -68,10 +68,10 @@ the Postfix daemon.
 The configuration for Sendmail target plugins has the following options:
 
 * `mailto`: E-Mail address to which the notification shall be sent to. Can be
-set multiple times to accomodate multiple recipients.
+set multiple times to accommodate multiple recipients.
 * `mailto-user`: Users to which emails shall be sent to. The user's email
 address will be looked up in `users.cfg`. Can be set multiple times to
-accomodate multiple recipients.
+accommodate multiple recipients.
 * `author`: Sets the author of the E-Mail. Defaults to `Proxmox VE`.
 * `from-address`: Sets the from address of the E-Mail. If the parameter is not
 set, the plugin will fall back to the `email_from` setting from
@@ -106,10 +106,10 @@ in case of a failed mail delivery.
 The configuration for SMTP target plugins has the following options:
 
 * `mailto`: E-Mail address to which the notification shall be sent to. Can be
-set multiple times to accomodate multiple recipients.
+set multiple times to accommodate multiple recipients.
 * `mailto-user`: Users to which emails shall be sent to. The user's email
 address will be looked up in `users.cfg`. Can be set multiple times to
-accomodate multiple recipients.
+accommodate multiple recipients.
 * `author`: Sets the author of the E-Mail. Defaults to `Proxmox VE`.
 * `from-address`: Sets the From-addresss of the email. SMTP relays might require
 that this address is owned by the user in order to avoid spoofing.
@@ -221,7 +221,7 @@ a matcher must be true. Defaults to `all`.
 [[notification_matchers_calendar]]
 Calendar Matching Rules
~~~~~~~~~~~~~~~~~~~~~~~
-A calendar matcher matches the time when a notification is sent agaist a
+A calendar matcher matches the time when a notification is sent against a
 configurable schedule.
 
 * `match-calendar 8-12`
@@ -241,7 +241,7 @@ the node.
 If a matched metadata field does not exist, the notification will not be
 matched.
 For instance, a `match-field regex:hostname=.*` directive will only match
-notifications that have an arbitraty `hostname` metadata field, but will
+notifications that have an arbitrary `hostname` metadata field, but will
 not match if the field does not exist.
 
 [[notification_matchers_severity]]
diff --git a/pve-faq.adoc b/pve-faq.adoc
index 6f39c3d..63546cc 100644
--- a/pve-faq.adoc
+++ b/pve-faq.adoc
@@ -105,7 +105,7 @@ How can I upgrade {pve} to the next point release?::
 Minor version upgrades, for example upgrading from {pve} in version 7.1
 to 7.2 or 7.3, can be done just like any normal update.
But you should still check the https://pve.proxmox.com/wiki/Roadmap[release notes]
-for any relevant noteable, or breaking change.
+for any relevant notable, or breaking change.
 +
 For the update itself use either the Web UI __Node -> Updates__ panel or
 through the CLI with:
diff --git a/pve-firewall.adoc b/pve-firewall.adoc
index 9fb4e46..b428703 100644
--- a/pve-firewall.adoc
+++ b/pve-firewall.adoc
@@ -648,7 +648,7 @@ https://wiki.nftables.org/wiki-nftables/index.php/What_is_nftables%3F[nftables]
 rather than iptables.
 
WARNING: `proxmox-firewall` is currently in tech preview. There might be bugs or
-incompatibilies with the original firewall. It is currently not suited for
+incompatibilities with the original firewall. It is currently not suited for
 production use.
 
 This implementation uses the same configuration files and configuration format,
diff --git a/pve-network.adoc b/pve-network.adoc
index acdcf39..c74f878 100644
--- a/pve-network.adoc
+++ b/pve-network.adoc
@@ -16,7 +16,7 @@ protects you from errors.
 A Linux bridge interface (commonly called 'vmbrX') is needed to connect guests
 to the underlying p

Re: [pve-devel] [PATCH docs] docs: fix typos

2024-07-25 Thread Maximiliano Sandoval
Aaron Lauterer  writes:

> Thanks for this patch.
>
> A few things need to be changed in the actual commands/API/… definitions as we
> autogenerate the manual pages.
>
> This is true for any file with `*-synopsis` or `*-opts*` in its name.
>
> Please send a v2 without the synopsis and opts files. For the synopsis and opts
> files, find the source and fix it there.

V2 sent, thanks.

-- 
Maximiliano


[pve-devel] applied: [PATCH pve-xtermjs v3] termproxy: allow to use unix sockets for auth requests

2024-07-25 Thread Wolfgang Bumiller
applied, thanks


[pve-devel] applied: [PATCH pve-xtermjs] termproxy: fix the command line help text

2024-07-25 Thread Wolfgang Bumiller
applied, thanks


[pve-devel] [PATCH qemu 2/2] some more stable fixes for QEMU 9.0.2

2024-07-25 Thread Fiona Ebner
Fix the two issues reported in the community forum[0][1], i.e.
regression in LSI-53c895a controller and ignored boot order for USB
storage (only possible via custom arguments in Proxmox VE), both
causing boot failures, and pick up fixes for VirtIO, ARM emulation,
char IO device and a graph lock fix for the block layer.

The block-copy patches that serve as a preparation for fleecing are
moved to the extra folder, because the graph lock fix requires them
to be present first. They have been applied upstream in the meantime
and should drop out with the rebase on 9.1.

[0]: https://forum.proxmox.com/threads/149772/post-679433
[1]: https://forum.proxmox.com/threads/149772/post-683459

Signed-off-by: Fiona Ebner 
---
 ...d-support-for-sync-bitmap-mode-never.patch | 14 +--
 ...check-for-bitmap-mode-without-bitmap.patch |  2 +-
 .../0006-mirror-move-some-checks-to-qmp.patch |  2 +-
 ...ck-copy-before-write-fix-permission.patch} |  0
 ...-write-support-unligned-snapshot-di.patch} |  0
 ...-write-create-block_copy-bitmap-in-.patch} |  0
 ...backup-add-discard-source-parameter.patch} | 20 ++--
 ...e-de-initialization-of-vhost-user-de.patch | 92 ++
 ...Use-float_status-copy-in-sme_fmopa_s.patch | 43 +
 ...-Use-FPST_F16-for-SME-FMOPA-widening.patch | 62 +
 ...ion-and-honor-bootindex-again-for-le.patch | 60 
 ...5a-bump-instruction-limit-in-scripts.patch | 48 ++
 ...16-block-copy-Fix-missing-graph-lock.patch | 38 
 ...-do-not-operate-on-sources-from-fina.patch | 93 +++
 ...le-posix-make-locking-optiono-on-cre.patch |  6 +-
 ...e-bcs-bitmap-initialization-to-job-c.patch |  4 +-
 ...-Backup-add-backup-dump-block-driver.patch |  4 +-
 ...ckup-Proxmox-backup-patches-for-QEMU.patch |  4 +-
 ...k-driver-to-map-backup-archives-into.patch |  8 +-
 ...igrate-dirty-bitmap-state-via-savevm.patch |  2 +-
 ...-allow-specifying-minimum-cluster-s.patch} |  2 +-
 ...m-cluster-size-to-performance-optio.patch} |  0
 ...0046-PVE-backup-add-fleecing-option.patch} |  0
 ...e-error-when-copy-before-write-fail.patch} |  0
 debian/patches/series | 23 +++--
 25 files changed, 485 insertions(+), 42 deletions(-)
 rename debian/patches/{pve/0044-block-copy-before-write-fix-permission.patch => extra/0007-block-copy-before-write-fix-permission.patch} (100%)
 rename debian/patches/{pve/0045-block-copy-before-write-support-unligned-snapshot-di.patch => extra/0008-block-copy-before-write-support-unligned-snapshot-di.patch} (100%)
 rename debian/patches/{pve/0046-block-copy-before-write-create-block_copy-bitmap-in-.patch => extra/0009-block-copy-before-write-create-block_copy-bitmap-in-.patch} (100%)
 rename debian/patches/{pve/0047-qapi-blockdev-backup-add-discard-source-parameter.patch => extra/0010-qapi-blockdev-backup-add-discard-source-parameter.patch} (96%)
 create mode 100644 debian/patches/extra/0011-hw-virtio-Fix-the-de-initialization-of-vhost-user-de.patch
 create mode 100644 debian/patches/extra/0012-target-arm-Use-float_status-copy-in-sme_fmopa_s.patch
 create mode 100644 debian/patches/extra/0013-target-arm-Use-FPST_F16-for-SME-FMOPA-widening.patch
 create mode 100644 debian/patches/extra/0014-scsi-fix-regression-and-honor-bootindex-again-for-le.patch
 create mode 100644 debian/patches/extra/0015-hw-scsi-lsi53c895a-bump-instruction-limit-in-scripts.patch
 create mode 100644 debian/patches/extra/0016-block-copy-Fix-missing-graph-lock.patch
 create mode 100644 debian/patches/extra/0017-Revert-qemu-char-do-not-operate-on-sources-from-fina.patch
 rename debian/patches/pve/{0048-copy-before-write-allow-specifying-minimum-cluster-s.patch => 0044-copy-before-write-allow-specifying-minimum-cluster-s.patch} (99%)
 rename debian/patches/pve/{0049-backup-add-minimum-cluster-size-to-performance-optio.patch => 0045-backup-add-minimum-cluster-size-to-performance-optio.patch} (100%)
 rename debian/patches/pve/{0050-PVE-backup-add-fleecing-option.patch => 0046-PVE-backup-add-fleecing-option.patch} (100%)
 rename debian/patches/pve/{0051-PVE-backup-improve-error-when-copy-before-write-fail.patch => 0047-PVE-backup-improve-error-when-copy-before-write-fail.patch} (100%)

diff --git a/debian/patches/bitmap-mirror/0001-drive-mirror-add-support-for-sync-bitmap-mode-never.patch b/debian/patches/bitmap-mirror/0001-drive-mirror-add-support-for-sync-bitmap-mode-never.patch
index 392b8a2..0532896 100644
--- a/debian/patches/bitmap-mirror/0001-drive-mirror-add-support-for-sync-bitmap-mode-never.patch
+++ b/debian/patches/bitmap-mirror/0001-drive-mirror-add-support-for-sync-bitmap-mode-never.patch
@@ -258,7 +258,7 @@ index 1bdce3b657..0c5c72df2e 100644
   errp);
  if (!job) {
 diff --git a/blockdev.c b/blockdev.c
-index 057601dcf0..8682814a7a 100644
+index 4c33c3f5f0..f3e508a6a7 100644
 --- a/blockdev.c
 +++ b/blockdev.c
 @@ -2776,6 +2776,9 @@ static void blockdev_mirror_common(const char *job_id, BlockDriverState

[pve-devel] [PATCH qemu 1/2] update submodule and patches to QEMU 9.0.2

2024-07-25 Thread Fiona Ebner
Most relevant are some fixes for VirtIO and for ARM and i386
emulation. There is also a fix for the VGA display to resolve screen
blanking, which fixes: https://bugzilla.proxmox.com/show_bug.cgi?id=4786

Signed-off-by: Fiona Ebner 
---
 ...d-support-for-sync-bitmap-mode-never.patch |  10 +-
 ...race-with-clients-disconnecting-earl.patch |   4 +-
 ...io-pci-fix-use-of-a-released-vector.patch} |   8 +-
 .../0006-virtio-gpu-fix-v2-migration.patch|  98 ---
 ...0007-hw-pflash-fix-block-write-start.patch |  59 -
 ...operand-size-for-DATA16-REX.W-POPCNT.patch |  51 
 ...ru-wrpkru-are-no-prefix-instructions.patch |  40 ---
 ...6-fix-feature-dependency-for-WAITPKG.patch |  33 ---
 ...move-compatibility-flags-for-VirtIO-.patch |  57 -
 ...t-monitor-use-aio_co_reschedule_self.patch |  53 
 ...ict-translation-disabled-alignment-c.patch |  51 
 ...-IRQs-a-chance-when-resetting-HF_INH.patch |  80 --
 ...r-v-Correct-kvm_hv_handle_exit-retur.patch |  60 -
 ...86-disable-jmp_opt-if-EFLAGS.RF-is-1.patch |  31 ---
 ...ingle-step-exception-after-MOV-or-PO.patch |  30 ---
 ...n-t-open-data_file-with-BDRV_O_NO_IO.patch | 107 
 ...names-only-when-explicitly-requested.patch | 241 --
 ...le-posix-make-locking-optiono-on-cre.patch |   6 +-
 ...ckup-Proxmox-backup-patches-for-QEMU.patch |   2 +-
 ...k-driver-to-map-backup-archives-into.patch |   8 +-
 ...igrate-dirty-bitmap-state-via-savevm.patch |   2 +-
 ...-backup-add-discard-source-parameter.patch |   2 +-
 ...e-allow-specifying-minimum-cluster-s.patch |   4 +-
 ...um-cluster-size-to-performance-optio.patch |   2 +-
 .../0050-PVE-backup-add-fleecing-option.patch |   2 +-
 debian/patches/series |  16 +-
 26 files changed, 26 insertions(+), 1031 deletions(-)
 rename debian/patches/extra/{0011-Revert-virtio-pci-fix-use-of-a-released-vector.patch => 0006-Revert-virtio-pci-fix-use-of-a-released-vector.patch} (93%)
 delete mode 100644 debian/patches/extra/0006-virtio-gpu-fix-v2-migration.patch
 delete mode 100644 debian/patches/extra/0007-hw-pflash-fix-block-write-start.patch
 delete mode 100644 debian/patches/extra/0008-target-i386-fix-operand-size-for-DATA16-REX.W-POPCNT.patch
 delete mode 100644 debian/patches/extra/0009-target-i386-rdpkru-wrpkru-are-no-prefix-instructions.patch
 delete mode 100644 debian/patches/extra/0010-target-i386-fix-feature-dependency-for-WAITPKG.patch
 delete mode 100644 debian/patches/extra/0012-hw-core-machine-move-compatibility-flags-for-VirtIO-.patch
 delete mode 100644 debian/patches/extra/0013-Revert-monitor-use-aio_co_reschedule_self.patch
 delete mode 100644 debian/patches/extra/0014-target-arm-Restrict-translation-disabled-alignment-c.patch
 delete mode 100644 debian/patches/extra/0015-target-i386-Give-IRQs-a-chance-when-resetting-HF_INH.patch
 delete mode 100644 debian/patches/extra/0016-target-i386-hyper-v-Correct-kvm_hv_handle_exit-retur.patch
 delete mode 100644 debian/patches/extra/0017-target-i386-disable-jmp_opt-if-EFLAGS.RF-is-1.patch
 delete mode 100644 debian/patches/extra/0018-target-i386-no-single-step-exception-after-MOV-or-PO.patch
 delete mode 100644 debian/patches/extra/0019-qcow2-Don-t-open-data_file-with-BDRV_O_NO_IO.patch
 delete mode 100644 debian/patches/extra/0020-block-Parse-filenames-only-when-explicitly-requested.patch

diff --git a/debian/patches/bitmap-mirror/0001-drive-mirror-add-support-for-sync-bitmap-mode-never.patch b/debian/patches/bitmap-mirror/0001-drive-mirror-add-support-for-sync-bitmap-mode-never.patch
index 6789ac5..392b8a2 100644
--- a/debian/patches/bitmap-mirror/0001-drive-mirror-add-support-for-sync-bitmap-mode-never.patch
+++ b/debian/patches/bitmap-mirror/0001-drive-mirror-add-support-for-sync-bitmap-mode-never.patch
@@ -364,10 +364,10 @@ index d2201e27f4..cc1387ae02 100644
BlockdevOnError on_source_error,
BlockdevOnError on_target_error,
 diff --git a/qapi/block-core.json b/qapi/block-core.json
-index 746d1694c2..45ab548dfe 100644
+index 4b18e01b85..0902b0a024 100644
 --- a/qapi/block-core.json
 +++ b/qapi/block-core.json
-@@ -2174,6 +2174,15 @@
+@@ -2170,6 +2170,15 @@
  # destination (all the disk, only the sectors allocated in the
  # topmost image, or only new I/O).
  #
@@ -383,7 +383,7 @@ index 746d1694c2..45ab548dfe 100644
  # @granularity: granularity of the dirty bitmap, default is 64K if the
  # image format doesn't have clusters, 4K if the clusters are
  # smaller than that, else the cluster size.  Must be a power of 2
-@@ -2216,7 +2225,9 @@
+@@ -2212,7 +2221,9 @@
  { 'struct': 'DriveMirror',
'data': { '*job-id': 'str', 'device': 'str', 'target': 'str',
  '*format': 'str', '*node-name': 'str', '*replaces': 'str',
@@ -394,7 +394,7 @@ index 746d1694c2..45ab548dfe 100644
  '*speed': 'int', '*granularity': 'uint32',
  '*buf-size': 'int', '*on-source-error': 'BlockdevOnError',
  '*on-target-error': 

Re: [pve-devel] [RFC storage 10/23] plugin: introduce new_backup_provider() method

2024-07-25 Thread Max Carrara
On Tue Jul 23, 2024 at 11:56 AM CEST, Fiona Ebner wrote:
> The new_backup_provider() method can be used by storage plugins for
> external backup providers. If the method returns a provider, Proxmox
> VE will use callbacks to that provider for backups and restore instead
> of using its usual backup/restore mechanisms.
>
> API age and version are both bumped.
>
> The backup provider API is split into two parts, both of which again
> need different implementations for VM and LXC guests:
>
> 1. Backup API
>
> There are hook callbacks for the start/end/abort phases of guest backups
> as well as for start/end/abort phases of a whole backup job.
>
> The backup_get_mechanism() method is used to decide on the backup
> mechanism. Currently only 'nbd' for VMs and 'directory' for containers
> are possible. It also let's the plugin decide whether to use a bitmap
> for incremental VM backup or not.
>
> Next, there are methods for backing up guest and firewall
> configuration as well as for the backup mechanisms:
>
> - a container filesystem using a provided directory. The directory
>   contains an unchanging copy of the container's file system.
>
> - a VM disk using a provided NBD export. The export is an unchanging
>   copy of the VM's disk. Either the full image, or in case a bitmap is
>   used, the dirty parts of the image since the last time the bitmap
>   was used for a successful backup. Reading outside of the dirty parts
>   will result in an error. After backing up each part of the disk, it
>   should be discarded in the export to avoid unnecessary space usage
>   on the Proxmox VE side (there is an associated fleecing image).
>
> Finally, some helpers like getting the provider name or volume ID for
> the backup target, as well as for handling the backup log.
>
> 2. Restore API
>
> The restore_get_mechanism() method is used to decide on the restore
> mechanism. Currently, only 'qemu-img' for VMs and 'directory' and
> 'tar' for containers are possible.
>
> Next, methods for extracting the guest and firewall configuration and
> the implementations of the restore mechanism. It is enough to
> implement one restore mechanism per guest type of course:
>
> - for VMs, with the 'qemu-img' mechanism, the backup provider gives a
>   path to the disk image that will be restored. The path should be
>   something qemu-img can deal with, e.g. can also be an NBD URI.
>
> - for containers, with the 'directory' mechanism, the backup provider
>   gives the path to a directory with the full filesystem structure of
>   the container.
>
> - for containers, with the 'tar' mechanism, the backup provider gives
>   the path to a (potentially compressed) tar archive with the full
>   filesystem structure of the container.
>
> For VMs, there is also a restore_qemu_get_device_info() helper
> required, to get the disks included in the backup and their sizes.
>
> See the PVE::BackupProvider::Plugin module for the full API
> documentation.
>
> Signed-off-by: Fiona Ebner 

Some overall thoughts:

1.  I'm really, really happy to see documentation in this module here,
that's fantastic! :)

While the contents of the docs seem fine, I would suggest you use
POD instead. You can find an example in one of my recent series. [1]
I mainly prefer POD solely because it's what Perl uses; it also
indirectly makes sure we all use the same kind of format for
documenting our Perl code.

Of course, we've currently not decided on any particular format, but
because the opportunity arose, I wanted to pitch POD here
nevertheless. ;)

2.  I would personally prefer a namespace like `PVE::Backup::Provider`
instead of `PVE::BackupProvider`, simply because it leaves room for
further packages and reduces churn in the long term, IMO.

The same goes for backup provider plugins - IMO namespacing them
like e.g. `PVE::Backup::Provider::Plugin::Foo` where `Foo` is a
(concrete) plugin.

While this seems long or somewhat excessive, I think it enforces a
clear package / module hierarchy and keeps things tidier in the long
term, and those couple extra keystrokes don't really hurt anyone.

There are some more comments inline, though I want to mention that I
really like your work so far! :)

[1]: https://lists.proxmox.com/pipermail/pve-devel/2024-July/064703.html

> ---
>  src/PVE/BackupProvider/Makefile  |   6 +
>  src/PVE/BackupProvider/Plugin.pm | 343 +++
>  src/PVE/Makefile |   1 +
>  src/PVE/Storage.pm   |  12 +-
>  src/PVE/Storage/Plugin.pm|  15 ++
>  5 files changed, 375 insertions(+), 2 deletions(-)
>  create mode 100644 src/PVE/BackupProvider/Makefile
>  create mode 100644 src/PVE/BackupProvider/Plugin.pm
>
> diff --git a/src/PVE/BackupProvider/Makefile b/src/PVE/BackupProvider/Makefile
> new file mode 100644
> index 000..53cccac
> --- /dev/null
> +++ b/src/PVE/BackupProvider/Makefile
> @@ -0,0 +1,6 @@
> +SOURCES = Plugin.pm
> +
> +.PHONY: insta
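
To make the shape of the proposed API more concrete, here is a minimal
sketch of a hypothetical provider plugin. Only new_backup_provider(),
backup_get_mechanism() and restore_get_mechanism() are names taken from
the series itself; the package name, the method signatures and the
storage-plugin glue are assumptions for illustration only:

    package PVE::BackupProvider::ExampleProvider;

    use strict;
    use warnings;

    sub new {
        my ($class, $scfg, $storeid, $log_function) = @_;
        # keep the storage configuration around for the later callbacks
        return bless {
            scfg => $scfg,
            storeid => $storeid,
            log => $log_function,
        }, $class;
    }

    # Per the series, backup currently supports 'nbd' for VMs and
    # 'directory' for containers.
    sub backup_get_mechanism {
        my ($self, $vmid, $vmtype) = @_;
        return $vmtype eq 'qemu' ? 'nbd' : 'directory';
    }

    # Restore currently supports 'qemu-img' for VMs and 'directory' or
    # 'tar' for containers; the volume-name heuristic here is assumed.
    sub restore_get_mechanism {
        my ($self, $volname) = @_;
        return $volname =~ /qemu/ ? 'qemu-img' : 'tar';
    }

    1;

    # And in the storage plugin (signature assumed), the provider would
    # then be handed out via:
    sub new_backup_provider {
        my ($class, $scfg, $storeid, $log_function) = @_;
        return PVE::BackupProvider::ExampleProvider->new($scfg, $storeid, $log_function);
    }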

[pve-devel] [PATCH proxmox-perl-rs] update to proxmox-log 0.2

2024-07-25 Thread Fabian Grünbichler
Signed-off-by: Fabian Grünbichler 
---
 common/src/logger.rs  | 4 +---
 pmg-rs/Cargo.toml | 2 +-
 pmg-rs/debian/control | 2 +-
 pve-rs/Cargo.toml | 2 +-
 pve-rs/debian/control | 2 +-
 5 files changed, 5 insertions(+), 7 deletions(-)

diff --git a/common/src/logger.rs b/common/src/logger.rs
index b54e658..1c8940b 100644
--- a/common/src/logger.rs
+++ b/common/src/logger.rs
@@ -5,9 +5,7 @@ pub fn init(env_var_name: &str, default_log_level: &str) {
 if let Err(e) = default_log_level
 .parse()
 .map_err(Error::from)
-.and_then(|default_log_level| {
-proxmox_log::init_logger(env_var_name, default_log_level, "")
-})
+.and_then(|default_log_level| proxmox_log::init_logger(env_var_name, default_log_level))
 {
 eprintln!("could not set up env_logger: {e:?}");
 }
diff --git a/pmg-rs/Cargo.toml b/pmg-rs/Cargo.toml
index 6296b64..2af92ee 100644
--- a/pmg-rs/Cargo.toml
+++ b/pmg-rs/Cargo.toml
@@ -35,7 +35,7 @@ proxmox-apt-api-types = "1.0"
 proxmox-config-digest = "0.1"
 proxmox-http = { version = "0.9", features = ["client-sync", "client-trait"] }
 proxmox-http-error = "0.1.0"
-proxmox-log = "0.1"
+proxmox-log = "0.2"
 proxmox-notify = "0.4"
 proxmox-subscription = "0.4"
 proxmox-sys = "0.6"
diff --git a/pmg-rs/debian/control b/pmg-rs/debian/control
index 24ecdc4..2904b55 100644
--- a/pmg-rs/debian/control
+++ b/pmg-rs/debian/control
@@ -25,7 +25,7 @@ Build-Depends: cargo:native ,
librust-proxmox-http-0.9+client-trait-dev,
librust-proxmox-http-0.9+default-dev,
librust-proxmox-http-error-0.1+default-dev,
-   librust-proxmox-log-0.1+default-dev,
+   librust-proxmox-log-0.2+default-dev,
librust-proxmox-notify-0.4+default-dev,
librust-proxmox-subscription-0.4+default-dev,
librust-proxmox-sys-0.6+default-dev,
diff --git a/pve-rs/Cargo.toml b/pve-rs/Cargo.toml
index ddae988..b6abf6c 100644
--- a/pve-rs/Cargo.toml
+++ b/pve-rs/Cargo.toml
@@ -36,7 +36,7 @@ proxmox-apt-api-types = "1.0"
 proxmox-config-digest = "0.1"
 proxmox-http = { version = "0.9", features = ["client-sync", "client-trait"] }
 proxmox-http-error = "0.1.0"
-proxmox-log = "0.1"
+proxmox-log = "0.2"
 proxmox-notify = { version = "0.4", features = ["pve-context"] }
 proxmox-openid = "0.10"
 proxmox-resource-scheduling = "0.3.0"
diff --git a/pve-rs/debian/control b/pve-rs/debian/control
index 0fabbe9..6191fdb 100644
--- a/pve-rs/debian/control
+++ b/pve-rs/debian/control
@@ -23,7 +23,7 @@ Build-Depends: cargo:native ,
librust-proxmox-http-0.9+client-trait-dev,
librust-proxmox-http-0.9+default-dev,
librust-proxmox-http-error-0.1+default-dev,
-   librust-proxmox-log-0.1+default-dev,
+   librust-proxmox-log-0.2+default-dev,
librust-proxmox-notify-0.4+default-dev,
librust-proxmox-notify-0.4+pve-context-dev,
librust-proxmox-openid-0.10+default-dev,
-- 
2.39.2



[pve-devel] [PATCH v2 qemu-server] resume: bump timeout for query-status

2024-07-25 Thread Fiona Ebner
As reported in the community forum [0], after migration, the VM might
not immediately be able to respond to QMP commands, which means the VM
could fail to resume and stay in paused state on the target.

The reason is that activating the block drives in QEMU can take a bit
of time. For example, it might be necessary to invalidate the caches
(where for raw devices a flush might be needed) and the request
alignment and size of the block device needs to be queried.

In [0], an external Ceph cluster with krbd is used, and the initial
read to the block device after migration, for probing the request
alignment, takes a bit over 10 seconds[1]. Use 60 seconds as the new
timeout to be on the safe side for the future.

All callers are inside workers or via the 'qm' CLI command, so bumping
beyond 30 seconds is fine.

[0]: https://forum.proxmox.com/threads/149610/

Signed-off-by: Fiona Ebner 
---

Changes in v2:
* improve commit message with new findings from the forum thread

 PVE/QemuServer.pm | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index bf59b091..9e840912 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -6461,7 +6461,9 @@ sub vm_resume {
 my ($vmid, $skiplock, $nocheck) = @_;
 
 PVE::QemuConfig->lock_config($vmid, sub {
-   my $res = mon_cmd($vmid, 'query-status');
+   # After migration, the VM might not immediately be able to respond to QMP commands, because
+   # activating the block devices might take a bit of time.
+   my $res = mon_cmd($vmid, 'query-status', timeout => 60);
my $resume_cmd = 'cont';
my $reset = 0;
my $conf;
-- 
2.39.2



Re: [pve-devel] [PATCH qemu-server] resume: bump timeout for query-status

2024-07-25 Thread Fiona Ebner
Sent a v2 improving the commit message with new findings:
https://lists.proxmox.com/pipermail/pve-devel/2024-July/064908.html


Re: [pve-devel] [PATCH storage] volume import: assume target API version is at least 9

2024-07-25 Thread Thomas Lamprecht
For the record, we talked about this in person for a bit with the following
outcome:

- there was a bit of a misunderstanding w.r.t. my heavy exaggeration for the
  point's sake, I really did not mean that as an accusation at all, but that's
  now talked out

- it's a good point that the package that was included in the first 7.0 ISO
  release was already new enough API version wise, and we also checked the
  package archives, and there we also got only new enough libpve-storage-perl
  package versions, so we can keep the change as is.
  We also agreed that it would be great to mention such analysis in the commit
  message next time. I know Fiona is very thorough with that stuff, and that
  this time it was only left out due to the difference in how upgrade
  requirements and recommendations got interpreted by her and me, so I mention
  this mostly for other readers, as it applies to all of us.

- we might want to document this expectation w.r.t. API backward compat more
  definitively in a more approachable way, but should ensure that it's clarified
  that this is for developers, not users, to avoid users thinking it's always
  fine to upgrade from an outdated point release to a newer major release.

tl;dr: got sorted out and change can be kept as is


Re: [pve-devel] [RFC storage 10/23] plugin: introduce new_backup_provider() method

2024-07-25 Thread Fiona Ebner
On 25.07.24 at 11:48, Max Carrara wrote:
> On Tue Jul 23, 2024 at 11:56 AM CEST, Fiona Ebner wrote:
>> Signed-off-by: Fiona Ebner 
> 
> Some overall thoughts:
> 
> 1.  I'm really, really happy to see documentation in this module here,
> that's fantastic! :)
> 
> While the contents of the docs seem fine, I would suggest you use
> POD instead. You can find an example in one of my recent series. [1]
> I mainly prefer POD solely because it's what Perl uses; it also
> indirectly makes sure we all use the same kind of format for
> documenting our Perl code.
> 
> Of course, we've currently not decided on any particular format, but
> because the opportunity arose, I wanted to pitch POD here
> nevertheless. ;)
> 

I'll look into it for v2. Agreed, following a standard for documenting
an API module has its merits.

> 2.  I would personally prefer a namespace like `PVE::Backup::Provider`
> instead of `PVE::BackupProvider`, simply because it leaves room for
> further packages and reduces churn in the long term, IMO.
> 

There's a risk though that PVE::Backup::Provider and PVE::Backup::Foo
are unrelated things that have no real business sharing a namespace.

> The same goes for backup provider plugins - IMO namespacing them
> like e.g. `PVE::Backup::Provider::Plugin::Foo` where `Foo` is a
> (concrete) plugin.
> 

The BackupProvider namespace is already intended for the plugins; adding
an extra level with "Plugin" would just bloat the module names,
especially if we decide to go the same route as for storage plugins and
have a "Custom" sub-namespace.

> While this seems long or somewhat excessive, I think it enforces a
> clear package / module hierarchy and keeps things tidier in the long
> term, and those couple extra keystrokes don't really hurt anyone.
> 

I get where you're coming from, I just feel like BackupProvider might be
better as its own separate thing, containing the plugins for the
specific purpose. But I don't have a strong opinion about it, and am
fine making such changes if other developers prefer it too :)

> The above two methods - `backup_nbd` and `backup_directory` - is there
> perhaps a way to merge them? I'm not sure if what I'm having in mind
> here is actually feasible, but what I mean is "making the method
> agnostic to the type of backup". As in, perhaps pass a hash that
> contains a `type` key for the type of backup being made, and instead of
> having long method signatures, include the remaining parameters as the
> remaining keys. For example:
> 
> {
> 'type' => 'lxc-dir',  # type names are just examples here
> 'directory' => '/foo/bar/baz',
> 'bandwidth_limit' => 42,
> ...
> }
> 
> {
> 'type' => 'vm-nbd',
> 'device_name' => '...',
> 'nbd_path' => '...',
> ...
> }
> 
> You get the point :P
> 
> IMO it would make it easier to extend later, and also make it more
> straightforward to introduce new parameters / deprecate old ones, while
> the method signature stays stable otherwise.
> 
> The same goes for the different cleanup methods further down below;
> instead of having a separate method for each "type of cleanup being
> performed", let the implementor handle it according to the data the
> method receives.
> 
> IMHO I think it's best to be completely agnostic over VM / LXC backups
> (and their specific types) wherever possible and let the data describe
> what's going on instead.
> 

The point about extensibility is a good one. The API wouldn't need to
change even if we implement new mechanisms. But thinking about it some
more, is there anything really gained? We will not force plugins to
implement the methods for new mechanisms, of course; they can just
continue supporting what they support. Each mechanism will have its own
specific set of parameters, so throwing everything into a catch-all
method and hash might make it too generic.

Or think about the documentation for the single backup method: it would
become super lengthy and describe all backup mechanisms, while a plugin
most likely only cares about a single one and would have an easier time
with a method that captures that mechanism's parameters explicitly.
Won't the end result be making the implementor's life slightly harder,
because they first need to extract the parameters for the specific mechanism?

> For the specific types we can always then provide helper functions that
> handle common cases that implementors can use.
> 
> Extending on my namespace idea above, those helpers could then land in
> e.g. `PVE::Backup::Provider::Common`, `PVE::Backup::Provider::Common::LXC`,
> etc.
> 

Could you give an example for such a helper? I rather feel like things
like libnbd will be that, i.e. for accessing the NBD export. I'm not sure
we can create common helpers that would benefit multiple providers; each
has their own backend they'll need to talk to after all and might have
quite different needs.
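
To make the trade-off easier to see, here is a rough sketch of the
single-method variant Max proposes; the method name and the parameter
keys are hypothetical, mirroring his examples above, and are not part
of the actual series:

    sub backup {
        my ($self, $params) = @_;

        my $type = $params->{type} or die "missing backup type\n";

        if ($type eq 'vm-nbd') {
            # VM backup from an unchanging NBD export of the disk
            my $device_name = $params->{device_name};
            my $nbd_path = $params->{nbd_path};
            ... # provider-specific transfer happens here
        } elsif ($type eq 'lxc-dir') {
            # container backup from a directory snapshot of the filesystem
            my $directory = $params->{directory};
            my $bwlimit = $params->{bandwidth_limit};
            ... # provider-specific transfer happens here
        } else {
            die "unsupported backup type '$type'\n";
        }
    }

One signature stays stable while new types only add hash keys - which is
exactly the extensibility gain weighed against the fatter, more generic
documentation discussed above.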



Re: [pve-devel] [RFC storage 10/23] plugin: introduce new_backup_provider() method

2024-07-25 Thread Fiona Ebner
On 25.07.24 at 15:11, Fiona Ebner wrote:
> On 25.07.24 at 11:48, Max Carrara wrote:
>> For the specific types we can always then provide helper functions that
>> handle common cases that implementors can use.
>>
>> Extending on my namespace idea above, those helpers could then land in
>> e.g. `PVE::Backup::Provider::Common`, `PVE::Backup::Provider::Common::LXC`,
>> etc.
>>
> 
> Could you give an example for such a helper? I rather feel like things
> like libnbd will be that, i.e. for accessing the NBD export, not sure if
> we can create common helpers that would benefit multiple providers, each
> has their own backend they'll need to talk to after all and might have
> quite different needs.

Actually, this just gave me an idea (not for a helper method though :P).
We can provide a simpler mechanism than NBD, by exposing the block
device with qemu-nbd ourselves and also reading the bitmap ourselves
first, then passing the path to the block device and the bitmap info to
the provider.
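
As a rough sketch of that idea (the paths, image format and helper usage
here are assumptions, not code from the series), the fleecing image
could be attached to a kernel NBD device, so the provider is handed a
plain block device path instead of having to speak the NBD protocol
itself:

    use PVE::Tools qw(run_command);

    my $fleecing_image = '/path/to/fleecing.qcow2'; # assumed example path
    my $nbd_dev = '/dev/nbd0'; # requires the nbd kernel module

    # attach the unchanging copy of the disk as a regular block device
    run_command(['qemu-nbd', '--connect', $nbd_dev, '--read-only',
        '--format=qcow2', $fleecing_image]);

    # ... read the dirty bitmap ourselves (e.g. via QMP) and pass $nbd_dev
    # together with the bitmap information to the provider ...

    # tear the export down again once the provider is done
    run_command(['qemu-nbd', '--disconnect', $nbd_dev]);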


Re: [pve-devel] [RFC storage 10/23] plugin: introduce new_backup_provider() method

2024-07-25 Thread Max Carrara
On Thu Jul 25, 2024 at 3:11 PM CEST, Fiona Ebner wrote:
> On 25.07.24 at 11:48, Max Carrara wrote:
> > On Tue Jul 23, 2024 at 11:56 AM CEST, Fiona Ebner wrote:
> >> Signed-off-by: Fiona Ebner 
> > 
> > Some overall thoughts:
> > 
> > 1.  I'm really, really happy to see documentation in this module here,
> > that's fantastic! :)
> > 
> > While the contents of the docs seem fine, I would suggest you use
> > POD instead. You can find an example in one of my recent series. [1]
> > I mainly prefer POD solely because it's what Perl uses; it also
> > indirectly makes sure we all use the same kind of format for
> > documenting our Perl code.
> > 
> > Of course, we've currently not decided on any particular format, but
> > because the opportunity arose, I wanted to pitch POD here
> > nevertheless. ;)
> > 
>
> I'll look into it for v2. Agreed, following a standard for documenting
> an API module has its merits.

Sweet!

>
> > 2.  I would personally prefer a namespace like `PVE::Backup::Provider`
> > instead of `PVE::BackupProvider`, simply because it leaves room for
> > further packages and reduces churn in the long term, IMO.
> > 
>
> There's a risk though that PVE::Backup::Provider and PVE::Backup::Foo
> are unrelated things that have no real business sharing a namespace.

Hmm, fair point - on second thought, `PVE::Backup` indeed seems a bit
too generic.

>
> > The same goes for backup provider plugins - IMO namespacing them
> > like e.g. `PVE::Backup::Provider::Plugin::Foo` where `Foo` is a
> > (concrete) plugin.
> > 
>
> The BackupProvider namespace is already intended for the plugins; adding
> an extra level with "Plugin" would just bloat the module names,
> especially if we decide to go the same route as for storage plugins and
> have a "Custom" sub-namespace.

I understand what you mean, yeah. Would perhaps something like
`PVE::BackupProvider::Plugin::*` be better?

The reason I'm suggesting this is that in `PVE::Storage::*`,
every plugin lives alongside `Plugin.pm`, even though the extra
directory wouldn't really hurt IMO. E.g. `PVE::Storage::DirPlugin` would
then be `PVE::Storage::Plugin::Dir`.

The reason I'm suggesting *something* like that here is to reduce
some clutter and simply keep related things grouped. Also, IMO it's
better to consider the overall package structure beforehand, simply to
avoid any churn in the future - something I've noticed while poking
around the storage API.

Maybe I'm being a bit pedantic here (tbh I probably am), but it's just
something that I wanted to mention anyhow.

>
> > While this seems long or somewhat excessive, I think it enforces a
> > clear package / module hierarchy and keeps things tidier in the long
> > term, and those couple extra keystrokes don't really hurt anyone.
> > 
>
> I get where you're coming from, I just feel like BackupProvider might be
> better as its own separate thing, containing the plugins for the
> specific purpose. But I don't have a strong opinion about it, and am
> fine making such changes if other developers prefer it too :)

I agree now that BackupProvider should remain on its own; I otherwise
don't have any strong opinions about it either (though I would like to
shove plugins one directory level deeper ;P). As I said, I'm probably
just a bit pedantic here; feel free to disregard these suggestions if
you think they're not applicable :)

>
> > The above two methods - `backup_nbd` and `backup_directory` - is there
> > perhaps a way to merge them? I'm not sure if what I'm having in mind
> > here is actually feasible, but what I mean is "making the method
> > agnostic to the type of backup". As in, perhaps pass a hash that
> > contains a `type` key for the type of backup being made, and instead of
> > having long method signatures, include the remaining parameters as the
> > remaining keys. For example:
> > 
> > {
> > 'type' => 'lxc-dir',  # type names are just examples here
> > 'directory' => '/foo/bar/baz',
> > 'bandwidth_limit' => 42,
> > ...
> > }
> > 
> > {
> > 'type' => 'vm-nbd',
> > 'device_name' => '...',
> > 'nbd_path' => '...',
> > ...
> > }
> > 
> > You get the point :P
> > 
> > IMO it would make it easier to extend later, and also make it more
> > straightforward to introduce new parameters / deprecate old ones, while
> > the method signature stays stable otherwise.
> > 
> > The same goes for the different cleanup methods further down below;
> > instead of having a separate method for each "type of cleanup being
> > performed", let the implementor handle it according to the data the
> > method receives.
> > 
> > IMHO I think it's best to be completely agnostic over VM / LXC backups
> > (and their specific types) wherever possible and let the data describe
> > what's going on instead.
> > 
>
> The point about extensibility is a good one. The API wouldn't need to
> change even if we implement new mechanisms. But thinking ab

Re: [pve-devel] applied: [PATCH pve-xtermjs] termproxy: fix the command line help text

2024-07-25 Thread Thomas Lamprecht
On 25/07/2024 10:58, Wolfgang Bumiller wrote:
> applied, thanks
> 

this was a good stop-gap, but as it wasn't much work I followed this up with
fixing the implementation. While it's just an internal tool, it's still
nicer to have it behave like all CLI tools:

https://git.proxmox.com/?p=pve-xtermjs.git;a=commitdiff;h=4b8568cb4ff20a87bf2859bb0eb63c5afd3831d7;hp=69400e983e949a4f1a307a755b02af5fcd3efe65


Re: [pve-devel] [PATCH pve-xtermjs v3] termproxy: allow to use unix sockets for auth requests

2024-07-25 Thread Thomas Lamprecht
On 24/07/2024 13:33, Dietmar Maurer wrote:
> Remove ureq, because it does not support unix sockets.
> 
> Signed-off-by: Dietmar Maurer 
> ---
> 
> Changes since v2:
>   split out the command line help text change into patch:
>   [PATCH pve-xtermjs] termproxy: fix the command line help text
> 
> Changes since v1:
> 
>  - use extra --authsocket cli option

FYI: I renamed this and the pre-existing --authport option to --auth-socket and
--auth-port, respectively, with the latter getting a fallback to the old variant
for backward compat.

And yeah, I know that this is borderline useless as it's just an internal tool,
but I got a few spare minutes in the train where starting something more
involved was not justified, and making this consistent with the `--port-as-fd`
option and our modern CLI option style in general felt like an OK clean-up to do.

Anyhow, now bumped and uploaded as proxmox-termproxy in version 1.1.0.

