Re: [pve-devel] Storage plugin questions

2025-03-21 Thread Fiona Ebner
On 18.03.25 at 11:32, Max Schettler via pve-devel wrote:
> - is it possible to integrate with the webinterface, to allow creation of a 
> custom storage provider from there, instead of the CLI?

Not yet and we can't give a timeline/promises, but others requested this
as well, see:
https://lore.proxmox.com/pve-devel/d8e8ozgx1hi7.qh6mqs4gl...@proxmox.com/

> - when an image is deleted, are derived snapshots supposed to be removed as 
> well? Depending on the storage type this is a technical necessity, but if 
> not, should it still be done? From the perspective of the storage provider 
> it'd be difficult to prevent stale snapshots of deleted VMs to persist.

Yes, storage snapshots (the ones created via volume_snapshot()) in
Proxmox VE belong to the volume they were created from, and removing
the volume should also remove those snapshots.
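
For a third-party plugin, that cleanup naturally lives in free_image().
A minimal sketch (method signatures follow PVE::Storage::Plugin;
list_snapshots() is a hypothetical backend helper):

package PVE::Storage::Custom::ExamplePlugin;

use strict;
use warnings;

use base qw(PVE::Storage::Plugin);

sub free_image {
    my ($class, $storeid, $scfg, $volname, $isBase, $format) = @_;

    # remove leftover snapshots first, so no stale snapshots of deleted
    # guests persist on the backend
    for my $snap ($class->list_snapshots($scfg, $volname)) {
        $class->volume_snapshot_delete($scfg, $storeid, $volname, $snap);
    }

    return $class->SUPER::free_image($storeid, $scfg, $volname, $isBase, $format);
}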

Best Regards,
Fiona





[pve-devel] [PATCH proxmox] fix #6143: notify: allow overriding notification templates

2025-03-21 Thread Alexander Zeidler
Previously, notification templates could be modified by the user, but
these modifications were overwritten again when newer package versions
of pve-manager and proxmox-backup were installed.

Now, override templates can be created cluster-wide under
"/etc/{pve,proxmox-backup}/notification-templates/{namespace}", and
these take priority. The folder structure has to be created and
populated manually (e.g. /etc/pve/notification-templates/default).

If override templates do not exist or their rendering fails, the
vendor templates in
"/usr/share/{pve-manager,proxmox-backup}/templates/default/" are used.

Sequence: [override html -> vendor html ->] override txt -> vendor txt

An error is only returned if none of the template candidates could be
used. Using an override template is not logged.

Signed-off-by: Alexander Zeidler 
---
This patch was previously sent as an RFC and now implements all
suggestions from Lukas Wagner:
https://lore.proxmox.com/pve-devel/20250313151734.258337-1-a.zeid...@proxmox.com/

This patch should not be merged until the existing PVE and PBS
templates have been audited. For PVE this is currently being done, for
PBS a patch is already sent:
https://lore.proxmox.com/pbs-devel/20250321122521.198725-1-l.wag...@proxmox.com/

The documentation changes for PVE and PBS (steps for override template
creation, list of variables and helpers) will be sent in separate
patches after the above-mentioned audit is completed.


 proxmox-notify/src/context/mod.rs  |   4 +-
 proxmox-notify/src/context/pbs.rs  |   9 +-
 proxmox-notify/src/context/pve.rs  |  10 +-
 proxmox-notify/src/context/test.rs |   2 +
 proxmox-notify/src/renderer/mod.rs | 154 -
 5 files changed, 129 insertions(+), 50 deletions(-)

diff --git a/proxmox-notify/src/context/mod.rs b/proxmox-notify/src/context/mod.rs
index c0a5a13b..8b6e2c43 100644
--- a/proxmox-notify/src/context/mod.rs
+++ b/proxmox-notify/src/context/mod.rs
@@ -1,6 +1,7 @@
 use std::fmt::Debug;
 use std::sync::Mutex;
 
+use crate::renderer::TemplateSource;
 use crate::Error;
 
 #[cfg(any(feature = "pve-context", feature = "pbs-context"))]
@@ -24,11 +25,12 @@ pub trait Context: Send + Sync + Debug {
 fn http_proxy_config(&self) -> Option;
 /// Return default config for built-in targets/matchers.
 fn default_config(&self) -> &'static str;
-/// Lookup a template in a certain (optional) namespace
+/// Return the path of `filename` from `source` and a certain (optional) `namespace`
 fn lookup_template(
 &self,
 filename: &str,
 namespace: Option<&str>,
+source: TemplateSource,
 ) -> Result, Error>;
 }
 
diff --git a/proxmox-notify/src/context/pbs.rs b/proxmox-notify/src/context/pbs.rs
index e8106c57..3e5da59c 100644
--- a/proxmox-notify/src/context/pbs.rs
+++ b/proxmox-notify/src/context/pbs.rs
@@ -7,6 +7,7 @@ use proxmox_schema::{ObjectSchema, Schema, StringSchema};
 use proxmox_section_config::{SectionConfig, SectionConfigPlugin};
 
 use crate::context::{common, Context};
+use crate::renderer::TemplateSource;
 use crate::Error;
 
 const PBS_USER_CFG_FILENAME: &str = "/etc/proxmox-backup/user.cfg";
@@ -109,8 +110,14 @@ impl Context for PBSContext {
 &self,
 filename: &str,
 namespace: Option<&str>,
+source: TemplateSource,
 ) -> Result, Error> {
-let path = Path::new("/usr/share/proxmox-backup/templates")
+let path = match source {
+TemplateSource::Vendor => "/usr/share/proxmox-backup/templates",
+TemplateSource::Override => "/etc/proxmox-backup/notification-templates",
+};
+
+let path = Path::new(&path)
 .join(namespace.unwrap_or("default"))
 .join(filename);
 
diff --git a/proxmox-notify/src/context/pve.rs 
b/proxmox-notify/src/context/pve.rs
index d49ab27c..a97cce26 100644
--- a/proxmox-notify/src/context/pve.rs
+++ b/proxmox-notify/src/context/pve.rs
@@ -1,4 +1,5 @@
 use crate::context::{common, Context};
+use crate::renderer::TemplateSource;
 use crate::Error;
 use std::path::Path;
 
@@ -58,10 +59,17 @@ impl Context for PVEContext {
 &self,
 filename: &str,
 namespace: Option<&str>,
+source: TemplateSource,
 ) -> Result, Error> {
-let path = Path::new("/usr/share/pve-manager/templates")
+let path = match source {
+TemplateSource::Vendor => "/usr/share/pve-manager/templates",
+TemplateSource::Override => "/etc/pve/notification-templates",
+};
+
+let path = Path::new(&path)
 .join(namespace.unwrap_or("default"))
 .join(filename);
+
 let template_string = proxmox_sys::fs::file_read_optional_string(path)
.map_err(|err| Error::Generic(format!("could not load template: {err}")))?;
 Ok(template_string)
diff --git a/proxmox-notify/src/context/test.rs b/proxmox-notify/src/context/test.rs
index 5df25d05..13ef6eae 100644
--- a/proxmox

[pve-devel] [PATCH qemu v5 02/32] PVE backup: get device info: allow caller to specify filter for which devices use fleecing

2025-03-21 Thread Fiona Ebner
For providing snapshot-access to external backup providers, EFI and
TPM also need an associated fleecing image. The new caller will thus
need a different filter.

Signed-off-by: Fiona Ebner 
---
 pve-backup.c | 9 +
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/pve-backup.c b/pve-backup.c
index b76d1bbe22..b22064a64e 100644
--- a/pve-backup.c
+++ b/pve-backup.c
@@ -717,7 +717,7 @@ static void create_backup_jobs_bh(void *opaque) {
 /*
  * EFI disk and TPM state are small and it's just not worth setting up fleecing for them.
  */
-static bool device_uses_fleecing(const char *device_id)
+static bool fleecing_no_efi_tpm(const char *device_id)
 {
 return strncmp(device_id, "drive-efidisk", 13) && strncmp(device_id, "drive-tpmstate", 14);
 }
@@ -729,7 +729,7 @@ static bool device_uses_fleecing(const char *device_id)
  */
 static GList coroutine_fn GRAPH_RDLOCK *get_device_info(
 const char *devlist,
-bool fleecing,
+bool (*device_uses_fleecing)(const char*),
 Error **errp)
 {
 gchar **devs = NULL;
@@ -755,7 +755,7 @@ static GList coroutine_fn GRAPH_RDLOCK *get_device_info(
 di->bs = bs;
 di->device_name = g_strdup(bdrv_get_device_name(bs));
 
-if (fleecing && device_uses_fleecing(*d)) {
+if (device_uses_fleecing && device_uses_fleecing(*d)) {
 g_autofree gchar *fleecing_devid = g_strconcat(*d, "-fleecing", NULL);
 BlockBackend *fleecing_blk = blk_by_name(fleecing_devid);
 if (!fleecing_blk) {
@@ -858,7 +858,8 @@ UuidInfo coroutine_fn *qmp_backup(
 format = has_format ? format : BACKUP_FORMAT_VMA;
 
 bdrv_graph_co_rdlock();
-di_list = get_device_info(devlist, has_fleecing && fleecing, &local_err);
+di_list = get_device_info(devlist, (has_fleecing && fleecing) ? fleecing_no_efi_tpm : NULL,
+  &local_err);
 bdrv_graph_co_rdunlock();
 if (local_err) {
 error_propagate(errp, local_err);
-- 
2.39.5






[pve-devel] [PATCH qemu-server v5 20/32] backup restore: external: hardening check for untrusted source image

2025-03-21 Thread Fiona Ebner
Suggested-by: Fabian Grünbichler 
Signed-off-by: Fiona Ebner 
---

Changes in v5:
* adapt to changed file_size_info() signature

 PVE/QemuServer.pm | 6 ++
 1 file changed, 6 insertions(+)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index cb8447c9..fcec60b2 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -7257,6 +7257,12 @@ sub restore_external_archive {
$backup_provider->restore_vm_volume_init($volname, $storeid, $d->{devname}, {});
my $source_path = $info->{'qemu-img-path'}
or die "did not get source image path from backup provider\n";
+
+   print "importing drive '$d->{devname}' from '$source_path'\n";
+
+   # safety check for untrusted source image
+   PVE::Storage::file_size_info($source_path, undef, 'auto-detect', 1);
+
eval {
qemu_img_convert(
$source_path, $d->{volid}, $d->{size}, undef, 0, $options->{bwlimit});
-- 
2.39.5





[pve-devel] [PATCH qemu-server v5 16/32] backup: fleecing: use exact size when allocating non-raw fleecing images

2025-03-21 Thread Fiona Ebner
A non-1KiB aligned source image could cause issues when used with
qcow2 fleecing images, e.g. for an image with size 4.5 KiB:
> Size mismatch for 'drive-tpmstate0-backup-fleecing' - sector count 10 != 9

Raw images are attached to QEMU with an explicit 'size' argument, so
rounding up before allocation doesn't matter, but it does for qcow2.

Signed-off-by: Fiona Ebner 
---

New in v5.

 PVE/VZDump/QemuServer.pm | 10 +-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/PVE/VZDump/QemuServer.pm b/PVE/VZDump/QemuServer.pm
index 4a258895..6562aba6 100644
--- a/PVE/VZDump/QemuServer.pm
+++ b/PVE/VZDump/QemuServer.pm
@@ -558,7 +558,15 @@ my sub allocate_fleecing_images {
my $name = "vm-$vmid-fleece-$n";
$name .= ".$format" if $scfg->{path};
 
-my $size = PVE::Tools::convert_size($di->{'block-node-size'}, 'b' => 'kb');
+   my $size;
+   if ($format ne 'raw') {
+   # Since non-raw images cannot be attached with an explicit 'size' parameter to
+   # QEMU later, pass the exact size to the storage layer. This makes qcow2
+   # fleecing images work for non-1KiB-aligned source images.
+   $size = $di->{'block-node-size'}/1024;
+   } else {
+   $size = PVE::Tools::convert_size($di->{'block-node-size'}, 'b' => 'kb');
+   }
 
$di->{'fleece-volid'} = PVE::Storage::vdisk_alloc(
$self->{storecfg}, $fleecing_storeid, $vmid, $format, $name, $size);
-- 
2.39.5






[pve-devel] [PATCH qemu-server v5 22/32] backup: support 'missing-recreated' bitmap action

2025-03-21 Thread Fiona Ebner
A new 'missing-recreated' action was added on the QEMU side.

Signed-off-by: Fiona Ebner 
---

New in v5.

 PVE/VZDump/QemuServer.pm | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/PVE/VZDump/QemuServer.pm b/PVE/VZDump/QemuServer.pm
index c2d64192..068046ae 100644
--- a/PVE/VZDump/QemuServer.pm
+++ b/PVE/VZDump/QemuServer.pm
@@ -316,6 +316,8 @@ my $bitmap_action_to_human = sub {
}
 } elsif ($action eq "invalid") {
return "existing bitmap was invalid and has been cleared";
+} elsif ($action eq "missing-recreated") {
+   return "expected bitmap was missing and has been recreated";
 } else {
return "unknown";
 }
@@ -1372,6 +1374,7 @@ my sub backup_access_to_volume_info {
'new' => 'new',
'used' => 'reuse',
'invalid' => 'new',
+   'missing-recreated' => 'new',
 };
 
 my $volumes = {};
-- 
2.39.5






[pve-devel] [PATCH storage v5 09/32] plugin: introduce new_backup_provider() method

2025-03-21 Thread Fiona Ebner
The new_backup_provider() method can be used by storage plugins for
external backup providers. If the method returns a provider, Proxmox
VE will use callbacks to that provider for backups and restore instead
of using its usual backup/restore mechanisms.

The backup provider API is split into two parts, both of which again
need different implementations for VM and LXC guests:

1. Backup API

There are two hook callback functions, namely:
1. job_hook() is called during the start/end/abort phases of the whole
   backup job.
2. backup_hook() is called during the start/end/abort phases of the
   backup of an individual guest. There also is a 'prepare' phase
   useful for container backups, because the backup method for
   containers itself is executed in the user namespace context
   associated to the container.

The backup_get_mechanism() method is used to decide on the backup
mechanism. Currently, 'file-handle' or 'nbd' is possible for VMs, and
'directory' for containers. The method also lets the plugin indicate
whether to use a bitmap for incremental VM backup or not. It is enough
to implement one mechanism for VMs and one mechanism for containers.
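
As an illustration, a provider plugin's mechanism selection might look
like the following sketch (the call signature matches the qemu-server
and container patches later in this series; the exact return values
are an assumption based on this description, see Base.pm for the
authoritative documentation):

package PVE::BackupProvider::Plugin::Example;

use strict;
use warnings;

use base qw(PVE::BackupProvider::Plugin::Base);

sub backup_get_mechanism {
    my ($self, $vmid, $vmtype) = @_;

    return ('directory') if $vmtype eq 'lxc';
    # returning a bitmap ID as second value requests incremental VM backup
    return ('file-handle', 'example-bitmap') if $vmtype eq 'qemu';
    die "unsupported guest type '$vmtype'\n";
}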

Next, there are methods for backing up the guest's configuration and
data: backup_vm() for VM backup and backup_container() for container
backup, with the latter running inside the user namespace context
associated to the container.

Finally, there are some helpers, e.g. for getting the provider name or
the volume ID for the backup target, as well as for handling the
backup log.

1.1 Backup Mechanisms

VM:

Access to the data on the VM's disk from the time the backup started
is made available via a so-called "snapshot access". This is either
the full image, or in case a bitmap is used, the dirty parts of the
image since the last time the bitmap was used for a successful backup.
Reading outside of the dirty parts will result in an error. After
backing up each part of the disk, it should be discarded in the export
to avoid unnecessary space usage on the Proxmox VE side (there is an
associated fleecing image).

VM mechanism 'file-handle':

The snapshot access is exposed via a file descriptor. A subroutine to
read the dirty regions for incremental backup is provided as well.

VM mechanism 'nbd':

The snapshot access and, if used, bitmap are exported via NBD.

Container mechanism 'directory':

A copy or snapshot of the container's filesystem state is made
available as a directory. The method is executed inside the user
namespace associated to the container.

2. Restore API

The restore_get_mechanism() method is used to decide on the restore
mechanism. Currently, 'qemu-img' for VMs, and 'directory' or 'tar' for
containers are possible. It is enough to implement one mechanism for
VMs and one mechanism for containers.

Next, there are methods for extracting the guest and firewall
configuration, as well as the implementations of the restore mechanism
via a pair of methods: an init method for making the data available to
Proxmox VE, and a cleanup method that is called after the restore.

For VMs, a restore_vm_get_device_info() helper is also required, to
get the disks included in the backup and their sizes.
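
A minimal sketch of such an init/cleanup pair for the 'qemu-img' VM
mechanism (the 'qemu-img-path' key matches the qemu-server restore
code in this series; the path itself is illustrative):

sub restore_vm_volume_init {
    my ($self, $volname, $storeid, $devicename, $info) = @_;

    # make the backed-up disk image available, e.g. start an NBD export
    # or return a plain path to an image file
    return { 'qemu-img-path' => "/run/example-provider/$volname/$devicename.raw" };
}

sub restore_vm_volume_cleanup {
    my ($self, $volname, $storeid, $devicename, $info) = @_;
    # tear down whatever restore_vm_volume_init() set up
    return;
}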

2.1. Restore Mechanisms

VM mechanism 'qemu-img':

The backup provider gives a path to the disk image that will be
restored. The path needs to be something 'qemu-img' can deal with,
e.g. it can also be an NBD URI or similar.
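
For example, a plain path to an image file works, as does a qemu NBD
URI of the form nbd+unix:///exportname?socket=/path/to/socket.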

Container mechanism 'directory':

The backup provider gives the path to a directory with the full
filesystem structure of the container.

Container mechanism 'tar':

The backup provider gives the path to a (potentially compressed) tar
archive with the full filesystem structure of the container.

See the PVE::BackupProvider::Plugin module for the full API
documentation.

Signed-off-by: Fiona Ebner 
---

Changes in v5:
* Split API version+age bump into own commit.
* Replace 'block-device' mechanism with 'file-handle'.

 src/PVE/BackupProvider/Makefile|3 +
 src/PVE/BackupProvider/Plugin/Base.pm  | 1161 
 src/PVE/BackupProvider/Plugin/Makefile |5 +
 src/PVE/Makefile   |1 +
 src/PVE/Storage.pm |8 +
 src/PVE/Storage/Plugin.pm  |   15 +
 6 files changed, 1193 insertions(+)
 create mode 100644 src/PVE/BackupProvider/Makefile
 create mode 100644 src/PVE/BackupProvider/Plugin/Base.pm
 create mode 100644 src/PVE/BackupProvider/Plugin/Makefile

diff --git a/src/PVE/BackupProvider/Makefile b/src/PVE/BackupProvider/Makefile
new file mode 100644
index 000..f018cef
--- /dev/null
+++ b/src/PVE/BackupProvider/Makefile
@@ -0,0 +1,3 @@
+.PHONY: install
+install:
+   make -C Plugin install
diff --git a/src/PVE/BackupProvider/Plugin/Base.pm b/src/PVE/BackupProvider/Plugin/Base.pm
new file mode 100644
index 000..b31a28f
--- /dev/null
+++ b/src/PVE/BackupProvider/Plugin/Base.pm
@@ -0,0 +1,1161 @@
+package PVE::BackupProvider::Plugin::Base;
+
+use strict;
+use warnings;
+
+=pod
+
+=head1 NAME
+
+PVE::BackupProvider::Plugin::Base - Base Plugin for Backup Provider

[pve-devel] [PATCH qemu-server v5 18/32] backup: implement backup for external providers

2025-03-21 Thread Fiona Ebner
The state of the VM's disk images at the time the backup is started is
preserved via a snapshot-access block node. Old data is moved to the
fleecing image when new guest writes come in. The snapshot-access
block node, as well as the associated bitmap in case of incremental
backup, will be made available to the external provider. They are
exported via NBD: for the 'nbd' mechanism, the NBD socket path is
passed to the provider, while for the 'file-handle' mechanism, the NBD
export is made accessible via a file handle and the bitmap information
is made available via a $next_dirty_region->() function. For
'file-handle', the 'nbdinfo' and 'nbdfuse' binaries are required.

The provider can indicate that it wants to do an incremental backup by
returning the bitmap ID that was used for a previous backup and it
will then be told if the bitmap was newly created (either first backup
or old bitmap was invalid) or if the bitmap can be reused.

The provider then reads the parts of the NBD export or virtual file it
needs: either the full disk for a full backup, or the dirty parts
according to the bitmap for an incremental backup. The bitmap has to
be respected; reads to other parts of the image will return an error.
After backing
up each part of the disk, it should be discarded in the export to
avoid unnecessary space usage in the fleecing image (requires the
storage underlying the fleecing image to support discard too).
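
A sketch of the provider-side read loop for the 'file-handle'
mechanism, as it might be called from a backup_vm() implementation
(the 'offset'/'length' pair semantics of $next_dirty_region->() follow
the description above; the chunking and the $write_cb destination are
illustrative assumptions):

my sub backup_dirty_regions {
    my ($fh, $next_dirty_region, $write_cb) = @_;

    # $next_dirty_region->() is assumed to return (offset, length) pairs
    # and an empty list once all dirty regions were seen
    while (my ($offset, $length) = $next_dirty_region->()) {
        sysseek($fh, $offset, 0) // die "sysseek failed - $!\n";
        my $remaining = $length;
        while ($remaining > 0) {
            my $chunk = $remaining < 4 * 1024 * 1024 ? $remaining : 4 * 1024 * 1024;
            my $bytes = sysread($fh, my $buffer, $chunk);
            die "sysread failed - $!\n" if !defined($bytes) || $bytes == 0;
            $write_cb->($offset + $length - $remaining, $buffer);
            $remaining -= $bytes;
        }
        # afterwards, discard the region in the export to free up space
        # in the fleecing image
    }
}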

Signed-off-by: Fiona Ebner 
---

Changes in v5:
* Query for backup-access API support from QEMU via
  'query-proxmox-support' QMP command
* Check nbd/parameters directory instead of nbd/coresize to determine
  whether the module is loaded. coresize was just picked on a whim,
  parameters seems cleaner.
* Switch from 'block-device' to 'file-handle' mechanism. Using
  nbdfuse to mount the NBD export as a virtual file.
* Fix typo: comma instead of semicolon after an assignment.
* Add missing import for QMPHelpers module.

 PVE/VZDump/QemuServer.pm | 400 ++-
 1 file changed, 398 insertions(+), 2 deletions(-)

diff --git a/PVE/VZDump/QemuServer.pm b/PVE/VZDump/QemuServer.pm
index 65f01791..0b22b34d 100644
--- a/PVE/VZDump/QemuServer.pm
+++ b/PVE/VZDump/QemuServer.pm
@@ -3,12 +3,15 @@ package PVE::VZDump::QemuServer;
 use strict;
 use warnings;
 
+use Fcntl qw(:mode);
 use File::Basename;
-use File::Path;
+use File::Path qw(make_path remove_tree);
+use File::stat qw();
 use IO::File;
 use IPC::Open3;
 use JSON;
 use POSIX qw(EINTR EAGAIN);
+use Time::HiRes qw(usleep);
 
 use PVE::Cluster qw(cfs_read_file);
 use PVE::INotify;
@@ -20,7 +23,7 @@ use PVE::QMPClient;
 use PVE::Storage::Plugin;
 use PVE::Storage::PBSPlugin;
 use PVE::Storage;
-use PVE::Tools;
+use PVE::Tools qw(run_command);
 use PVE::VZDump;
 use PVE::Format qw(render_duration render_bytes);
 
@@ -30,6 +33,7 @@ use PVE::QemuServer::Drive qw(checked_volume_format);
 use PVE::QemuServer::Helpers;
 use PVE::QemuServer::Machine;
 use PVE::QemuServer::Monitor qw(mon_cmd);
+use PVE::QemuServer::QMPHelpers;
 
 use base qw (PVE::VZDump::Plugin);
 
@@ -284,6 +288,8 @@ sub archive {
 
 if ($self->{vzdump}->{opts}->{pbs}) {
$self->archive_pbs($task, $vmid);
+} elsif ($self->{vzdump}->{'backup-provider'}) {
+   $self->archive_external($task, $vmid);
 } else {
$self->archive_vma($task, $vmid, $filename, $comp);
 }
@@ -1148,11 +1154,90 @@ sub snapshot {
 # nothing to do
 }
 
+my sub cleanup_file_handles {
+my ($self, $file_handles) = @_;
+
+for my $file_handle ($file_handles->@*) {
close($file_handle) or $self->log('warn', "unable to close file handle - $!");
+}
+}
+
+my sub cleanup_nbd_mounts {
+my ($self, $info) = @_;
+
+for my $mount_point (keys $info->%*) {
+   my $pid_file = delete($info->{$mount_point}->{'pid-file'});
unlink($pid_file) or $self->log('warn', "unable to unlink '$pid_file' - $!");
+   # Do a lazy unmount, because the target might still be busy even if the file handle was
+   # already closed.
+   eval { run_command(['fusermount', '-z', '-u', $mount_point ]); };
+   if (my $err = $@) {
+   delete $info->{$mount_point};
+   $self->log('warn', "unable to unmount NBD backup source 
'$mount_point' - $err");
+   }
+}
+
+# Wait for the unmount before cleaning up child PIDs to avoid 'nbdfuse' processes being
+# interrupted by the signals issued there.
+my $waited;
+my $wait_limit = 50; # 5 seconds
+for ($waited = 0; $waited < $wait_limit && scalar(keys $info->%*); $waited++) {
+   for my $mount_point (keys $info->%*) {
+   delete($info->{$mount_point}) if !-e $info->{$mount_point}->{'virtual-file'};
+   eval { remove_tree($mount_point); };
+   }
+   usleep(100_000);
+}
+# just informational, remaining child processes will be killed afterwards
+$self->loginfo("unable to gracefully cleanup NBD fuse mounts") if 
scalar(keys $info->%*) != 0;
+}
+
+my s

[pve-devel] [PATCH container v5 27/32] external restore: don't use 'one-file-system' tar flag when restoring from a directory

2025-03-21 Thread Fiona Ebner
This gives backup providers more freedom, e.g. mount backed-up mount
point volumes individually.

Suggested-by: Fabian Grünbichler 
Signed-off-by: Fiona Ebner 
---
 src/PVE/LXC/Create.pm | 6 +-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/src/PVE/LXC/Create.pm b/src/PVE/LXC/Create.pm
index 2b3b10e..8c54d6b 100644
--- a/src/PVE/LXC/Create.pm
+++ b/src/PVE/LXC/Create.pm
@@ -167,11 +167,15 @@ sub restore_external_archive {
or die "did not get path to archive directory from backup 
provider\n";
die "not a directory '$directory'" if !-d $directory;
 
+   # Give backup provider more freedom, e.g. mount backed-up mount point volumes
+   # individually.
+   my @flags = grep { $_ ne '--one-file-system' } @PVE::Storage::Plugin::COMMON_TAR_FLAGS;
+
my $create_cmd = [
'tar',
'cpf',
'-',
-   @PVE::Storage::Plugin::COMMON_TAR_FLAGS,
+   @flags,
"--directory=$directory",
'.',
];
-- 
2.39.5





[pve-devel] [PATCH container v5 26/32] backup: implement restore for external providers

2025-03-21 Thread Fiona Ebner
First, the provider is asked which restore mechanism to use.
Currently, 'directory' and 'tar' are possible. The 'directory'
mechanism restores from a directory containing the container's full
filesystem structure, by piping from a privileged tar cf - to tar xf -
in the associated user namespace. The 'tar' mechanism restores from a
(potentially compressed) tar file containing the container's full
filesystem structure.

The new functions are copied and adapted from the existing ones for
PBS or tar, and it might be worth factoring out the common parts.

Restoring containers as privileged is prohibited, because archives
from an external provider are considered less trusted than those from
Proxmox VE storages. If this is ever allowed in the future, it would
at least be worth extracting the tar archive in a restricted context
(e.g. a user namespace with ID-mapped mounts or seccomp).

Signed-off-by: Fiona Ebner 
---

Changes in v5:
* fixup restore when there is a bwlimit - nested array ref handling
  for command

 src/PVE/LXC/Create.pm | 149 ++
 1 file changed, 149 insertions(+)

diff --git a/src/PVE/LXC/Create.pm b/src/PVE/LXC/Create.pm
index 8c8cb9a..2b3b10e 100644
--- a/src/PVE/LXC/Create.pm
+++ b/src/PVE/LXC/Create.pm
@@ -7,6 +7,7 @@ use File::Path;
 use Fcntl;
 
 use PVE::RPCEnvironment;
+use PVE::RESTEnvironment qw(log_warn);
 use PVE::Storage::PBSPlugin;
 use PVE::Storage::Plugin;
 use PVE::Storage;
@@ -26,6 +27,24 @@ sub restore_archive {
if ($scfg->{type} eq 'pbs') {
return restore_proxmox_backup_archive($storage_cfg, $archive, $rootdir, $conf, $no_unpack_error, $bwlimit);
}
+if (PVE::Storage::storage_has_feature($storage_cfg, $storeid, 'backup-provider')) {
+   my $log_function = sub {
+   my ($log_level, $message) = @_;
+   my $prefix = $log_level eq 'err' ? 'ERROR' : uc($log_level);
+   print "$prefix: $message\n";
+   };
+   my $backup_provider =
+PVE::Storage::new_backup_provider($storage_cfg, $storeid, $log_function);
+   return restore_external_archive(
+   $backup_provider,
+   $storeid,
+   $volname,
+   $rootdir,
+   $conf,
+   $no_unpack_error,
+   $bwlimit,
+   );
+   }
 }
 
$archive = PVE::Storage::abs_filesystem_path($storage_cfg, $archive) if $archive ne '-';
@@ -127,6 +146,62 @@ sub restore_tar_archive {
 die $err if $err && !$no_unpack_error;
 }
 
+sub restore_external_archive {
+my ($backup_provider, $storeid, $volname, $rootdir, $conf, $no_unpack_error, $bwlimit) = @_;
+
+die "refusing to restore privileged container backup from external 
source\n"
+   if !$conf->{unprivileged};
+
+my ($mechanism, $vmtype) = $backup_provider->restore_get_mechanism($volname, $storeid);
+die "cannot restore non-LXC guest of type '$vmtype'\n" if $vmtype ne 'lxc';
+
+my $info = $backup_provider->restore_container_init($volname, $storeid, {});
+eval {
+   if ($mechanism eq 'tar') {
+   my $tar_path = $info->{'tar-path'}
+   or die "did not get path to tar file from backup provider\n";
+   die "not a regular file '$tar_path'" if !-f $tar_path;
+restore_tar_archive($tar_path, $rootdir, $conf, $no_unpack_error, $bwlimit);
+   } elsif ($mechanism eq 'directory') {
+   my $directory = $info->{'archive-directory'}
+   or die "did not get path to archive directory from backup 
provider\n";
+   die "not a directory '$directory'" if !-d $directory;
+
+   my $create_cmd = [
+   'tar',
+   'cpf',
+   '-',
+   @PVE::Storage::Plugin::COMMON_TAR_FLAGS,
+   "--directory=$directory",
+   '.',
+   ];
+
+my $extract_cmd = restore_tar_archive_command($conf, undef, $rootdir, $bwlimit);
+
+   my $cmd;
+# if there is a bandwidth limit, the command is already a nested array reference
+if (ref($extract_cmd) eq 'ARRAY' && ref($extract_cmd->[0]) eq 'ARRAY') {
+   $cmd = [$create_cmd, $extract_cmd->@*];
+   } else {
+   $cmd = [$create_cmd, $extract_cmd];
+   }
+
+   eval { PVE::Tools::run_command($cmd); };
+   die $@ if $@ && !$no_unpack_error;
+   } else {
+   die "mechanism '$mechanism' requested by backup provider is not 
supported for LXCs\n";
+   }
+};
+my $err = $@;
+eval { $backup_provider->restore_container_cleanup($volname, $storeid, {}); };
+if (my $cleanup_err = $@) {
+   die $cleanup_err if !$err;
+   warn $cleanup_err;
+}
+die $err if $err;
+
+}
+
 sub recover_config {
 my ($storage_cfg, $volid, $vmid) = @_;
 
@@ -135,6 +210,8 @@ sub recover_config {
my $scfg = PVE::Storage::st

[pve-devel] [PATCH storage v5 10/32] config api/plugins: let plugins define sensitive properties themselves

2025-03-21 Thread Fiona Ebner
Hard-coding a list of sensitive properties means that custom plugins
cannot define their own sensitive properties for the on_add/on_update
hooks.

Have plugins declare the list of their sensitive properties in the
plugin data. For backwards compatibility, return the previously
hard-coded list if no such declaration is present.
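
A sketch of the backwards-compatible lookup described above (the
function name follows this patch; the details are illustrative):

sub sensitive_properties {
    my ($type) = @_;

    my $plugindata = PVE::Storage::Plugin->lookup($type)->plugindata();

    if (my $declared = $plugindata->{'sensitive-properties'}) {
        return [sort keys $declared->%*];
    }

    # fall back to the previously hard-coded list for plugins that do
    # not declare their sensitive properties yet
    return [qw(encryption-key keyring master-pubkey password)];
}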

Signed-off-by: Fiona Ebner 
---

New in v5.

 src/PVE/API2/Storage/Config.pm   |  4 ++--
 src/PVE/Storage/BTRFSPlugin.pm   |  1 +
 src/PVE/Storage/CIFSPlugin.pm|  1 +
 src/PVE/Storage/CephFSPlugin.pm  |  1 +
 src/PVE/Storage/DirPlugin.pm |  1 +
 src/PVE/Storage/ESXiPlugin.pm|  1 +
 src/PVE/Storage/GlusterfsPlugin.pm   |  1 +
 src/PVE/Storage/ISCSIDirectPlugin.pm |  1 +
 src/PVE/Storage/ISCSIPlugin.pm   |  1 +
 src/PVE/Storage/LVMPlugin.pm |  1 +
 src/PVE/Storage/LvmThinPlugin.pm |  1 +
 src/PVE/Storage/NFSPlugin.pm |  1 +
 src/PVE/Storage/PBSPlugin.pm |  5 +
 src/PVE/Storage/Plugin.pm| 12 
 src/PVE/Storage/RBDPlugin.pm |  1 +
 src/PVE/Storage/ZFSPlugin.pm |  1 +
 src/PVE/Storage/ZFSPoolPlugin.pm |  1 +
 17 files changed, 33 insertions(+), 2 deletions(-)

diff --git a/src/PVE/API2/Storage/Config.pm b/src/PVE/API2/Storage/Config.pm
index e04b6ab..7facc62 100755
--- a/src/PVE/API2/Storage/Config.pm
+++ b/src/PVE/API2/Storage/Config.pm
@@ -190,8 +190,6 @@ __PACKAGE__->register_method ({
return &$api_storage_config($cfg, $param->{storage});
 }});
 
-my $sensitive_params = [qw(password encryption-key master-pubkey keyring)];
-
 __PACKAGE__->register_method ({
 name => 'create',
 protected => 1,
@@ -239,6 +237,7 @@ __PACKAGE__->register_method ({
# fix me in section config create never need an empty entity.
delete $param->{nodes} if !$param->{nodes};
 
+   my $sensitive_params = PVE::Storage::Plugin::sensitive_properties($type);
my $sensitive = extract_sensitive_params($param, $sensitive_params, []);
 
my $plugin = PVE::Storage::Plugin->lookup($type);
@@ -344,6 +343,7 @@ __PACKAGE__->register_method ({
my $scfg = PVE::Storage::storage_config($cfg, $storeid);
$type = $scfg->{type};
 
+   my $sensitive_params = PVE::Storage::Plugin::sensitive_properties($type);
	my $sensitive = extract_sensitive_params($param, $sensitive_params, $delete);
 
my $plugin = PVE::Storage::Plugin->lookup($type);
diff --git a/src/PVE/Storage/BTRFSPlugin.pm b/src/PVE/Storage/BTRFSPlugin.pm
index 1966b6f..5ed910d 100644
--- a/src/PVE/Storage/BTRFSPlugin.pm
+++ b/src/PVE/Storage/BTRFSPlugin.pm
@@ -45,6 +45,7 @@ sub plugindata {
{ images => 1, rootdir => 1 },
],
format => [ { raw => 1, subvol => 1 }, 'raw', ],
+   'sensitive-properties' => {},
 };
 }
 
diff --git a/src/PVE/Storage/CIFSPlugin.pm b/src/PVE/Storage/CIFSPlugin.pm
index 475065a..f47861e 100644
--- a/src/PVE/Storage/CIFSPlugin.pm
+++ b/src/PVE/Storage/CIFSPlugin.pm
@@ -101,6 +101,7 @@ sub plugindata {
content => [ { images => 1, rootdir => 1, vztmpl => 1, iso => 1,
   backup => 1, snippets => 1, import => 1}, { images => 1 }],
format => [ { raw => 1, qcow2 => 1, vmdk => 1 } , 'raw' ],
+   'sensitive-properties' => { password => 1 },
 };
 }
 
diff --git a/src/PVE/Storage/CephFSPlugin.pm b/src/PVE/Storage/CephFSPlugin.pm
index 36c64ea..73edecb 100644
--- a/src/PVE/Storage/CephFSPlugin.pm
+++ b/src/PVE/Storage/CephFSPlugin.pm
@@ -118,6 +118,7 @@ sub plugindata {
 return {
	content => [ { vztmpl => 1, iso => 1, backup => 1, snippets => 1, import => 1 },
		     { backup => 1 }],
+   'sensitive-properties' => { keyring => 1 },
 };
 }
 
diff --git a/src/PVE/Storage/DirPlugin.pm b/src/PVE/Storage/DirPlugin.pm
index fb23e0a..532701b 100644
--- a/src/PVE/Storage/DirPlugin.pm
+++ b/src/PVE/Storage/DirPlugin.pm
@@ -26,6 +26,7 @@ sub plugindata {
	content => [ { images => 1, rootdir => 1, vztmpl => 1, iso => 1, backup => 1, snippets => 1, none => 1, import => 1 },
		     { images => 1,  rootdir => 1 }],
format => [ { raw => 1, qcow2 => 1, vmdk => 1, subvol => 1 } , 'raw' ],
+   'sensitive-properties' => {},
 };
 }
 
diff --git a/src/PVE/Storage/ESXiPlugin.pm b/src/PVE/Storage/ESXiPlugin.pm
index c8412c4..6131c51 100644
--- a/src/PVE/Storage/ESXiPlugin.pm
+++ b/src/PVE/Storage/ESXiPlugin.pm
@@ -31,6 +31,7 @@ sub plugindata {
 return {
content => [ { import => 1 }, { import => 1 }],
format => [ { raw => 1, qcow2 => 1, vmdk => 1 } , 'raw' ],
+   'sensitive-properties' => { password => 1 },
 };
 }
 
diff --git a/src/PVE/Storage/GlusterfsPlugin.pm b/src/PVE/Storage/GlusterfsPlugin.pm
index 9d17180..18493cb 100644
--- a/src/PVE/Storage/GlusterfsPlugin.pm
+++ b/src/PVE/Storage/GlusterfsPlugin.pm
@@ -100,6 +100,7 @@ sub plugindata {
content => [ { images => 1, vztmpl => 1

[pve-devel] [PATCH container v5 25/32] backup: implement backup for external providers

2025-03-21 Thread Fiona Ebner
The filesystem structure is made available as a directory in a
consistent manner (with details depending on the vzdump backup mode)
just like for regular backup via tar.

The backup_container() method of the backup provider is executed in
a user namespace with the container's ID mapping applied. This allows
the backup provider to see the container's filesystem from the
container's perspective.

The 'prepare' phase of the backup hook is executed right before and
allows the backup provider to prepare for the (usually) unprivileged
execution context in the user namespace.

The backup provider needs to back up the guest and firewall
configuration and the filesystem structure of the container, honoring
file exclusions and the bandwidth limit.
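
A sketch of a matching backup_container() implementation (the
signature is taken from the call site in the diff below; the tar
invocation, the exclude-pattern format and the $self->{target_fh}
destination are illustrative assumptions):

sub backup_container {
    my ($self, $vmid, $guest_config, $exclude_patterns, $info) = @_;

    # runs inside the container's user namespace, so $info->{directory}
    # is seen from the container's perspective
    my $cmd = ['tar', 'cpf', '-', "--directory=$info->{directory}"];
    push $cmd->@*, map { "--exclude=.$_" } $exclude_patterns->@*;
    push $cmd->@*, '.';

    # stream the archive to the backup target, honoring
    # $info->{'bandwidth-limit'} (e.g. by piping through cstream)
    PVE::Tools::run_command($cmd, output => '>&' . fileno($self->{target_fh}));
}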

Signed-off-by: Fiona Ebner 
---
 src/PVE/VZDump/LXC.pm | 40 +++-
 1 file changed, 39 insertions(+), 1 deletion(-)

diff --git a/src/PVE/VZDump/LXC.pm b/src/PVE/VZDump/LXC.pm
index 9029387..bd32f01 100644
--- a/src/PVE/VZDump/LXC.pm
+++ b/src/PVE/VZDump/LXC.pm
@@ -11,6 +11,7 @@ use PVE::Cluster qw(cfs_read_file);
 use PVE::GuestHelpers;
 use PVE::INotify;
 use PVE::LXC::Config;
+use PVE::LXC::Namespaces;
 use PVE::LXC;
 use PVE::Storage;
 use PVE::Tools;
@@ -124,6 +125,7 @@ sub prepare {
 
 my ($id_map, $root_uid, $root_gid) = PVE::LXC::parse_id_maps($conf);
 $task->{userns_cmd} = PVE::LXC::userns_command($id_map);
+$task->{id_map} = $id_map;
 $task->{root_uid} = $root_uid;
 $task->{root_gid} = $root_gid;
 
@@ -373,7 +375,43 @@ sub archive {
 my $userns_cmd = $task->{userns_cmd};
 my $findexcl = $self->{vzdump}->{findexcl};
 
-if ($self->{vzdump}->{opts}->{pbs}) {
+if (my $backup_provider = $self->{vzdump}->{'backup-provider'}) {
+   $self->loginfo("starting external backup via " . 
$backup_provider->provider_name());
+
+if (!scalar($task->{id_map}->@*) || $task->{root_uid} == 0 || $task->{root_gid} == 0) {
+   $self->log("warn", "external backup of privileged container can 
only be restored as"
+   ." unprivileged which might not work in all cases");
+   }
+
+   my ($mechanism) = $backup_provider->backup_get_mechanism($vmid, 'lxc');
+   die "mechanism '$mechanism' requested by backup provider is not 
supported for containers\n"
+   if $mechanism ne 'directory';
+
+   $self->loginfo("using backup mechanism '$mechanism'");
+
+my $guest_config = PVE::Tools::file_get_contents("$tmpdir/etc/vzdump/pct.conf");
+   my $firewall_file = "$tmpdir/etc/vzdump/pct.fw";
+
+   my $info = {
+   directory => $snapdir,
+   sources => [@sources],
+   'backup-user-id' => $task->{root_uid},
+   };
+$info->{'firewall-config'} = PVE::Tools::file_get_contents($firewall_file)
+   if -e $firewall_file;
+$info->{'bandwidth-limit'} = $opts->{bwlimit} * 1024 if $opts->{bwlimit};
+
+   $backup_provider->backup_hook('prepare', $vmid, 'lxc', $info);
+
+   if (scalar($task->{id_map}->@*)) {
+   PVE::LXC::Namespaces::run_in_userns(
+sub { $backup_provider->backup_container($vmid, $guest_config, $findexcl, $info); },
+   $task->{id_map},
+   );
+   } else {
+$backup_provider->backup_container($vmid, $guest_config, $findexcl, $info);
+   }
+} elsif ($self->{vzdump}->{opts}->{pbs}) {
 
my $param = [];
push @$param, "pct.conf:$tmpdir/etc/vzdump/pct.conf";
-- 
2.39.5






[pve-devel] [PATCH] fix #4499: prevent superblock read errors for containers on CIFS storage

2025-03-21 Thread Hugo via pve-devel
Signed-off-by: Hugo 
---
 src/PVE/Storage/CIFSPlugin.pm | 32 +---
 1 file changed, 25 insertions(+), 7 deletions(-)

diff --git a/src/PVE/Storage/CIFSPlugin.pm b/src/PVE/Storage/CIFSPlugin.pm
index 475065a..9a8a02c 100644
--- a/src/PVE/Storage/CIFSPlugin.pm
+++ b/src/PVE/Storage/CIFSPlugin.pm
@@ -78,13 +78,17 @@ sub cifs_mount : prototype($) {
my $cmd = ['/bin/mount', '-t', 'cifs', $source, $mountpoint, '-o', 'soft', '-o'];
 
 if (my $cred_file = get_cred_file($storeid)) {
-   push @$cmd, "username=$user", '-o', "credentials=$cred_file";
-   push @$cmd, '-o', "domain=$domain" if defined($domain);
+push @$cmd, "username=$user", '-o', "credentials=$cred_file";
+push @$cmd, '-o', "domain=$domain" if defined($domain);
 } else {
-   push @$cmd, 'guest,username=guest';
+push @$cmd, 'guest,username=guest';
 }
 
-push @$cmd, '-o', defined($smbver) ? "vers=$smbver" : "vers=default";
+push @$cmd, '-o', defined($smbver) ? "vers=$smbver" : "vers=3.0";
+
+# Fix inode issue
+push @$cmd, '-o', 'noserverino,cache=none,actimeo=0';
+
 push @$cmd, '-o', $options if $options;
 
 run_command($cmd, errmsg => "mount error");
@@ -248,13 +252,27 @@ sub deactivate_storage {
 my ($class, $storeid, $scfg, $cache) = @_;
 
 $cache->{mountdata} = PVE::ProcFSTools::parse_proc_mounts()
-   if !$cache->{mountdata};
+if !$cache->{mountdata};
 
 my $path = $scfg->{path};
 
 if (cifs_is_mounted($scfg, $cache->{mountdata})) {
-   my $cmd = ['/bin/umount', $path];
-   run_command($cmd, errmsg => 'umount error');
+system('/bin/sync');
+
+my $output = `lsof +D $path 2>/dev/null`;
+if ($output) {
+warn "Warning: Processes still using CIFS mount at $path. Trying 
lazy unmount...\n";
+system('/bin/umount', '-l', $path);
+} else {
+system('/bin/umount', $path);
+}
+
+
+sleep 1; # Give time for unmount
+if (cifs_is_mounted($scfg, PVE::ProcFSTools::parse_proc_mounts())) {
+warn "Unmount failed, forcing unmount...\n";
+system('/bin/umount', '-f', $path);
+}
 }
 }
 
-- 
2.48.1




[pve-devel] applied: [PATCH proxmox-i18n] updated Japanese language ja.po

2025-03-21 Thread Thomas Lamprecht
On 21.03.25 at 00:28, rib...@skrbn.sakura.ne.jp wrote:
> 
> --- ja.po.org 2025-03-20 22:22:38.644618693 +0900
> +++ ja.po 2025-03-20 23:21:15.603578886 +0900

applied, thanks!





Re: [pve-devel] [PATCH proxmox-i18n v2 2/2] make: add proxmox-datacenter-manager translations

2025-03-21 Thread Maximiliano Sandoval


Thomas Lamprecht  writes:

> btw. instead of pinging a series unconditionally a better approach might
> be to use that as an opportunity to self-test the whole series, i.e. apply
> them locally and see if all works out; that would have made most issues of
> this series visible.
> If that succeeded, then asking someone directly for an end-to-end test
> and/or review before sending the second ping or so out would also be more
> productive IMO, as a tested-by gives maintainers at least some basic
> assurance that the whole thing works in principle...
> this fails with errors like
>
> proxmox-yew-comp.pot:185: duplicate message definition...
> proxmox-widget-toolkit.pot:2081: ...this is the location of the first 
> definition
>
> Which is a result from the former file having this combined:
>
> msgid "Warning"
> msgid_plural "Warnings"
> msgstr[0] ""
> msgstr[1] ""
>
> While the latter has it split:
>
> msgid "Warning"
> msgstr ""
>
> msgid "Warnings"
> msgstr ""
>
> There are others if this is fixed. I see no reference to other patches or steps
> we need to apply/take before this series, so what's the idea here?

The requirements are:

- https://lore.proxmox.com/pdm-devel/20250122150132.426276-1-m.sando...@proxmox.com/
- https://lore.proxmox.com/yew-devel/20250127094651.139204-1-m.sando...@proxmox.com/





Re: [pve-devel] Storage plugin questions

2025-03-21 Thread Thomas Lamprecht
On 21.03.25 at 15:45, Fiona Ebner wrote:
> On 18.03.25 at 11:32, Max Schettler via pve-devel wrote:
>> - is it possible to integrate with the webinterface, to allow creation of a 
>> custom storage provider from there, instead of the CLI?
> Not yet and we can't give a timeline/promises, but others requested this
> as well, see:
> https://lore.proxmox.com/pve-devel/d8e8ozgx1hi7.qh6mqs4gl...@proxmox.com/

FYI: this is an even longer-standing request [0] and the idea is to add
something similar to what we have for ACME DNS-challenge plugins.

[0]: https://bugzilla.proxmox.com/show_bug.cgi?id=3420





Re: [pve-devel] [PATCH storage v5] fix #957: iscsi: improve iscsi_test_portal logic

2025-03-21 Thread Friedrich Weber
Patch looks good to me too, ran a few tests and didn't see anything
unexpected, hence:

Tested-by: Friedrich Weber 
Reviewed-by: Friedrich Weber 

Thank you!





[pve-devel] [PATCH qemu v5 04/32] PVE backup: implement bitmap support for external backup access

2025-03-21 Thread Fiona Ebner
There can be one dirty bitmap for each backup target ID (these are
tracked in the backup_access_bitmaps hash table). The QMP user can
specify the ID of the bitmap it would like to use. This ID is then
compared to the current one for the given target. If they match, the
bitmap is re-used (should it still exist on the drive, otherwise it is
re-created). If there is a mismatch, the old bitmap is removed and a
new one is created.

The return value of the QMP command includes information about what
bitmap action was taken, similar to what the query-backup QMP command
returns for regular backup. It also includes the bitmap name and the
associated block node, so the management layer can then set up an NBD
export with the bitmap.

While the backup access is active, a background bitmap is also
required. This is necessary to implement bitmap handling according to
the original reference [0]. In particular:

- in the error case, new writes since the backup access was set up are
  in the background bitmap. Because of failure, the previously tracked
  writes from the backup access bitmap are still required too. Thus,
  the bitmap is merged with the background bitmap to get all new
  writes since the last backup.

- in the success case, continue tracking for the next incremental
  backup in the backup access bitmap. New writes since the backup
  access was set up are in the background bitmap. Because the backup
  was successful, clear the backup access bitmap and merge back the
  background bitmap to get only the new writes.

Since QEMU cannot know if the backup was successful or not (except if
failure already happens during the setup QMP command), the management
layer needs to tell it via the teardown QMP command.

The bitmap action is also recorded in the device info now.

[0]: https://lore.kernel.org/qemu-devel/b68833dd-8864-4d72-7c61-c134a9835...@ya.ru/

Signed-off-by: Fiona Ebner 
---

Changes in v5:
* If bitmap with the expected, previously used name is missing, do not
  return PBS_BITMAP_ACTION_INVALID, but simply PBS_BITMAP_ACTION_NEW.
  This more closely aligns the handling with what is done for PBS. A
  new status should be used for this, see the next patch.

 pve-backup.c | 175 ++-
 pve-backup.h |   2 +-
 qapi/block-core.json |  22 +-
 system/runstate.c|   2 +-
 4 files changed, 193 insertions(+), 8 deletions(-)

diff --git a/pve-backup.c b/pve-backup.c
index 8f0f9921ab..bcbb9eeb5b 100644
--- a/pve-backup.c
+++ b/pve-backup.c
@@ -15,6 +15,7 @@
 #include "qapi/qmp/qdict.h"
 #include "qapi/qmp/qerror.h"
 #include "qemu/cutils.h"
+#include "qemu/error-report.h"
 
 #if defined(CONFIG_MALLOC_TRIM)
 #include 
@@ -41,6 +42,7 @@
  */
 
 const char *PBS_BITMAP_NAME = "pbs-incremental-dirty-bitmap";
+const char *BACKGROUND_BITMAP_NAME = "backup-access-background-bitmap";
 
 static struct PVEBackupState {
 struct {
@@ -72,6 +74,7 @@ static struct PVEBackupState {
 CoMutex backup_mutex;
 CoMutex dump_callback_mutex;
 char *target_id;
+GHashTable *backup_access_bitmaps; // key=target_id, value=bitmap_name
 } backup_state;
 
 static void pvebackup_init(void)
@@ -99,6 +102,8 @@ typedef struct PVEBackupDevInfo {
 char* device_name;
 int completed_ret; // INT_MAX if not completed
 BdrvDirtyBitmap *bitmap;
+BdrvDirtyBitmap *background_bitmap; // used for external backup access
+PBSBitmapAction bitmap_action;
 BlockDriverState *target;
 BlockJob *job;
 } PVEBackupDevInfo;
@@ -362,6 +367,67 @@ static void coroutine_fn pvebackup_co_complete_stream(void *opaque)
 qemu_co_mutex_unlock(&backup_state.backup_mutex);
 }
 
+/*
+ * New writes since the backup access was set up are in the background bitmap. Because of failure,
+ * the previously tracked writes in di->bitmap are still required too. Thus, merge with the
+ * background bitmap to get all new writes since the last backup.
+ */
+static void handle_backup_access_bitmaps_in_error_case(PVEBackupDevInfo *di)
+{
+Error *local_err = NULL;
+
+if (di->bs && di->background_bitmap) {
+bdrv_drained_begin(di->bs);
+if (di->bitmap) {
+bdrv_enable_dirty_bitmap(di->bitmap);
+if (!bdrv_merge_dirty_bitmap(di->bitmap, di->background_bitmap, NULL, &local_err)) {
+warn_report("backup access: %s - could not merge bitmaps in 
error path - %s",
+di->device_name,
+local_err ? error_get_pretty(local_err) : "unknown 
error");
+/*
+ * Could not merge, drop original bitmap too.
+ */
+bdrv_release_dirty_bitmap(di->bitmap);
+}
+} else {
+warn_report("backup access: %s - expected bitmap not present", 
di->device_name);
+}
+bdrv_release_dirty_bitmap(di->background_bitmap);
+bdrv_drained_end(di->bs);
+}
+}
+
+/*
+ * Continue tracking for next incremental backup in d

[pve-devel] [PATCH common v5 06/32] syscall: expose fallocate syscall

2025-03-21 Thread Fiona Ebner
Signed-off-by: Fiona Ebner 
---

New in v5.
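
A minimal usage sketch (preallocating space for an image file is an
assumed use case here; mode 0 means default allocation):

use PVE::Syscall;

sub fallocate_file {
    my ($path, $size) = @_;

    open(my $fh, '>', $path) or die "unable to open '$path' - $!\n";
    # fallocate(fd, mode, offset, length) returns 0 on success
    syscall(PVE::Syscall::fallocate, fileno($fh), 0, 0, int($size)) == 0
        or die "fallocate failed - $!\n";
    close($fh) or die "unable to close '$path' - $!\n";
}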

 src/PVE/Syscall.pm | 1 +
 1 file changed, 1 insertion(+)

diff --git a/src/PVE/Syscall.pm b/src/PVE/Syscall.pm
index 9ef3d5d..f3193a3 100644
--- a/src/PVE/Syscall.pm
+++ b/src/PVE/Syscall.pm
@@ -19,6 +19,7 @@ BEGIN {
mknod => &SYS_mknod,
faccessat => &SYS_faccessat,
setresuid => &SYS_setresuid,
+   fallocate => &SYS_fallocate,
fchownat => &SYS_fchownat,
mount => &SYS_mount,
renameat2 => &SYS_renameat2,
-- 
2.39.5






[pve-devel] [PATCH proxmox 1/2] notify: webhook: gotify: set Content-Length header

2025-03-21 Thread Lukas Wagner
To quote from RFC 9110 [1]:

  A user agent SHOULD send Content-Length in a request when
  the method defines a meaning for enclosed content and it
  is not sending Transfer-Encoding. For example, a user agent
  normally sends Content-Length in a POST request even when
  the value is 0 (indicating empty content).
  A user agent SHOULD NOT send a Content-Length header field
  when the request message does not contain content and the
  method semantics do not anticipate such data.

Our HTTP client library did not seem to set the header automatically,
which is why we should do it manually.

While most services seem to have worked fine without the header, some
Microsoft services appear to require it to accept the webhook request
[2].

[1] https://datatracker.ietf.org/doc/html/rfc9110#name-content-length
[2] https://forum.proxmox.com/threads/158827

Signed-off-by: Lukas Wagner 
---
 proxmox-notify/src/endpoints/gotify.rs  |  4 
 proxmox-notify/src/endpoints/webhook.rs | 19 ++-
 2 files changed, 22 insertions(+), 1 deletion(-)

diff --git a/proxmox-notify/src/endpoints/gotify.rs b/proxmox-notify/src/endpoints/gotify.rs
index 3e977131..e154daab 100644
--- a/proxmox-notify/src/endpoints/gotify.rs
+++ b/proxmox-notify/src/endpoints/gotify.rs
@@ -136,6 +136,10 @@ impl Endpoint for GotifyEndpoint {
 format!("Bearer {}", self.private_config.token),
 ),
 ("X-Gotify-Key".into(), self.private_config.token.clone()),
+(
+http::header::CONTENT_LENGTH.to_string(),
+body.len().to_string(),
+),
 ]);
 
 let proxy_config = context()
diff --git a/proxmox-notify/src/endpoints/webhook.rs b/proxmox-notify/src/endpoints/webhook.rs
index 34dbac54..604777c7 100644
--- a/proxmox-notify/src/endpoints/webhook.rs
+++ b/proxmox-notify/src/endpoints/webhook.rs
@@ -35,7 +35,7 @@ pub(crate) const WEBHOOK_TYPENAME: &str = "webhook";
 const HTTP_TIMEOUT: Duration = Duration::from_secs(10);
 
 #[api]
-#[derive(Serialize, Deserialize, Clone, Copy, Default)]
+#[derive(Serialize, Deserialize, Clone, Copy, Default, PartialEq)]
 #[serde(rename_all = "kebab-case")]
 /// HTTP Method to use.
 pub enum HttpMethod {
@@ -347,6 +347,23 @@ impl WebhookEndpoint {
 builder = builder.header(header.name.clone(), value);
 }
 
+// From https://datatracker.ietf.org/doc/html/rfc9110#name-content-length :
+//
+// A user agent SHOULD send Content-Length in a request when the method
+// defines a meaning for enclosed content and it is not sending
+// Transfer-Encoding. For example, a user agent normally sends
+// Content-Length in a POST request even when the value is 0 (indicating
+// empty content). A user agent SHOULD NOT send a Content-Length header
+// field when the request message does not contain content and the
+// method semantics do not anticipate such data.
+//
+// -> send the header always, unless we do a get with no body (which is the expected case
+// for GET)
+let content_length = body.as_bytes().len();
+if !(self.config.method == HttpMethod::Get && content_length == 0) {
+builder = builder.header(http::header::CONTENT_LENGTH, content_length);
+}
+
 let request = builder
 .body(body)
 .map_err(|err| self.mask_secret_in_error(err))
-- 
2.39.5


