[pve-devel] [PATCH pve-kernel] fix #5558: cherry-pick NFSv4 fix

2024-07-11 Thread Fabian Grünbichler
picked from v6.9.8; the bug can cause lost NFS connections according to
upstream, and possibly corrupted backups according to our user report.

Signed-off-by: Fabian Grünbichler 
---
numbered after Fiona's two cherry-picks already on the list, assuming those
will all be applied in one go ;)

 ...0-SUNRPC-Fix-backchannel-reply-again.patch | 58 +++
 1 file changed, 58 insertions(+)
 create mode 100644 patches/kernel/0020-SUNRPC-Fix-backchannel-reply-again.patch

diff --git a/patches/kernel/0020-SUNRPC-Fix-backchannel-reply-again.patch b/patches/kernel/0020-SUNRPC-Fix-backchannel-reply-again.patch
new file mode 100644
index 000..7fe2703
--- /dev/null
+++ b/patches/kernel/0020-SUNRPC-Fix-backchannel-reply-again.patch
@@ -0,0 +1,58 @@
+From  Mon Sep 17 00:00:00 2001
+From: Chuck Lever 
+Date: Wed, 19 Jun 2024 09:51:08 -0400
+Subject: [PATCH] SUNRPC: Fix backchannel reply, again
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+[ Upstream commit 6ddc9deacc1312762c2edd9de00ce76b00f69f7c ]
+
+I still see "RPC: Could not send backchannel reply error: -110"
+quite often, along with slow-running tests. Debugging shows that the
+backchannel is still stumbling when it has to queue a callback reply
+on a busy transport.
+
+Note that every one of these timeouts causes a connection loss by
+virtue of the xprt_conditional_disconnect() call in that arm of
+call_cb_transmit_status().
+
+I found that setting to_maxval is necessary to get the RPC timeout
+logic to behave whenever to_exponential is not set.
+
+Fixes: 57331a59ac0d ("NFSv4.1: Use the nfs_client's rpc timeouts for backchannel")
+Signed-off-by: Chuck Lever 
+Reviewed-by: Benjamin Coddington 
+Signed-off-by: Trond Myklebust 
+Signed-off-by: Sasha Levin 
+(cherry picked from commit bd1e42e0f2567c911d3df761cf7a33b021fdceeb)
+Signed-off-by: Fabian Grünbichler 
+---
+ net/sunrpc/svc.c | 5 -
+ 1 file changed, 4 insertions(+), 1 deletion(-)
+
+diff --git a/net/sunrpc/svc.c b/net/sunrpc/svc.c
+index b969e505c7b7..5fc974dda811 100644
+--- a/net/sunrpc/svc.c
++++ b/net/sunrpc/svc.c
+@@ -1548,9 +1548,11 @@ void svc_process(struct svc_rqst *rqstp)
+  */
+ void svc_process_bc(struct rpc_rqst *req, struct svc_rqst *rqstp)
+ {
++  struct rpc_timeout timeout = {
++  .to_increment   = 0,
++  };
+   struct rpc_task *task;
+   int proc_error;
+-  struct rpc_timeout timeout;
+ 
+   /* Build the svc_rqst used by the common processing routine */
+   rqstp->rq_xid = req->rq_xid;
+@@ -1603,6 +1605,7 @@ void svc_process_bc(struct rpc_rqst *req, struct svc_rqst *rqstp)
+   timeout.to_initval = req->rq_xprt->timeout->to_initval;
+   timeout.to_retries = req->rq_xprt->timeout->to_retries;
+   }
++  timeout.to_maxval = timeout.to_initval;
+   memcpy(&req->rq_snd_buf, &rqstp->rq_res, sizeof(req->rq_snd_buf));
+   task = rpc_run_bc_task(req, &timeout);
+ 
-- 
2.39.2





Re: [pve-devel] [PATCH v2 pve-docs] vzdump: add section describing PBS change detection mode

2024-07-11 Thread Aaron Lauterer
could we have a short paragraph on where to set these options? Similar 
to how it is done in the fleecing section?


a few very nitpicky comments inline:

On  2024-07-10  16:41, Christian Ebner wrote:

Add a concise section about what PBS change detection mode is and
what it affects, including a table with a description of the modes.

Signed-off-by: Christian Ebner 
---
  vzdump.adoc | 24 
  1 file changed, 24 insertions(+)

diff --git a/vzdump.adoc b/vzdump.adoc
index 9b1cb61..4607e40 100644
--- a/vzdump.adoc
+++ b/vzdump.adoc
@@ -175,6 +175,30 @@ fleecing image up-front. On a thinly provisioned storage, 
the fleecing image can
  grow to the same size as the original image only if the guest re-writes a 
whole
  disk while the backup is busy with another disk.
  
+CT Change Detection Mode

+
+
+Setting the change detection mode defines the encoding format for the pxar
+archives and how changed and unchanged files are handled for container backups


s/how changed and unchanged/how changed, and unchanged/

comma before the and


+with Proxmox Backup Server as target.

s/as target/as the target/


+
+Backups of VMs or to other storage backends are not affected by this setting.
+
+There are 3 change detection modes available:
+
+[width="100%",cols="

s/use metadata/use the metadata/
s/of previous/of the previous/


+unchanged files, and reuse their data chunks without reading file contents from
+disk, whenever possible.
+|===
+
  Backup File Names
  -
  






Re: [pve-devel] [PATCH v2 pve-docs] vzdump: add section describing PBS change detection mode

2024-07-11 Thread Christian Ebner

On 7/11/24 10:24, Aaron Lauterer wrote:
could we have a short paragraph on where to set these options? Similar 
to how it is done in the fleecing section?


Sure, I agree as it is not obvious where to find the configuration 
option. Will send a v3 incorporating the suggestions.


Thanks!





[pve-devel] [PATCH v3 pve-docs] vzdump: add section describing PBS change detection mode

2024-07-11 Thread Christian Ebner
Add a concise section about what PBS change detection mode is, where
to configure the option and what it affects, including a table with a
description of the modes.

Signed-off-by: Christian Ebner 
---
changes since version 2 (thanks @Aaron for the suggestions!):
- Incorporated suggested paragraph describing how to set the options
- Added missing `the` and comma
- Moved sentence of VMs and unaffected storage target to the end and
  show it as note

 vzdump.adoc | 32 
 1 file changed, 32 insertions(+)

diff --git a/vzdump.adoc b/vzdump.adoc
index 9b1cb61..508ff5d 100644
--- a/vzdump.adoc
+++ b/vzdump.adoc
@@ -175,6 +175,38 @@ fleecing image up-front. On a thinly provisioned storage, 
the fleecing image can
 grow to the same size as the original image only if the guest re-writes a whole
 disk while the backup is busy with another disk.
 
+CT Change Detection Mode
+
+
+Setting the change detection mode defines the encoding format for the pxar
+archives and how changed, and unchanged files are handled for container backups
+with Proxmox Backup Server as the target.
+
+The change detection mode option can be configured for individual backup jobs 
in
+the 'Advanced' tab while editing a job. Further, this option can be set as
+node-wide fallback via the xref:vzdump_configuration[configuration options].
+
+There are 3 change detection modes available:
+
+[width="100%",cols="https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



Re: [pve-devel] [PATCH v2 pve-docs] vzdump: add section describing PBS change detection mode

2024-07-11 Thread Christian Ebner

superseded-by version 3:
https://lists.proxmox.com/pipermail/pve-devel/2024-July/064617.html





[pve-devel] [PATCH installer v2 0/3] add check/rename for already-existing ZFS rpool

2024-07-11 Thread Christoph Heiss
Pretty straightforward overall, implements a check for an existing
`rpool` on the system and asks the user whether they would like to rename
it, much in the same way as it works for VGs already.

Without this, the installer would silently create a second (and thus
conflicting) `rpool` and cause a boot failure after the installation,
since it does not know which pool to import exactly.

v1: https://lists.proxmox.com/pipermail/pve-devel/2024-May/063874.html

Notable changes v1 -> v2:
  * incorporated Aaron's suggestions from v1
  * rewrote tests to use a pre-defined input instead
  * moved pool renaming to own subroutine
  * documented all new subroutines
  * split out tests into own patch

Christoph Heiss (3):
  proxmox: add zfs module for retrieving importable zpool info
  test: add test cases for new zfs module
  low-level: install: check for already-existing `rpool` on install

 Proxmox/Install.pm|  30 +++
 Proxmox/Makefile  |   1 +
 Proxmox/Sys/ZFS.pm| 109 ++
 test/Makefile |   7 ++-
 test/zfs-get-pool-list.pl |  57 
 5 files changed, 203 insertions(+), 1 deletion(-)
 create mode 100644 Proxmox/Sys/ZFS.pm
 create mode 100755 test/zfs-get-pool-list.pl

-- 
2.44.0






[pve-devel] [PATCH installer v2 2/3] test: add test cases for new zfs module

2024-07-11 Thread Christoph Heiss
Signed-off-by: Christoph Heiss 
---
Changes v1 -> v2:
  * new patch, split out from patch #1
  * rewrote tests to use a pre-defined input instead, thus being able
to enable the tests unconditionally

 test/Makefile |  7 -
 test/zfs-get-pool-list.pl | 57 +++
 2 files changed, 63 insertions(+), 1 deletion(-)
 create mode 100755 test/zfs-get-pool-list.pl

diff --git a/test/Makefile b/test/Makefile
index 99bf14e..c473af8 100644
--- a/test/Makefile
+++ b/test/Makefile
@@ -3,12 +3,13 @@ all:
 export PERLLIB=..
 
 .PHONY: check
-check: test-zfs-arc-max test-run-command test-parse-fqdn test-ui2-stdio
+check: test-zfs-arc-max test-run-command test-parse-fqdn test-ui2-stdio 
test-zfs-get-pool-list
 
 .PHONY: test-zfs-arc-max
 test-zfs-arc-max:
./zfs-arc-max.pl
 
+.PHONY: test-run-command
 test-run-command:
./run-command.pl
 
@@ -19,3 +20,7 @@ test-parse-fqdn:
 .PHONY: test-ui2-stdio
 test-ui2-stdio:
./ui2-stdio.pl
+
+.PHONY: test-zfs-get-pool-list
+test-zfs-get-pool-list:
+   ./zfs-get-pool-list.pl
diff --git a/test/zfs-get-pool-list.pl b/test/zfs-get-pool-list.pl
new file mode 100755
index 000..34e6b20
--- /dev/null
+++ b/test/zfs-get-pool-list.pl
@@ -0,0 +1,57 @@
+#!/usr/bin/perl
+
+use strict;
+use warnings;
+
+use File::Temp;
+use Test::More tests => 8;
+
+use Proxmox::Sys::ZFS;
+use Proxmox::UI;
+
+my $log_file = File::Temp->new();
+Proxmox::Log::init($log_file->filename);
+
+Proxmox::UI::init_stdio();
+
+our $ZPOOL_IMPORT_TEST_OUTPUT =  { id => '4958685680270539150', state => 'ONLINE' },
+rpool => { id => '9412322616744093413', state => 'FAULTED' },
+};
+
+open(my $fh, '<', \$ZPOOL_IMPORT_TEST_OUTPUT);
+my $result = Proxmox::Sys::ZFS::zpool_import_parse_output($fh);
+while (my ($name, $info) = each %$pools) {
+my ($p) = grep { $_->{name} eq $name } @$result;
+ok(defined($p), "pool $name was found");
+is($p->{id}, $info->{id}, "pool $name has correct id");
+is($p->{state}, $info->{state}, "pool $name has correct state");
+like($p->{action}, qr/^The pool can be imported using its name or numeric 
identifier/,
+   "pool $name can be imported");
+}
-- 
2.45.1






[pve-devel] [PATCH installer v2 1/3] proxmox: add zfs module for retrieving importable zpool info

2024-07-11 Thread Christoph Heiss
Signed-off-by: Christoph Heiss 
---
Changes v1 -> v2:
  * incorporated Aaron's suggestion to use anonymous arrays instead
  * added documentation
  * renamed parsing function parse_pool_list -> zpool_import_parse_output
  * split out tests into separate patch

 Proxmox/Makefile   |  1 +
 Proxmox/Sys/ZFS.pm | 93 ++
 2 files changed, 94 insertions(+)
 create mode 100644 Proxmox/Sys/ZFS.pm

diff --git a/Proxmox/Makefile b/Proxmox/Makefile
index 9561d9b..035626b 100644
--- a/Proxmox/Makefile
+++ b/Proxmox/Makefile
@@ -17,6 +17,7 @@ PERL_MODULES=\
 Sys/File.pm \
 Sys/Net.pm \
 Sys/Udev.pm \
+Sys/ZFS.pm \
 UI.pm \
 UI/Base.pm \
 UI/Gtk3.pm \
diff --git a/Proxmox/Sys/ZFS.pm b/Proxmox/Sys/ZFS.pm
new file mode 100644
index 000..9232d1a
--- /dev/null
+++ b/Proxmox/Sys/ZFS.pm
@@ -0,0 +1,93 @@
+package Proxmox::Sys::ZFS;
+
+use strict;
+use warnings;
+
+use Proxmox::Sys::Command qw(run_command);
+
+use base qw(Exporter);
+our @EXPORT_OK = qw(get_exported_pools);
+
+# Parses the output of running `zpool import`, which shows all importable
+# ZFS pools.
+# Unfortunately, `zpool` does not support JSON or any other machine-readable
+# format for output, so we have to do it the hard way.
+#
+# Example output of `zpool import` looks like this:
+#
+#   root@proxmox:/# zpool import
+#  pool: testpool
+#id: 4958685680270539150
+# state: ONLINE
+#action: The pool can be imported using its name or numeric identifier.
+#config:
+#
+#   testpoolONLINE
+# vdc   ONLINE
+# vdd   ONLINE
+#
+#  pool: rpool
+#id: 9412322616744093413
+# state: ONLINE
+#   status: The pool was last accessed by another system.
+#action: The pool can be imported using its name or numeric identifier and
+#   the '-f' flag.
+#  see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-EY
+#config:
+#
+#   rpool   ONLINE
+# mirror-0  ONLINE
+#   vda3ONLINE
+#   vdb3ONLINE
+#
+sub zpool_import_parse_output {
+my ($fh) = @_;
+
+my $pools = []; # all found pools
+my $pool = {}; # last found pool in output
+
+while (my $line = <$fh>) {
+   # first, if we find the start of a new pool, add it to the list with
+   # its name
+   if ($line =~ /^\s+pool: (.+)$/) {
+   # push the previous parsed pool to the result list
+   push @$pools, $pool if %$pool;
+   $pool = { name => $1 };
+   next;
+   }
+
+   # ignore any (garbage) output before the actual list, just in case
+   next if !%$pool;
+
+   # add any possibly-useful attribute to the last (aka. current) pool
+   if ($line =~ /^\s*(id|state|status|action): (.+)$/) {
+   chomp($pool->{$1} = $2);
+   next;
+   }
+}
+
+# add the final parsed pool to the list
+push @$pools, $pool if %$pool;
+return $pools;
+}
+
+# Returns an array of all importable ZFS pools on the system.
+# Each entry is a hash of the format:
+#
+# {
+# name => '',
+# id => ,
+# /* see also zpool_state_to_name() in lib/libzfs/libzfs_pool.c */
+# state => 'OFFLINE' | 'REMOVED' | 'FAULTED' | 'SPLIT' | 'UNAVAIL' \
+#  | 'FAULTED' | 'DEGRADED' | 'ONLINE',
+# status => '', optional
+# action => '', optional
+# }
+sub get_exported_pools {
+my $raw = run_command(['zpool', 'import']);
+open (my $fh, '<', \$raw) or die 'failed to open in-memory stream';
+
+return zpool_import_parse_output($fh);
+}
+
+1;
-- 
2.45.1






[pve-devel] [PATCH installer v2 3/3] low-level: install: check for already-existing `rpool` on install

2024-07-11 Thread Christoph Heiss
.. much in the same manner as the detection for LVM works.

zpools can only be renamed by importing them with a new name, so
unfortunately the import-export dance is needed.

Signed-off-by: Christoph Heiss 
---
Changes v1 -> v2:
  * use unique pool id for renaming instead of generating random id
ourselves
  * moved renaming functionality to own subroutine in new
Proxmox::Sys::ZFS module

 Proxmox/Install.pm | 30 ++
 Proxmox/Sys/ZFS.pm | 20 ++--
 2 files changed, 48 insertions(+), 2 deletions(-)

diff --git a/Proxmox/Install.pm b/Proxmox/Install.pm
index c0f8955..2517c24 100644
--- a/Proxmox/Install.pm
+++ b/Proxmox/Install.pm
@@ -16,6 +16,7 @@ use Proxmox::Install::StorageConfig;
 use Proxmox::Sys::Block qw(get_cached_disks wipe_disk partition_bootable_disk);
 use Proxmox::Sys::Command qw(run_command syscmd);
 use Proxmox::Sys::File qw(file_read_firstline file_write_all);
+use Proxmox::Sys::ZFS;
 use Proxmox::UI;
 
 # TODO: move somewhere better?
@@ -169,9 +170,38 @@ sub btrfs_create {
 syscmd($cmd);
 }
 
+sub zfs_ask_existing_zpool_rename {
+my ($pool_name) = @_;
+
+# At this point, no pools should be imported/active
+my $exported_pools = Proxmox::Sys::ZFS::get_exported_pools();
+
+foreach (@$exported_pools) {
+   next if $_->{name} ne $pool_name || $_->{state} ne 'ONLINE';
+   my $renamed_pool = "$_->{name}-OLD-$_->{id}";
+
+   my $response_ok = Proxmox::UI::prompt(
+   "A ZFS pool named '$_->{name}' (id $_->{id}) already exists on the 
system.\n\n" .
+   "Do you want to rename the pool to '$renamed_pool' before 
continuing " .
+   "or cancel the installation?"
+   );
+
+   # Import the pool using its id, as that is unique and works even if 
there are
+   # multiple zpools with the same name.
+   if ($response_ok) {
+   Proxmox::Sys::ZFS::rename_pool($_->{id}, $renamed_pool);
+   } else {
+   warn "Canceled installation as requested by user, due to already 
existing ZFS pool '$pool_name'\n";
+   die "\n"; # causes abort without re-showing an error dialogue
+   }
+}
+}
+
 sub zfs_create_rpool {
 my ($vdev, $pool_name, $root_volume_name) = @_;
 
+zfs_ask_existing_zpool_rename($pool_name);
+
 my $iso_env = Proxmox::Install::ISOEnv::get();
 
 my $zfs_opts = Proxmox::Install::Config::get_zfs_opt();
diff --git a/Proxmox/Sys/ZFS.pm b/Proxmox/Sys/ZFS.pm
index 9232d1a..c5a807e 100644
--- a/Proxmox/Sys/ZFS.pm
+++ b/Proxmox/Sys/ZFS.pm
@@ -3,10 +3,10 @@ package Proxmox::Sys::ZFS;
 use strict;
 use warnings;
 
-use Proxmox::Sys::Command qw(run_command);
+use Proxmox::Sys::Command qw(run_command syscmd);
 
 use base qw(Exporter);
-our @EXPORT_OK = qw(get_exported_pools);
+our @EXPORT_OK = qw(get_exported_pools rename_pool);
 
 # Parses the output of running `zpool import`, which shows all importable
 # ZFS pools.
@@ -90,4 +90,20 @@ sub get_exported_pools {
 return zpool_import_parse_output($fh);
 }
 
+# Renames an importable ZFS pool by importing it with a new name and then
+# exporting again.
+#
+# Arguments:
+#
+# $poolid - Unique, numeric identifier of the pool to rename
+# $new_name - New name for the pool
+sub rename_pool {
+my ($poolid, $new_name) = @_;
+
+syscmd("zpool import -f $poolid $new_name") == 0 ||
+   die "failed to import zfs pool with id '$poolid' with new name 
'$new_name'\n";
+syscmd("zpool export $new_name") == 0 ||
+   warn "failed to export renamed zfs pool '$new_name'\n";
+}
+
 1;
-- 
2.45.1






[pve-devel] applied: [PATCH v3 pve-docs] vzdump: add section describing PBS change detection mode

2024-07-11 Thread Aaron Lauterer




On  2024-07-11  11:37, Christian Ebner wrote:

Add a concise section about what PBS change detection mode is, where
to configure the option and what it affects, including a table with a
description of the modes.

Signed-off-by: Christian Ebner 
---
changes since version 2 (thanks @Aaron for the suggestions!):
- Incorporated suggested paragraph describing how to set the options
- Added missing `the` and comma
- Moved sentence of VMs and unaffected storage target to the end and
   show it as note

  vzdump.adoc | 32 
  1 file changed, 32 insertions(+)




applied with a small restructuring, thanks!





[pve-devel] [PATCH docs] vzdump: fleecing: rephrase manual vzdump example

2024-07-11 Thread Aaron Lauterer
Signed-off-by: Aaron Lauterer 
---
@fiona please let me know if this works and doesn't introduce wrong
info.
I think this way the sentence better represents what is happening, as
the command is not enabling fleecing, but running that particular backup with
fleecing.

I am also thinking of placing the actual command of this and the now
applied change detection mode into a code block instead of having them
inline.

 vzdump.adoc | 10 +-
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/vzdump.adoc b/vzdump.adoc
index 415742e..10dff9e 100644
--- a/vzdump.adoc
+++ b/vzdump.adoc
@@ -149,11 +149,11 @@ With backup fleecing, such old data is cached in a 
fleecing image rather than
 sent directly to the backup target. This can help guest IO performance and even
 prevent hangs in certain scenarios, at the cost of requiring more storage 
space.
 
-Use e.g. `vzdump 123 --fleecing enabled=1,storage=local-lvm` to enable backup
-fleecing, with fleecing images created on the storage `local-lvm`. As always,
-you can set the option for specific backup jobs, or as a node-wide fallback via
-the xref:vzdump_configuration[configuration options]. In the UI, fleecing can 
be
-configured in the 'Advanced' tab when editing a backup job.
+To manually run a backup of guest `123` with fleecing images created on the
+storage `local-lvm`, run `vzdump 123 --fleecing enabled=1,storage=local-lvm`. 
As
+always, you can set the option for specific backup jobs, or as a node-wide
+fallback via the xref:vzdump_configuration[configuration options]. In the UI,
+ fleecing can be configured in the 'Advanced' tab when editing a backup job.
 
 The fleecing storage should be a fast local storage, with thin provisioning and
 discard support. Examples are LVM-thin, RBD, ZFS with `sparse 1` in the storage
-- 
2.39.2






Re: [pve-devel] [PATCH docs] vzdump: fleecing: rephrase manual vzdump example

2024-07-11 Thread Fiona Ebner
On 11.07.24 at 13:11, Aaron Lauterer wrote:
> Signed-off-by: Aaron Lauterer 
> ---
> @fiona please let me know if this works and doesn't introduce wrong
> infos.
> I think this way the sentence represents better what is happening, as
> it is not enabling it, but running that particular backup with fleecing.
> 
> I am also thinking of placing the actual command of this and the now
> applied change detection mode into a code block instead of having them
> inline.
> 

Fine by me :)

>  vzdump.adoc | 10 +-
>  1 file changed, 5 insertions(+), 5 deletions(-)
> 
> diff --git a/vzdump.adoc b/vzdump.adoc
> index 415742e..10dff9e 100644
> --- a/vzdump.adoc
> +++ b/vzdump.adoc
> @@ -149,11 +149,11 @@ With backup fleecing, such old data is cached in a 
> fleecing image rather than
>  sent directly to the backup target. This can help guest IO performance and 
> even
>  prevent hangs in certain scenarios, at the cost of requiring more storage 
> space.
>  
> -Use e.g. `vzdump 123 --fleecing enabled=1,storage=local-lvm` to enable backup
> -fleecing, with fleecing images created on the storage `local-lvm`. As always,
> -you can set the option for specific backup jobs, or as a node-wide fallback 
> via
> -the xref:vzdump_configuration[configuration options]. In the UI, fleecing 
> can be
> -configured in the 'Advanced' tab when editing a backup job.
> +To manually run a backup of guest `123` with fleecing images created on the

Fleecing is only supported for VMs, so I don't like the use of 'guest'
here, let's use 'VM' instead.

> +storage `local-lvm`, run `vzdump 123 --fleecing 
> enabled=1,storage=local-lvm`. As

The duplicate 'run' in the sentence can be fine, but maybe having the
first 'run' be 'start' instead reads slightly nicer?

> +always, you can set the option for specific backup jobs, or as a node-wide
> +fallback via the xref:vzdump_configuration[configuration options]. In the UI,
> + fleecing can be configured in the 'Advanced' tab when editing a backup job.
>  
>  The fleecing storage should be a fast local storage, with thin provisioning 
> and
>  discard support. Examples are LVM-thin, RBD, ZFS with `sparse 1` in the 
> storage





Re: [pve-devel] [PATCH docs] vzdump: fleecing: rephrase manual vzdump example

2024-07-11 Thread Aaron Lauterer




On  2024-07-11  13:23, Fiona Ebner wrote:

On 11.07.24 at 13:11, Aaron Lauterer wrote:

Signed-off-by: Aaron Lauterer 
---
@fiona please let me know if this works and doesn't introduce wrong
infos.
I think this way the sentence represents better what is happening, as
it is not enabling it, but running that particular backup with fleecing.

I am also thinking of placing the actual command of this and the now
applied change detection mode into a code block instead of having them
inline.



Fine by me :)


  vzdump.adoc | 10 +-
  1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/vzdump.adoc b/vzdump.adoc
index 415742e..10dff9e 100644
--- a/vzdump.adoc
+++ b/vzdump.adoc
@@ -149,11 +149,11 @@ With backup fleecing, such old data is cached in a 
fleecing image rather than
  sent directly to the backup target. This can help guest IO performance and 
even
  prevent hangs in certain scenarios, at the cost of requiring more storage 
space.
  
-Use e.g. `vzdump 123 --fleecing enabled=1,storage=local-lvm` to enable backup

-fleecing, with fleecing images created on the storage `local-lvm`. As always,
-you can set the option for specific backup jobs, or as a node-wide fallback via
-the xref:vzdump_configuration[configuration options]. In the UI, fleecing can 
be
-configured in the 'Advanced' tab when editing a backup job.
+To manually run a backup of guest `123` with fleecing images created on the


Fleecing is only supported for VMs, so I don't like the use of 'guest'
here, let's use 'VM' instead.


good point, will change it to VM



+storage `local-lvm`, run `vzdump 123 --fleecing enabled=1,storage=local-lvm`. 
As


The duplicate 'run' in the sentence can be fine, but maybe having the
first 'run' be 'start' instead reads slightly nicer?


good catch :)




+always, you can set the option for specific backup jobs, or as a node-wide
+fallback via the xref:vzdump_configuration[configuration options]. In the UI,
+ fleecing can be configured in the 'Advanced' tab when editing a backup job.
  
  The fleecing storage should be a fast local storage, with thin provisioning and

  discard support. Examples are LVM-thin, RBD, ZFS with `sparse 1` in the 
storage






Re: [pve-devel] [PATCH installer v2 0/3] add check/rename for already-existing ZFS rpool

2024-07-11 Thread Aaron Lauterer

gave it a test on a VM with the GUI installer.

consider this series:

Tested-By: Aaron Lauterer 
Reviewed-By: Aaron Lauterer 

On  2024-07-11  11:56, Christoph Heiss wrote:

Pretty straight forward overall, implements a check for an existing
`rpool` on the system and ask the user whether they would like to rename
it, much in the same way as it works for VGs already.

Without this, the installer would silently create a second (and thus
conflicting) `rpool` and cause a boot failure after the installation,
since it does not know which pool to import exactly.

v1: https://lists.proxmox.com/pipermail/pve-devel/2024-May/063874.html

Notable changes v1 -> v2:
   * incorporated Aaron suggestions from v1
   * rewrote tests to use a pre-defined input instead
   * moved pool renaming to own subroutine
   * documented all new subroutines
   * split out tests into own patch

Christoph Heiss (3):
   proxmox: add zfs module for retrieving importable zpool info
   test: add test cases for new zfs module
   low-level: install: check for already-existing `rpool` on install

  Proxmox/Install.pm|  30 +++
  Proxmox/Makefile  |   1 +
  Proxmox/Sys/ZFS.pm| 109 ++
  test/Makefile |   7 ++-
  test/zfs-get-pool-list.pl |  57 
  5 files changed, 203 insertions(+), 1 deletion(-)
  create mode 100644 Proxmox/Sys/ZFS.pm
  create mode 100755 test/zfs-get-pool-list.pl







[pve-devel] applied: [PATCH docs v2] vzdump: fleecing: rephrase manual vzdump example

2024-07-11 Thread Aaron Lauterer
and move the example command from inline to a block

Signed-off-by: Aaron Lauterer 
---
 vzdump.adoc | 13 -
 1 file changed, 8 insertions(+), 5 deletions(-)

diff --git a/vzdump.adoc b/vzdump.adoc
index 415742e..733704b 100644
--- a/vzdump.adoc
+++ b/vzdump.adoc
@@ -149,11 +149,14 @@ With backup fleecing, such old data is cached in a 
fleecing image rather than
 sent directly to the backup target. This can help guest IO performance and even
 prevent hangs in certain scenarios, at the cost of requiring more storage 
space.
 
-Use e.g. `vzdump 123 --fleecing enabled=1,storage=local-lvm` to enable backup
-fleecing, with fleecing images created on the storage `local-lvm`. As always,
-you can set the option for specific backup jobs, or as a node-wide fallback via
-the xref:vzdump_configuration[configuration options]. In the UI, fleecing can 
be
-configured in the 'Advanced' tab when editing a backup job.
+To manually start a backup of VM `123` with fleecing images created on the
+storage `local-lvm`, run
+
+ vzdump 123 --fleecing enabled=1,storage=local-lvm
+
+As always, you can set the option for specific backup jobs, or as a node-wide
+fallback via the xref:vzdump_configuration[configuration options]. In the UI,
+fleecing can be configured in the 'Advanced' tab when editing a backup job.
 
 The fleecing storage should be a fast local storage, with thin provisioning and
 discard support. Examples are LVM-thin, RBD, ZFS with `sparse 1` in the storage
-- 
2.39.2






[pve-devel] applied: [PATCH docs] vzdump: change mode detection: move example cmd to block

2024-07-11 Thread Aaron Lauterer
Signed-off-by: Aaron Lauterer 
---
 vzdump.adoc | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/vzdump.adoc b/vzdump.adoc
index 733704b..7625d08 100644
--- a/vzdump.adoc
+++ b/vzdump.adoc
@@ -205,7 +205,8 @@ contents from disk, whenever possible.
 
|===
 
 To perform a backup using the change detecation mode `metadata` you can run
-`vzdump 123 --storage pbs-storage --pbs-change-detection-mode metadata`.
+
+ vzdump 123 --storage pbs-storage --pbs-change-detection-mode metadata
 
 NOTE: Backups of VMs or to storage backends other than Proxmox Backup Server 
are
 not affected by this setting.
-- 
2.39.2






[pve-devel] [PATCH access-control] api: ACL update: fix handling of Permissions.Modify

2024-07-11 Thread Fabian Grünbichler
with 8.x, the scope of non-"Permissions.Modify"-based ACL update privileges
was reduced (so that users with, for example, VM.Allocate on a VM could only
delegate their own privileges, but not arbitrary other ones). that additional
logic had a wrong guard and was accidentally triggered for calls where the user
had the "Permissions.Modify" privilege on the modified ACL path, but without
propagation set.

a user with "Permissions.Modify" on a path should be able to set arbitrary
ACLs for that path, even without propagation.

reported on the forum:

https://forum.proxmox.com/threads/privilege-permissions-modify-on-pool-will-not-propagade-to-contained-vms-anymore.151032/

Fixes: 46bfd59dfca655b263d1f905be37d985416717ac ("acls: restrict 
less-privileged ACL modifications")

Signed-off-by: Fabian Grünbichler 
---
 src/PVE/API2/ACL.pm | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/src/PVE/API2/ACL.pm b/src/PVE/API2/ACL.pm
index 93adb78..2a4d4ff 100644
--- a/src/PVE/API2/ACL.pm
+++ b/src/PVE/API2/ACL.pm
@@ -166,7 +166,8 @@ __PACKAGE__->register_method ({
die "role '$role' does not exist\n"
if !$cfg->{roles}->{$role};
 
-   if (!$auth_user_privs->{'Permissions.Modify'}) {
+   # permissions() returns set privs as key, and propagate bit 
as value!
+   if (!defined($auth_user_privs->{'Permissions.Modify'})) {
# 'perm-modify' allows /vms/* with VM.Allocate and 
similar restricted use cases
# filter those to only allow handing out a subset of 
currently active privs
my $role_privs = $cfg->{roles}->{$role};
-- 
2.39.2





[pve-devel] [PATCH installer v3 1/4] proxmox: add zfs module for retrieving importable zpool info

2024-07-11 Thread Christoph Heiss
Signed-off-by: Christoph Heiss 
---
Changes v2 -> v3:
  * no changes

Changes v1 -> v2:
  * incorporated Aaron's suggestion to use anonymous arrays instead
  * added documentation
  * renamed parsing function parse_pool_list -> zpool_import_parse_output
  * split out tests into separate patch

 Proxmox/Makefile   |  1 +
 Proxmox/Sys/ZFS.pm | 93 ++
 2 files changed, 94 insertions(+)
 create mode 100644 Proxmox/Sys/ZFS.pm

diff --git a/Proxmox/Makefile b/Proxmox/Makefile
index 9561d9b..035626b 100644
--- a/Proxmox/Makefile
+++ b/Proxmox/Makefile
@@ -17,6 +17,7 @@ PERL_MODULES=\
 Sys/File.pm \
 Sys/Net.pm \
 Sys/Udev.pm \
+Sys/ZFS.pm \
 UI.pm \
 UI/Base.pm \
 UI/Gtk3.pm \
diff --git a/Proxmox/Sys/ZFS.pm b/Proxmox/Sys/ZFS.pm
new file mode 100644
index 000..9232d1a
--- /dev/null
+++ b/Proxmox/Sys/ZFS.pm
@@ -0,0 +1,93 @@
+package Proxmox::Sys::ZFS;
+
+use strict;
+use warnings;
+
+use Proxmox::Sys::Command qw(run_command);
+
+use base qw(Exporter);
+our @EXPORT_OK = qw(get_exported_pools);
+
+# Parses the output of running `zpool import`, which shows all importable
+# ZFS pools.
+# Unfortunately, `zpool` does not support JSON or any other machine-readable
+# format for output, so we have to do it the hard way.
+#
+# Example output of `zpool import` looks like this:
+#
+#   root@proxmox:/# zpool import
+#  pool: testpool
+#id: 4958685680270539150
+# state: ONLINE
+#action: The pool can be imported using its name or numeric identifier.
+#config:
+#
+#   testpoolONLINE
+# vdc   ONLINE
+# vdd   ONLINE
+#
+#  pool: rpool
+#id: 9412322616744093413
+# state: ONLINE
+#   status: The pool was last accessed by another system.
+#action: The pool can be imported using its name or numeric identifier and
+#   the '-f' flag.
+#  see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-EY
+#config:
+#
+#   rpool   ONLINE
+# mirror-0  ONLINE
+#   vda3ONLINE
+#   vdb3ONLINE
+#
+sub zpool_import_parse_output {
+my ($fh) = @_;
+
+my $pools = []; # all found pools
+my $pool = {}; # last found pool in output
+
+while (my $line = <$fh>) {
+   # first, if we find the start of a new pool, add it to the list with
+   # its name
+   if ($line =~ /^\s+pool: (.+)$/) {
+   # push the previous parsed pool to the result list
+   push @$pools, $pool if %$pool;
+   $pool = { name => $1 };
+   next;
+   }
+
+   # ignore any (garbage) output before the actual list, just in case
+   next if !%$pool;
+
+   # add any possibly-useful attribute to the last (aka. current) pool
+   if ($line =~ /^\s*(id|state|status|action): (.+)$/) {
+   chomp($pool->{$1} = $2);
+   next;
+   }
+}
+
+# add the final parsed pool to the list
+push @$pools, $pool if %$pool;
+return $pools;
+}
+
+# Returns an array of all importable ZFS pools on the system.
+# Each entry is a hash of the format:
+#
+# {
+# name => '',
+# id => ,
+# /* see also zpool_state_to_name() in lib/libzfs/libzfs_pool.c */
+# state => 'OFFLINE' | 'REMOVED' | 'FAULTED' | 'SPLIT' | 'UNAVAIL' \
+#  | 'FAULTED' | 'DEGRADED' | 'ONLINE',
+# status => '', optional
+# action => '', optional
+# }
+sub get_exported_pools {
+my $raw = run_command(['zpool', 'import']);
+open (my $fh, '<', \$raw) or die 'failed to open in-memory stream';
+
+return zpool_import_parse_output($fh);
+}
+
+1;
-- 
2.45.1






[pve-devel] [PATCH installer v3 2/4] test: add test cases for new zfs module

2024-07-11 Thread Christoph Heiss
Signed-off-by: Christoph Heiss 
---
Changes v2 -> v3:
  * no changes

Changes v1 -> v2:
  * new patch, split out from patch #1
  * rewrote tests to use a pre-defined input instead, thus being able
to enable the tests unconditionally

 test/Makefile |  7 -
 test/zfs-get-pool-list.pl | 57 +++
 2 files changed, 63 insertions(+), 1 deletion(-)
 create mode 100755 test/zfs-get-pool-list.pl

diff --git a/test/Makefile b/test/Makefile
index 99bf14e..c473af8 100644
--- a/test/Makefile
+++ b/test/Makefile
@@ -3,12 +3,13 @@ all:
 export PERLLIB=..
 
 .PHONY: check
-check: test-zfs-arc-max test-run-command test-parse-fqdn test-ui2-stdio
+check: test-zfs-arc-max test-run-command test-parse-fqdn test-ui2-stdio 
test-zfs-get-pool-list
 
 .PHONY: test-zfs-arc-max
 test-zfs-arc-max:
./zfs-arc-max.pl
 
+.PHONY: test-run-command
 test-run-command:
./run-command.pl
 
@@ -19,3 +20,7 @@ test-parse-fqdn:
 .PHONY: test-ui2-stdio
 test-ui2-stdio:
./ui2-stdio.pl
+
+.PHONY: test-zfs-get-pool-list
+test-zfs-get-pool-list:
+   ./zfs-get-pool-list.pl
diff --git a/test/zfs-get-pool-list.pl b/test/zfs-get-pool-list.pl
new file mode 100755
index 000..34e6b20
--- /dev/null
+++ b/test/zfs-get-pool-list.pl
@@ -0,0 +1,57 @@
+#!/usr/bin/perl
+
+use strict;
+use warnings;
+
+use File::Temp;
+use Test::More tests => 8;
+
+use Proxmox::Sys::ZFS;
+use Proxmox::UI;
+
+my $log_file = File::Temp->new();
+Proxmox::Log::init($log_file->filename);
+
+Proxmox::UI::init_stdio();
+
+our $ZPOOL_IMPORT_TEST_OUTPUT =  { id => '4958685680270539150', state => 'ONLINE' },
+rpool => { id => '9412322616744093413', state => 'FAULTED' },
+};
+
+open(my $fh, '<', \$ZPOOL_IMPORT_TEST_OUTPUT);
+my $result = Proxmox::Sys::ZFS::zpool_import_parse_output($fh);
+while (my ($name, $info) = each %$pools) {
+my ($p) = grep { $_->{name} eq $name } @$result;
+ok(defined($p), "pool $name was found");
+is($p->{id}, $info->{id}, "pool $name has correct id");
+is($p->{state}, $info->{state}, "pool $name has correct state");
+like($p->{action}, qr/^The pool can be imported using its name or numeric 
identifier/,
+   "pool $name can be imported");
+}
-- 
2.45.1






[pve-devel] [PATCH installer v3 4/4] low-level: install: check for already-existing `rpool` on install

2024-07-11 Thread Christoph Heiss
.. much in the same manner as the detection for LVM works.

zpools can only be renamed by importing them with a new name, so
unfortunately the import-export dance is needed.

Signed-off-by: Christoph Heiss 
---
Changes v2 -> v3:
  * skip rename prompt in auto-install mode

Changes v1 -> v2:
  * use unique pool id for renaming instead of generating random id
ourselves
  * moved renaming functionality to own subroutine in new
Proxmox::Sys::ZFS module

 Proxmox/Install.pm | 33 +
 Proxmox/Sys/ZFS.pm | 20 ++--
 2 files changed, 51 insertions(+), 2 deletions(-)

diff --git a/Proxmox/Install.pm b/Proxmox/Install.pm
index 7342e4b..ba699fa 100644
--- a/Proxmox/Install.pm
+++ b/Proxmox/Install.pm
@@ -16,6 +16,7 @@ use Proxmox::Install::StorageConfig;
 use Proxmox::Sys::Block qw(get_cached_disks wipe_disk partition_bootable_disk);
 use Proxmox::Sys::Command qw(run_command syscmd);
 use Proxmox::Sys::File qw(file_read_firstline file_write_all);
+use Proxmox::Sys::ZFS;
 use Proxmox::UI;
 
 # TODO: move somewhere better?
@@ -169,9 +170,41 @@ sub btrfs_create {
 syscmd($cmd);
 }
 
+sub zfs_ask_existing_zpool_rename {
+my ($pool_name) = @_;
+
+# At this point, no pools should be imported/active
+my $exported_pools = Proxmox::Sys::ZFS::get_exported_pools();
+
+foreach (@$exported_pools) {
+   next if $_->{name} ne $pool_name || $_->{state} ne 'ONLINE';
+   my $renamed_pool = "$_->{name}-OLD-$_->{id}";
+
+   my $response_ok = 
Proxmox::Install::Config::get_existing_storage_auto_rename();
+   if (!$response_ok) {
+   $response_ok = Proxmox::UI::prompt(
+   "A ZFS pool named '$_->{name}' (id $_->{id}) already exists on 
the system.\n\n" .
+   "Do you want to rename the pool to '$renamed_pool' before 
continuing " .
+   "or cancel the installation?"
+   );
+   }
+
+   # Import the pool using its id, as that is unique and works even if 
there are
+   # multiple zpools with the same name.
+   if ($response_ok) {
+   Proxmox::Sys::ZFS::rename_pool($_->{id}, $renamed_pool);
+   } else {
+   warn "Canceled installation as requested by user, due to already 
existing ZFS pool '$pool_name'\n";
+   die "\n"; # causes abort without re-showing an error dialogue
+   }
+}
+}
+
 sub zfs_create_rpool {
 my ($vdev, $pool_name, $root_volume_name) = @_;
 
+zfs_ask_existing_zpool_rename($pool_name);
+
 my $iso_env = Proxmox::Install::ISOEnv::get();
 
 my $zfs_opts = Proxmox::Install::Config::get_zfs_opt();
diff --git a/Proxmox/Sys/ZFS.pm b/Proxmox/Sys/ZFS.pm
index 9232d1a..c5a807e 100644
--- a/Proxmox/Sys/ZFS.pm
+++ b/Proxmox/Sys/ZFS.pm
@@ -3,10 +3,10 @@ package Proxmox::Sys::ZFS;
 use strict;
 use warnings;
 
-use Proxmox::Sys::Command qw(run_command);
+use Proxmox::Sys::Command qw(run_command syscmd);
 
 use base qw(Exporter);
-our @EXPORT_OK = qw(get_exported_pools);
+our @EXPORT_OK = qw(get_exported_pools rename_pool);
 
 # Parses the output of running `zpool import`, which shows all importable
 # ZFS pools.
@@ -90,4 +90,20 @@ sub get_exported_pools {
 return zpool_import_parse_output($fh);
 }
 
+# Renames an importable ZFS pool by importing it with a new name and then
+# exporting again.
+#
+# Arguments:
+#
+# $poolid - Unique, numeric identifier of the pool to rename
+# $new_name - New name for the pool
+sub rename_pool {
+my ($poolid, $new_name) = @_;
+
+syscmd("zpool import -f $poolid $new_name") == 0 ||
+   die "failed to import zfs pool with id '$poolid' with new name 
'$new_name'\n";
+syscmd("zpool export $new_name") == 0 ||
+   warn "failed to export renamed zfs pool '$new_name'\n";
+}
+
 1;
-- 
2.45.1






[pve-devel] [PATCH installer v3 3/4] install: config: rename option lvm_auto_rename -> existing_storage_auto_rename

2024-07-11 Thread Christoph Heiss
As this is an internal option for the low-level installer anyway, no
real functional changes here.

Signed-off-by: Christoph Heiss 
---
Changes v2 -> v3:
  * new patch

 Proxmox/Install.pm  | 2 +-
 Proxmox/Install/Config.pm   | 6 +++---
 proxmox-auto-installer/src/utils.rs | 2 +-
 .../tests/resources/parse_answer/disk_match.json| 2 +-
 .../tests/resources/parse_answer/disk_match_all.json| 2 +-
 .../tests/resources/parse_answer/disk_match_any.json| 2 +-
 .../tests/resources/parse_answer/minimal.json   | 2 +-
 .../tests/resources/parse_answer/nic_matching.json  | 2 +-
 .../tests/resources/parse_answer/specific_nic.json  | 2 +-
 .../tests/resources/parse_answer/zfs.json   | 2 +-
 proxmox-installer-common/src/setup.rs   | 2 +-
 proxmox-tui-installer/src/setup.rs  | 2 +-
 12 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/Proxmox/Install.pm b/Proxmox/Install.pm
index c0f8955..7342e4b 100644
--- a/Proxmox/Install.pm
+++ b/Proxmox/Install.pm
@@ -384,7 +384,7 @@ sub ask_existing_vg_rename_or_abort {
$vg->{new_vgname} = "$vgname-OLD-$short_uid";
 }
 
-my $response_ok = Proxmox::Install::Config::get_lvm_auto_rename();
+my $response_ok = 
Proxmox::Install::Config::get_existing_storage_auto_rename();
 if (!$response_ok) {
my $message = "Detected existing '$vgname' Volume Group(s)! Do you want 
to:\n";
 
diff --git a/Proxmox/Install/Config.pm b/Proxmox/Install/Config.pm
index ecd8a74..e449039 100644
--- a/Proxmox/Install/Config.pm
+++ b/Proxmox/Install/Config.pm
@@ -82,7 +82,7 @@ my sub init_cfg {
# TODO: single disk selection config
target_hd => undef,
disk_selection => {},
-   lvm_auto_rename => 0,
+   existing_storage_auto_rename => 0,
 
# locale
country => $country,
@@ -244,7 +244,7 @@ sub get_dns { return get('dns'); }
 sub set_target_cmdline { set_key('target_cmdline', $_[0]); }
 sub get_target_cmdline { return get('target_cmdline'); }
 
-sub set_lvm_auto_rename { set_key('lvm_auto_rename', $_[0]); }
-sub get_lvm_auto_rename { return get('lvm_auto_rename'); }
+sub set_existing_storage_auto_rename { set_key('existing_storage_auto_rename', 
$_[0]); }
+sub get_existing_storage_auto_rename { return 
get('existing_storage_auto_rename'); }
 
 1;
diff --git a/proxmox-auto-installer/src/utils.rs 
b/proxmox-auto-installer/src/utils.rs
index 202ad41..cc47f5f 100644
--- a/proxmox-auto-installer/src/utils.rs
+++ b/proxmox-auto-installer/src/utils.rs
@@ -328,7 +328,7 @@ pub fn parse_answer(
 zfs_opts: None,
 target_hd: None,
 disk_selection: BTreeMap::new(),
-lvm_auto_rename: 1,
+existing_storage_auto_rename: 1,
 
 country: answer.global.country.clone(),
 timezone: answer.global.timezone.clone(),
diff --git 
a/proxmox-auto-installer/tests/resources/parse_answer/disk_match.json 
b/proxmox-auto-installer/tests/resources/parse_answer/disk_match.json
index 3a117b6..2618fd4 100644
--- a/proxmox-auto-installer/tests/resources/parse_answer/disk_match.json
+++ b/proxmox-auto-installer/tests/resources/parse_answer/disk_match.json
@@ -10,7 +10,7 @@
"8": "8",
"9": "9"
   },
-  "lvm_auto_rename": 1,
+  "existing_storage_auto_rename": 1,
   "filesys": "zfs (RAID10)",
   "gateway": "192.168.1.1",
   "hdsize": 223.57088470458984,
diff --git 
a/proxmox-auto-installer/tests/resources/parse_answer/disk_match_all.json 
b/proxmox-auto-installer/tests/resources/parse_answer/disk_match_all.json
index 5325fc3..6cfb96a 100644
--- a/proxmox-auto-installer/tests/resources/parse_answer/disk_match_all.json
+++ b/proxmox-auto-installer/tests/resources/parse_answer/disk_match_all.json
@@ -7,7 +7,7 @@
   "disk_selection": {
"9": "9"
   },
-  "lvm_auto_rename": 1,
+  "existing_storage_auto_rename": 1,
   "filesys": "zfs (RAID0)",
   "gateway": "192.168.1.1",
   "hdsize": 223.57088470458984,
diff --git 
a/proxmox-auto-installer/tests/resources/parse_answer/disk_match_any.json 
b/proxmox-auto-installer/tests/resources/parse_answer/disk_match_any.json
index 18e22d1..1921b34 100644
--- a/proxmox-auto-installer/tests/resources/parse_answer/disk_match_any.json
+++ b/proxmox-auto-installer/tests/resources/parse_answer/disk_match_any.json
@@ -14,7 +14,7 @@
"8": "8",
"9": "9"
   },
-  "lvm_auto_rename": 1,
+  "existing_storage_auto_rename": 1,
   "filesys": "zfs (RAID10)",
   "gateway": "192.168.1.1",
   "hdsize": 2980.820640563965,
diff --git a/proxmox-auto-installer/tests/resources/parse_answer/minimal.json 
b/proxmox-auto-installer/tests/resources/parse_answer/minimal.json
index bb72713..38112e4 100644
--- a/proxmox-auto-installer/tests/resources/parse_answer/minimal.json
+++ b/proxmox-auto-installer/tests/resources/parse_answer/minimal.json
@@ -7,7 +7,7 @@
   "filesy

[pve-devel] [PATCH installer v3 0/4] add check/rename for already-existing ZFS rpool

2024-07-11 Thread Christoph Heiss
Pretty straightforward overall, implements a check for an existing
`rpool` on the system and asks the user whether they would like to rename
it, much in the same way as it works for VGs already.

Without this, the installer would silently create a second (and thus
conflicting) `rpool` and cause a boot failure after the installation,
since it does not know which pool to import exactly.

v1: https://lists.proxmox.com/pipermail/pve-devel/2024-May/063874.html
v2: https://lists.proxmox.com/pipermail/pve-devel/2024-July/064619.html

Notable changes v2 -> v3:
  * make low-level option lvm_auto_rename more generic
  * skip rename prompt in auto-install mode

Notable changes v1 -> v2:
  * incorporated Aaron's suggestions from v1
  * rewrote tests to use a pre-defined input instead
  * moved pool renaming to own subroutine
  * documented all new subroutines
  * split out tests into own patch

Christoph Heiss (4):
  test: add test cases for new zfs module
  low-level: install: check for already-existing `rpool` on install
  install: config: rename option lvm_auto_rename ->
existing_storage_auto_rename
  low-level: install: automatically rename existing rpools on
auto-installs

 Proxmox/Install.pm| 35 +++-
 Proxmox/Install/Config.pm |  6 +-
 Proxmox/Sys/ZFS.pm| 20 ++-
 proxmox-auto-installer/src/utils.rs   |  2 +-
 .../resources/parse_answer/disk_match.json|  2 +-
 .../parse_answer/disk_match_all.json  |  2 +-
 .../parse_answer/disk_match_any.json  |  2 +-
 .../tests/resources/parse_answer/minimal.json |  2 +-
 .../resources/parse_answer/nic_matching.json  |  2 +-
 .../resources/parse_answer/specific_nic.json  |  2 +-
 .../tests/resources/parse_answer/zfs.json |  2 +-
 proxmox-installer-common/src/setup.rs |  2 +-
 proxmox-tui-installer/src/setup.rs|  2 +-
 test/Makefile |  7 ++-
 test/zfs-get-pool-list.pl | 57 +++
 15 files changed, 128 insertions(+), 17 deletions(-)
 create mode 100755 test/zfs-get-pool-list.pl

-- 
2.44.0






Re: [pve-devel] [PATCH installer 0/3] add check/rename for already-existing ZFS rpool

2024-07-11 Thread Christoph Heiss
v3 (already) out: 
https://lists.proxmox.com/pipermail/pve-devel/2024-July/064635.html

On Thu, May 16, 2024 at 12:28:32PM GMT, Christoph Heiss wrote:
> Pretty straight forward overall, implements a check for an exising
> `rpool` on the system and ask the user whether they would like to rename
> it, much in the same way as it works for VGs already.
>
> Without this, the installer would silently create a second (and thus
> conflicting) `rpool` and cause a boot failure after the installation,
> since it does not know which pool to import exactly.
>
> Christoph Heiss (3):
>   proxmox: add zfs module for retrieving importable zpool info
>   low-level: install: split out random disk uid generation
>   low-level: install: check for already-existing `rpool` on install
>
>  Proxmox/Install.pm| 47 -
>  Proxmox/Makefile  |  1 +
>  Proxmox/Sys/ZFS.pm| 43 ++
>  test/Makefile |  6 +
>  test/zfs-get-pool-list.pl | 49 +++
>  5 files changed, 140 insertions(+), 6 deletions(-)
>  create mode 100644 Proxmox/Sys/ZFS.pm
>  create mode 100755 test/zfs-get-pool-list.pl
>
> --
> 2.44.0
>
>
>
> ___
> pve-devel mailing list
> pve-devel@lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
>
>





Re: [pve-devel] [PATCH installer v2 0/3] add check/rename for already-existing ZFS rpool

2024-07-11 Thread Christoph Heiss
v3 out: https://lists.proxmox.com/pipermail/pve-devel/2024-July/064635.html

On Thu, Jul 11, 2024 at 11:56:37AM GMT, Christoph Heiss wrote:
> Pretty straight forward overall, implements a check for an existing
> `rpool` on the system and ask the user whether they would like to rename
> it, much in the same way as it works for VGs already.
>
> Without this, the installer would silently create a second (and thus
> conflicting) `rpool` and cause a boot failure after the installation,
> since it does not know which pool to import exactly.
>
> v1: https://lists.proxmox.com/pipermail/pve-devel/2024-May/063874.html
>
> Notable changes v1 -> v2:
>   * incorporated Aaron suggestions from v1
>   * rewrote tests to use a pre-defined input instead
>   * moved pool renaming to own subroutine
>   * documented all new subroutines
>   * split out tests into own patch
>
> Christoph Heiss (3):
>   proxmox: add zfs module for retrieving importable zpool info
>   test: add test cases for new zfs module
>   low-level: install: check for already-existing `rpool` on install
>
>  Proxmox/Install.pm|  30 +++
>  Proxmox/Makefile  |   1 +
>  Proxmox/Sys/ZFS.pm| 109 ++
>  test/Makefile |   7 ++-
>  test/zfs-get-pool-list.pl |  57 
>  5 files changed, 203 insertions(+), 1 deletion(-)
>  create mode 100644 Proxmox/Sys/ZFS.pm
>  create mode 100755 test/zfs-get-pool-list.pl
>
> --
> 2.44.0
>
>
>
> ___
> pve-devel mailing list
> pve-devel@lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
>
>





Re: [pve-devel] [PATCH installer 04/14] common: simplify filesystem type serializing & Display trait impl

2024-07-11 Thread Stefan Hanreich
On 7/10/24 15:27, Christoph Heiss wrote:
> +impl<'de> Deserialize<'de> for FsType {
> +fn deserialize(deserializer: D) -> Result
> +where
> +D: serde::Deserializer<'de>,
> +{
> +let fs: String = Deserialize::deserialize(deserializer)?;
> +
> +match fs.as_str() {
> +"ext4" => Ok(FsType::Ext4),
> +"xfs" => Ok(FsType::Xfs),
> +"zfs (RAID0)" => Ok(FsType::Zfs(ZfsRaidLevel::Raid0)),
> +"zfs (RAID1)" => Ok(FsType::Zfs(ZfsRaidLevel::Raid1)),
> +"zfs (RAID10)" => Ok(FsType::Zfs(ZfsRaidLevel::Raid10)),
> +"zfs (RAIDZ-1)" => Ok(FsType::Zfs(ZfsRaidLevel::RaidZ)),
> +"zfs (RAIDZ-2)" => Ok(FsType::Zfs(ZfsRaidLevel::RaidZ2)),
> +"zfs (RAIDZ-3)" => Ok(FsType::Zfs(ZfsRaidLevel::RaidZ3)),
> +"btrfs (RAID0)" => Ok(FsType::Btrfs(BtrfsRaidLevel::Raid0)),
> +"btrfs (RAID1)" => Ok(FsType::Btrfs(BtrfsRaidLevel::Raid1)),
> +"btrfs (RAID10)" => Ok(FsType::Btrfs(BtrfsRaidLevel::Raid10)),
> +_ => Err(serde::de::Error::custom("could not find file system: 
> {fs}")),
> +}
> +}
> +}

Maybe we could implement FromStr here and use
serde_plain::derive_deserialize_from_fromstr ?

We could even implement FromStr for BtrfsRaidLevel and ZfsRaidLevel and
then use that here, but it might be a bit overkill for just this

> +impl Serialize for FsType {
> +fn serialize(&self, serializer: S) -> Result
> +where
> +S: serde::Serializer,
> +{
> +let value = match self {
> +// proxinstall::$fssetup
> +FsType::Ext4 => "ext4",
> +FsType::Xfs => "xfs",
> +// proxinstall::get_zfs_raid_setup()
> +FsType::Zfs(level) => &format!("zfs ({level})"),
> +// proxinstall::get_btrfs_raid_setup()
> +FsType::Btrfs(level) => &format!("btrfs ({level})"),
> +};
> +
> +serializer.collect_str(value)
> +}
> +}

Same as above but with Display and
serde_plain::derive_display_from_serialize
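For illustration, a rough, self-contained sketch of how the two suggestions above
could fit together -- FsType is cut down to two variants here, and serde,
serde_plain (plus serde_json only for the final check) are assumed as
dependencies; this is not the actual installer code:

use std::str::FromStr;

use serde::{Serialize, Serializer};

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum FsType {
    Ext4,
    Xfs,
}

impl FromStr for FsType {
    type Err = String;

    fn from_str(s: &str) -> Result<Self, Self::Err> {
        match s {
            "ext4" => Ok(FsType::Ext4),
            "xfs" => Ok(FsType::Xfs),
            other => Err(format!("could not find file system: {other}")),
        }
    }
}

// Deserialize becomes "parse the plain string with FromStr"; the second
// argument is the expectation message used in serde errors.
serde_plain::derive_deserialize_from_fromstr!(FsType, "valid filesystem type");

impl Serialize for FsType {
    fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
    where
        S: Serializer,
    {
        serializer.collect_str(match self {
            FsType::Ext4 => "ext4",
            FsType::Xfs => "xfs",
        })
    }
}

// Display is derived from the Serialize impl, so formatting and serialization
// always produce the same string.
serde_plain::derive_display_from_serialize!(FsType);

fn main() {
    let fs: FsType = serde_json::from_str("\"xfs\"").unwrap();
    assert_eq!(fs, FsType::Xfs);
    assert_eq!(fs.to_string(), "xfs");
}

Writing only FromStr and the Serialize impl by hand and deriving the other two
traits keeps parsing, formatting and (de)serialization guaranteed to agree.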





Re: [pve-devel] [PATCH installer 10/14] auto-installer: tests: replace left/right with got/expected in output

2024-07-11 Thread Stefan Hanreich
On 7/10/24 15:27, Christoph Heiss wrote:
> Makes more sense and makes debugging easier.
> 
> Signed-off-by: Christoph Heiss 
> ---
>  proxmox-auto-installer/tests/parse-answer.rs | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/proxmox-auto-installer/tests/parse-answer.rs 
> b/proxmox-auto-installer/tests/parse-answer.rs
> index 450915a..81079b8 100644
> --- a/proxmox-auto-installer/tests/parse-answer.rs
> +++ b/proxmox-auto-installer/tests/parse-answer.rs
> @@ -77,7 +77,7 @@ fn test_parse_answers() {
>  let compare: Value = serde_json::from_str(&compare_raw).unwrap();
>  if config != compare {
>  panic!(
> -"Test {} failed:\nleft:  {:#?}\nright: {:#?}\n",
> +"Test {} failed:\ngot: {:#?}\nexpected: {:#?}\n",
>  name, config, compare
>  );
>  }

maybe use assert_eq!() here altogether?

Also above in the file use assert!(!runtime_info.disks.is_empty()) ?
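A rough sketch of what that could look like, with the surrounding test reduced
to a helper over two serde_json::Value parameters (the helper name is a
placeholder, not the real test code):

use serde_json::Value;

// `config` is the parsed answer, `compare` the expected JSON, as in the
// quoted hunk above.
fn check_test_case(name: &str, config: &Value, compare: &Value) {
    // assert_eq! already prints both sides plus this message on failure,
    // replacing the manual `if config != compare { panic!(...) }` block.
    assert_eq!(config, compare, "Test {} failed", name);
}

The disk check further up could similarly become
assert!(!runtime_info.disks.is_empty(), "runtime info contains no disks").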





Re: [pve-devel] [PATCH installer 06/14] common: split out installer setup files loading functionality

2024-07-11 Thread Stefan Hanreich
On 7/10/24 15:27, Christoph Heiss wrote:
> diff --git a/proxmox-installer-common/src/setup.rs 
> b/proxmox-installer-common/src/setup.rs
> index 9131ac9..29137bf 100644
> --- a/proxmox-installer-common/src/setup.rs
> +++ b/proxmox-installer-common/src/setup.rs
> @@ -163,24 +163,29 @@ pub fn installer_setup(in_test_mode: bool) -> 
> Result<(SetupInfo, LocaleInfo, Run
>  } else {
>  crate::RUNTIME_DIR.to_owned()
>  };
> -let path = PathBuf::from(base_path);
>  
> +load_installer_setup_files(&PathBuf::from(base_path))
> +}
> +
> +pub fn load_installer_setup_files(
> +base_path: &Path,

Could use AsRef here since it's public API. In the test specific
code we could also use it for parameters, but there it's not really
important since it's not public API.
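The generic parameter got eaten by the list archive; given that the caller
builds a PathBuf, AsRef<Path> is presumably what is meant. A minimal signature
sketch (return type and body of the real function omitted, the joined file name
is just a placeholder):

use std::path::Path;

pub fn load_installer_setup_files(base_path: impl AsRef<Path>) {
    let base_path = base_path.as_ref();
    // the real function loads and parses the various installer JSON files
    // from this directory; only the path handling is sketched here
    let _example = base_path.join("some-setup-file.json");
}

Callers can then pass a &str, String, &Path or PathBuf directly, without
wrapping it in PathBuf::from() first.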





Re: [pve-devel] [PATCH installer 12/14] fix #5536: add post-hook utility for sending notifications after auto-install

2024-07-11 Thread Stefan Hanreich



On 7/10/24 15:27, Christoph Heiss wrote:
> +impl Answer {
> +pub fn from_reader(reader: impl BufRead) -> Result {
> +let mut buffer = String::new();
> +let lines = reader.lines();
> +for line in lines {
> +buffer.push_str(&line.unwrap());
> +buffer.push('\n');
> +}
> +
> +toml::from_str(&buffer).map_err(|err| format_err!("Failed parsing 
> answer file: {err}"))
> +}
> +}

maybe better impl TryFrom for Answer ? Or at least call
the method try_from_reader if it returns a result?
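The concrete source type of the TryFrom suggestion was swallowed by the archive
rendering; assuming it was &str, a rough sketch could look like the following
(anyhow/toml usage mirrors the surrounding code, the Answer fields are omitted;
this is not the actual implementation):

use anyhow::format_err;
use serde::Deserialize;

#[derive(Deserialize, Debug)]
pub struct Answer {
    // fields omitted for brevity
}

impl TryFrom<&str> for Answer {
    type Error = anyhow::Error;

    // `Answer::try_from(raw_toml)?` would then replace the from_reader
    // helper; alternatively, as suggested, the existing method could simply
    // be renamed to try_from_reader, since it returns a Result.
    fn try_from(raw: &str) -> Result<Self, Self::Error> {
        toml::from_str(raw).map_err(|err| format_err!("Failed parsing answer file: {err}"))
    }
}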

>  #[derive(Clone, Deserialize, Debug)]
>  #[serde(deny_unknown_fields)]
>  pub struct Global {
> diff --git a/proxmox-auto-installer/src/bin/proxmox-auto-installer.rs 
> b/proxmox-auto-installer/src/bin/proxmox-auto-installer.rs
> index bf6f8fb..aab0f1f 100644
> --- a/proxmox-auto-installer/src/bin/proxmox-auto-installer.rs
> +++ b/proxmox-auto-installer/src/bin/proxmox-auto-installer.rs
> @@ -42,16 +42,7 @@ fn auto_installer_setup(in_test_mode: bool) -> 
> Result<(Answer, UdevInfo)> {
>  .map_err(|err| format_err!("Failed to retrieve udev info 
> details: {err}"))?
>  };
>  
> -let mut buffer = String::new();
> -let lines = std::io::stdin().lock().lines();
> -for line in lines {
> -buffer.push_str(&line.unwrap());
> -buffer.push('\n');
> -}
> -
> -let answer: Answer =
> -toml::from_str(&buffer).map_err(|err| format_err!("Failed parsing 
> answer file: {err}"))?;
> -
> +let answer = Answer::from_reader(std::io::stdin().lock())?;
>  Ok((answer, udev_info))
>  }
>  
> @@ -91,8 +82,6 @@ fn main() -> ExitCode {
>  }
>  }
>  
> -// TODO: (optionally) do a HTTP post with basic system info, like host 
> SSH public key(s) here
> -
>  ExitCode::SUCCESS
>  }
>  
> diff --git a/proxmox-fetch-answer/src/fetch_plugins/http.rs 
> b/proxmox-fetch-answer/src/fetch_plugins/http.rs
> index a6a8de0..4317430 100644
> --- a/proxmox-fetch-answer/src/fetch_plugins/http.rs
> +++ b/proxmox-fetch-answer/src/fetch_plugins/http.rs
> @@ -68,7 +68,7 @@ impl FetchFromHTTP {
>  let payload = SysInfo::as_json()?;
>  info!("Sending POST request to '{answer_url}'.");
>  let answer =
> -proxmox_installer_common::http::post(answer_url, 
> fingerprint.as_deref(), payload)?;
> +proxmox_installer_common::http::post(&answer_url, 
> fingerprint.as_deref(), payload)?;
>  Ok(answer)
>  }
>  
> diff --git a/proxmox-fetch-answer/src/fetch_plugins/partition.rs 
> b/proxmox-fetch-answer/src/fetch_plugins/partition.rs
> index 4472922..f07389b 100644
> --- a/proxmox-fetch-answer/src/fetch_plugins/partition.rs
> +++ b/proxmox-fetch-answer/src/fetch_plugins/partition.rs
> @@ -31,7 +31,7 @@ impl FetchFromPartition {
>  }
>  
>  fn path_exists_logged(file_name: &str, search_path: &str) -> Option 
> {
> -let path = Path::new(search_path).join(&file_name);
> +let path = Path::new(search_path).join(file_name);
>  info!("Testing partition search path {path:?}");
>  match path.try_exists() {
>  Ok(true) => Some(path),
> diff --git a/proxmox-installer-common/src/http.rs 
> b/proxmox-installer-common/src/http.rs
> index 4a5d444..b754ed8 100644
> --- a/proxmox-installer-common/src/http.rs
> +++ b/proxmox-installer-common/src/http.rs
> @@ -15,7 +15,7 @@ use ureq::{Agent, AgentBuilder};
>  /// * `url` - URL to call
>  /// * `fingerprint` - SHA256 cert fingerprint if certificate pinning should 
> be used. Optional.
>  /// * `payload` - The payload to send to the server. Expected to be a JSON 
> formatted string.
> -pub fn post(url: String, fingerprint: Option<&str>, payload: String) -> 
> Result {
> +pub fn post(url: &str, fingerprint: Option<&str>, payload: String) -> 
> Result {
>  let answer;
>  
>  if let Some(fingerprint) = fingerprint {
> @@ -27,7 +27,7 @@ pub fn post(url: String, fingerprint: Option<&str>, 
> payload: String) -> Result  let agent: Agent = 
> AgentBuilder::new().tls_config(Arc::new(tls_config)).build();
>  
>  answer = agent
> -.post(&url)
> +.post(url)
>  .set("Content-Type", "application/json; charset=utf-8")
>  .send_string(&payload)?
>  .into_string()?;
> @@ -47,7 +47,7 @@ pub fn post(url: String, fingerprint: Option<&str>, 
> payload: String) -> Result  .tls_config(Arc::new(tls_config))
>  .build();
>  answer = agent
> -.post(&url)
> +.post(url)
>  .set("Content-Type", "application/json; charset=utf-8")
>  .timeout(std::time::Duration::from_secs(60))
>  .send_string(&payload)?
> diff --git a/proxmox-installer-common/src/options.rs 
> b/proxmox-installer-common/src/options.rs
> index 9f6131b..b209587 100644
> --- a/proxmox-installer-common/src/options.rs
> +++ b/proxmox-installer-common/src/options.rs
> @@ -45,6 +45,10 @@ impl FsType {
>  pub fn is_b

Re: [pve-devel] [PATCH installer 00/14] fix #5536: implement post-(auto-)installation notification mechanism

2024-07-11 Thread Stefan Hanreich
Did a quick smoke test of this series by creating an ISO with an answer
file baked in and checking the response via `nc -l`. Review is inline.
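
For reference, a rough Rust stand-in for the `nc -l` check (bind address and
port are arbitrary, they just have to match the posthook URL in the answer
file):

    use std::io::Read;
    use std::net::TcpListener;
    use std::time::Duration;

    fn main() -> std::io::Result<()> {
        let listener = TcpListener::bind("0.0.0.0:8080")?;
        let (mut stream, peer) = listener.accept()?;
        println!("connection from {peer}");

        // Read until the sender pauses; enough to eyeball the request line,
        // headers and JSON body. No HTTP response is sent back, so the client
        // side will still log an error afterwards, just like with `nc -l`.
        stream.set_read_timeout(Some(Duration::from_secs(1)))?;
        let mut data = Vec::new();
        let mut buf = [0u8; 4096];
        loop {
            match stream.read(&mut buf) {
                Ok(0) => break,
                Ok(n) => data.extend_from_slice(&buf[..n]),
                Err(_) => break, // timeout (or other error): assume we got everything
            }
        }
        println!("{}", String::from_utf8_lossy(&data));
        Ok(())
    }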

Consider this:

Tested-By: Stefan Hanreich 
Reviewed-By: Stefan Hanreich 


On 7/10/24 15:27, Christoph Heiss wrote:
> This implements a mechanism for post-installation "notifications" via a
> POST request [0] when using the auto-installer.
> 
> It's implemented as a separate, small utility to facilitate separation
> of concerns and make the information gathering easier by having it
> isolated in one place.
> 
> Patches #1 through #10 are simply clean-ups, refactors, etc. that were
> done along the way, which do not impact functionality in any way.
> 
> Most interesting here will be patch #12, which adds the actual
> implementation of the post-hook. (Bind-)mounting the installed host
> system is done using the existing `proxmox-chroot` utility, and the HTTP
> POST functionality can fortunately be re-used 1:1 from
> `proxmox-fetch-answer`.
> 
> I've also included an example of what the JSON body (pretty-printed for
> readability) of such a post-installation request would look like below,
> for reference.
> 
> Tested this with both PVE and PBS ISOs; PMG did not (yet) have a
> release with an auto-installation capable ISO. The only product-specific
> code is the version detection in `proxmox-post-hook`, which - since it
> works the same for PVE and PMG - should be no obstacle.
> 
> [0] https://bugzilla.proxmox.com/show_bug.cgi?id=5536
> 
> {
>   "debian-version": "12.5",
>   "product-version": "pve-manager/8.2.2/9355359cd7afbae4",
>   "kernel-version": "proxmox-kernel-6.8.4-2-pve-signed",
>   "boot-type": "bios",
>   "filesystem": "zfs (RAID1)",
>   "fqdn": "host.domain",
>   "machine-id": "f4bf9711783248b7aaffe3ccbca3e3dc",
>   "bootdisk": [
> {
>   "size": 8589934592,
>   "udev-properties": {
> "DEVNAME": "/dev/vda", [..]
>   }
> },
> {
>   "size": 8589934592,
>   "udev-properties": {
> "DEVNAME": "/dev/vdb", [..]
>   }
> }
>   ],
>   "management-nic": {
> "mac": "de:ad:f0:0d:12:34",
> "address": "10.0.0.10/24",
> "udev-properties": {
>   "INTERFACE": "enp6s18", [..]
> }
>   },
>   "ssh-public-host-keys": {
> "ecdsa": "ecdsa-sha2-nistp256 [..] root@host",
> "ed25519": "ssh-ed25519 [..] root@host",
> "rsa": "ssh-rsa [..] root@host",
>   }
> }
> 
> Christoph Heiss (14):
>   chroot: print full anyhow message
>   tree-wide: fix some typos
>   tree-wide: collect hardcoded installer runtime directory strings into
> constant
>   common: simplify filesystem type serializing & Display trait impl
>   common: setup: serialize `target_hd` as string explicitly
>   common: split out installer setup files loading functionality
>   debian: strip unused library dependencies
>   fetch-answer: move http-related code to gated module in
> installer-common
>   tree-wide: convert some more crates to use workspace dependencies
>   auto-installer: tests: replace left/right with got/expected in output
>   auto-installer: answer: add `posthook` section
>   fix #5536: add post-hook utility for sending notifications after
> auto-install
>   fix #5536: post-hook: add some unit tests
>   unconfigured.sh: run proxmox-post-hook after successful auto-install
> 
>  Cargo.toml|  11 +
>  Makefile  |   8 +-
>  debian/control|   1 +
>  debian/install|   1 +
>  debian/rules  |   9 +
>  proxmox-auto-install-assistant/Cargo.toml |  14 +-
>  proxmox-auto-installer/Cargo.toml |  15 +-
>  proxmox-auto-installer/src/answer.rs  |  27 +-
>  .../src/bin/proxmox-auto-installer.rs |  15 +-
>  proxmox-auto-installer/src/sysinfo.rs |  10 +-
>  proxmox-auto-installer/src/utils.rs   |  15 +-
>  proxmox-auto-installer/tests/parse-answer.rs  |  42 +-
>  proxmox-chroot/Cargo.toml |   8 +-
>  proxmox-chroot/src/main.rs|  19 +-
>  proxmox-fetch-answer/Cargo.toml   |  17 +-
>  .../src/fetch_plugins/http.rs | 100 +---
>  .../src/fetch_plugins/partition.rs|  14 +-
>  proxmox-installer-common/Cargo.toml   |  26 +-
>  proxmox-installer-common/src/http.rs  |  94 
>  proxmox-installer-common/src/lib.rs   |   5 +
>  proxmox-installer-common/src/options.rs   | 109 ++--
>  proxmox-installer-common/src/setup.rs | 108 +---
>  proxmox-installer-common/src/utils.rs |   2 +
>  proxmox-post-hook/Cargo.toml  |  19 +
>  proxmox-post-hook/src/main.rs | 498 ++
>  proxmox-tui-installer/Cargo.toml  |   8 +-
>  proxmox-tui-installer/src/setup.rs|   2 +-
>  unconfigured.sh   |   7 +-
>  28 files changed, 862 insertions(+), 34

[pve-devel] Interest in a file manager interface within pve-manager?

2024-07-11 Thread Blythe, Nathan F. - US via pve-devel
--- Begin Message ---
Hello,

We have a need for a non-network based (i.e. hypercall / out-of-band) file 
browser for containers and VMs within the pve-manager web UI. I'd like to be 
able to select a VM or container, click "More", and choose "File manager" and 
then browse the VM or container's file systems and upload or download files 
from my local system, through the web-browser.

For VMs (qemu), the qemu-guest-agent provides the ability to read/write guest 
files, and the pve API exposes it (with strict file size limits) as 
nodes->{node}->qemu->{vmid}->agent->file-*. Presumably, a web UI file manager 
would use that API endpoint, plus maybe an enhancement for directory listing 
and chunked upload/download (to support arbitrary size files).
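
To illustrate the building block that already exists today, here is a rough
sketch of calling that file-read endpoint from Rust over the HTTP API (host,
node, VMID, token and guest path are all placeholders, and the parameter and
response field names are from memory of the API viewer):

    use std::error::Error;

    fn main() -> Result<(), Box<dyn Error>> {
        let resp: serde_json::Value = ureq::get(
            "https://pve.example.com:8006/api2/json/nodes/mynode/qemu/100/agent/file-read",
        )
        .set(
            "Authorization",
            "PVEAPIToken=root@pam!filemgr=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",
        )
        .query("file", "/etc/hostname")
        .call()?
        .into_json()?;

        // The guest agent returns the file content plus a `truncated` flag,
        // which is exactly where the current size limits become visible.
        println!("{}", resp["data"]["content"]);
        Ok(())
    }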

For containers (LXC), the pct application can read/write files by (I think) 
mounting the container root file system on the host and touching it directly. 
But there's no corresponding API endpoint. If there were, it could be used by a 
future web UI file manager.

So to implement this feature, I think there are 5 steps:

  1.  Enhance the qemu-guest-agent and qm to support reading directory 
structure and creating directories.
  2.  Enhance the pve API to expose these new features, analogous to the
existing file-read and file-write endpoints.
  3.  Enhance the pve API to support chunked file read and write, for large 
files. Maybe also requiring an enhancement to the qemu-guest-agent?
  4.  Enhance the pve API to support chunked file read/write, whole file 
read/write, and directory structure/creating directories (through direct 
filesystem interaction on the host).
  5.  Write a JS front-end file manager, invokable from the Other menu, which 
uses these new/enhanced API endpoints to implement a general purpose 
out-of-band file manager.

Is there interest from the proxmox developers for something like this? If we 
started working on some of these items, would there be interest in accepting 
patches? Does my general approach look sound?

(We also investigated using SPICE's folder-sharing capabilities and the 
spice-proxy provided by the host, but that doesn't work with the Windows 
virt-viewer client, only the Linux client. It looks like there was some 
interest in the past in enhancing xterm.js to support xmodem file transfer, but 
it didn't go far.)

Regards,
Nathan Blythe
CACI



--- End Message ---