[pve-devel] [PATCH installer v4 4/4] low-level: install: check for already-existing `rpool` on install

2024-07-16 Thread Christoph Heiss
.. much in the same manner as the detection for LVM works.

zpools can only be renamed by importing them with a new name, so
unfortunately the import-export dance is needed.
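
For reference, the whole dance boils down to something like the following
minimal sketch, using the rename_pool() helper added by this series (the
pool id and new name below are just example values):

    use Proxmox::Sys::ZFS;

    # effectively `zpool import -f <id> <new-name>` followed by
    # `zpool export <new-name>`, addressing the pool by its unique numeric id
    my $pool_id = '9412322616744093413'; # as reported by `zpool import`
    Proxmox::Sys::ZFS::rename_pool($pool_id, "rpool-OLD-$pool_id");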

Signed-off-by: Christoph Heiss 
---
Changes v3 -> v4:
  * rename $response_ok -> $do_rename for clarity in
zfs_ask_existing_zpool_rename()

Changes v2 -> v3:
  * automatically rename existing rpools on auto-installs too

Changes v1 -> v2:
  * use unique pool id for renaming instead of generating random id
ourselves
  * moved renaming functionality to own subroutine in new
Proxmox::Sys::ZFS module

 Proxmox/Install.pm | 33 +
 Proxmox/Sys/ZFS.pm | 20 ++--
 2 files changed, 51 insertions(+), 2 deletions(-)

diff --git a/Proxmox/Install.pm b/Proxmox/Install.pm
index 9efa4f1..19fd88d 100644
--- a/Proxmox/Install.pm
+++ b/Proxmox/Install.pm
@@ -16,6 +16,7 @@ use Proxmox::Install::StorageConfig;
 use Proxmox::Sys::Block qw(get_cached_disks wipe_disk partition_bootable_disk);
 use Proxmox::Sys::Command qw(run_command syscmd);
 use Proxmox::Sys::File qw(file_read_firstline file_write_all);
+use Proxmox::Sys::ZFS;
 use Proxmox::UI;
 
 # TODO: move somewhere better?
@@ -169,9 +170,41 @@ sub btrfs_create {
 syscmd($cmd);
 }
 
+sub zfs_ask_existing_zpool_rename {
+my ($pool_name) = @_;
+
+# At this point, no pools should be imported/active
+my $exported_pools = Proxmox::Sys::ZFS::get_exported_pools();
+
+foreach (@$exported_pools) {
+   next if $_->{name} ne $pool_name || $_->{state} ne 'ONLINE';
+   my $renamed_pool = "$_->{name}-OLD-$_->{id}";
+
+   my $do_rename = Proxmox::Install::Config::get_existing_storage_auto_rename();
+   if (!$do_rename) {
+   $do_rename = Proxmox::UI::prompt(
+   "A ZFS pool named '$_->{name}' (id $_->{id}) already exists on the system.\n\n" .
+   "Do you want to rename the pool to '$renamed_pool' before continuing " .
+   "or cancel the installation?"
+   );
+   }
+
+   # Import the pool using its id, as that is unique and works even if there are
+   # multiple zpools with the same name.
+   if ($do_rename) {
+   Proxmox::Sys::ZFS::rename_pool($_->{id}, $renamed_pool);
+   } else {
+   warn "Canceled installation as requested by user, due to already existing ZFS pool '$pool_name'\n";
+   die "\n"; # causes abort without re-showing an error dialogue
+   }
+}
+}
+
 sub zfs_create_rpool {
 my ($vdev, $pool_name, $root_volume_name) = @_;
 
+zfs_ask_existing_zpool_rename($pool_name);
+
 my $iso_env = Proxmox::Install::ISOEnv::get();
 
 my $zfs_opts = Proxmox::Install::Config::get_zfs_opt();
diff --git a/Proxmox/Sys/ZFS.pm b/Proxmox/Sys/ZFS.pm
index 9232d1a..c5a807e 100644
--- a/Proxmox/Sys/ZFS.pm
+++ b/Proxmox/Sys/ZFS.pm
@@ -3,10 +3,10 @@ package Proxmox::Sys::ZFS;
 use strict;
 use warnings;
 
-use Proxmox::Sys::Command qw(run_command);
+use Proxmox::Sys::Command qw(run_command syscmd);
 
 use base qw(Exporter);
-our @EXPORT_OK = qw(get_exported_pools);
+our @EXPORT_OK = qw(get_exported_pools rename_pool);
 
 # Parses the output of running `zpool import`, which shows all importable
 # ZFS pools.
@@ -90,4 +90,20 @@ sub get_exported_pools {
 return zpool_import_parse_output($fh);
 }
 
+# Renames an importable ZFS pool by importing it with a new name and then
+# exporting it again.
+#
+# Arguments:
+#
+# $poolid - Unique, numeric identifier of the pool to rename
+# $new_name - New name for the pool
+sub rename_pool {
+my ($poolid, $new_name) = @_;
+
+syscmd("zpool import -f $poolid $new_name") == 0 ||
+   die "failed to import zfs pool with id '$poolid' with new name 
'$new_name'\n";
+syscmd("zpool export $new_name") == 0 ||
+   warn "failed to export renamed zfs pool '$new_name'\n";
+}
+
 1;
-- 
2.45.1






[pve-devel] [PATCH installer v4 3/4] install: config: rename option lvm_auto_rename -> existing_storage_auto_rename

2024-07-16 Thread Christoph Heiss
As this is an internal option for the low-level installer anyway, no
real functional changes here.

Signed-off-by: Christoph Heiss 
---
Changes v3 -> v4:
  * rename $response_ok -> $do_rename for clarity in
ask_existing_vg_rename_or_abort()

Changes v2 -> v3:
  * new patch

 Proxmox/Install.pm| 8 
 Proxmox/Install/Config.pm | 6 +++---
 proxmox-auto-installer/src/utils.rs   | 2 +-
 .../tests/resources/parse_answer/disk_match.json  | 2 +-
 .../tests/resources/parse_answer/disk_match_all.json  | 2 +-
 .../tests/resources/parse_answer/disk_match_any.json  | 2 +-
 .../tests/resources/parse_answer/minimal.json | 2 +-
 .../tests/resources/parse_answer/nic_matching.json| 2 +-
 .../tests/resources/parse_answer/specific_nic.json| 2 +-
 .../tests/resources/parse_answer/zfs.json | 2 +-
 proxmox-installer-common/src/setup.rs | 2 +-
 proxmox-tui-installer/src/setup.rs| 2 +-
 12 files changed, 17 insertions(+), 17 deletions(-)

diff --git a/Proxmox/Install.pm b/Proxmox/Install.pm
index c0f8955..9efa4f1 100644
--- a/Proxmox/Install.pm
+++ b/Proxmox/Install.pm
@@ -384,8 +384,8 @@ sub ask_existing_vg_rename_or_abort {
$vg->{new_vgname} = "$vgname-OLD-$short_uid";
 }
 
-my $response_ok = Proxmox::Install::Config::get_lvm_auto_rename();
-if (!$response_ok) {
+my $do_rename = Proxmox::Install::Config::get_existing_storage_auto_rename();
+if (!$do_rename) {
	my $message = "Detected existing '$vgname' Volume Group(s)! Do you want to:\n";
 
for my $vg_uuid (keys %$duplicate_vgs) {
@@ -394,10 +394,10 @@ sub ask_existing_vg_rename_or_abort {
}
$message .= "or cancel the installation?";
 
-   $response_ok = Proxmox::UI::prompt($message);
+   $do_rename = Proxmox::UI::prompt($message);
 }
 
-if ($response_ok) {
+if ($do_rename) {
for my $vg_uuid (keys %$duplicate_vgs) {
my $vg = $duplicate_vgs->{$vg_uuid};
my $new_vgname = $vg->{new_vgname};
diff --git a/Proxmox/Install/Config.pm b/Proxmox/Install/Config.pm
index ecd8a74..e449039 100644
--- a/Proxmox/Install/Config.pm
+++ b/Proxmox/Install/Config.pm
@@ -82,7 +82,7 @@ my sub init_cfg {
# TODO: single disk selection config
target_hd => undef,
disk_selection => {},
-   lvm_auto_rename => 0,
+   existing_storage_auto_rename => 0,
 
# locale
country => $country,
@@ -244,7 +244,7 @@ sub get_dns { return get('dns'); }
 sub set_target_cmdline { set_key('target_cmdline', $_[0]); }
 sub get_target_cmdline { return get('target_cmdline'); }
 
-sub set_lvm_auto_rename { set_key('lvm_auto_rename', $_[0]); }
-sub get_lvm_auto_rename { return get('lvm_auto_rename'); }
+sub set_existing_storage_auto_rename { set_key('existing_storage_auto_rename', $_[0]); }
+sub get_existing_storage_auto_rename { return get('existing_storage_auto_rename'); }
 
 1;
diff --git a/proxmox-auto-installer/src/utils.rs b/proxmox-auto-installer/src/utils.rs
index 202ad41..cc47f5f 100644
--- a/proxmox-auto-installer/src/utils.rs
+++ b/proxmox-auto-installer/src/utils.rs
@@ -328,7 +328,7 @@ pub fn parse_answer(
 zfs_opts: None,
 target_hd: None,
 disk_selection: BTreeMap::new(),
-lvm_auto_rename: 1,
+existing_storage_auto_rename: 1,
 
 country: answer.global.country.clone(),
 timezone: answer.global.timezone.clone(),
diff --git a/proxmox-auto-installer/tests/resources/parse_answer/disk_match.json b/proxmox-auto-installer/tests/resources/parse_answer/disk_match.json
index 3a117b6..2618fd4 100644
--- a/proxmox-auto-installer/tests/resources/parse_answer/disk_match.json
+++ b/proxmox-auto-installer/tests/resources/parse_answer/disk_match.json
@@ -10,7 +10,7 @@
"8": "8",
"9": "9"
   },
-  "lvm_auto_rename": 1,
+  "existing_storage_auto_rename": 1,
   "filesys": "zfs (RAID10)",
   "gateway": "192.168.1.1",
   "hdsize": 223.57088470458984,
diff --git a/proxmox-auto-installer/tests/resources/parse_answer/disk_match_all.json b/proxmox-auto-installer/tests/resources/parse_answer/disk_match_all.json
index 5325fc3..6cfb96a 100644
--- a/proxmox-auto-installer/tests/resources/parse_answer/disk_match_all.json
+++ b/proxmox-auto-installer/tests/resources/parse_answer/disk_match_all.json
@@ -7,7 +7,7 @@
   "disk_selection": {
"9": "9"
   },
-  "lvm_auto_rename": 1,
+  "existing_storage_auto_rename": 1,
   "filesys": "zfs (RAID0)",
   "gateway": "192.168.1.1",
   "hdsize": 223.57088470458984,
diff --git a/proxmox-auto-installer/tests/resources/parse_answer/disk_match_any.json b/proxmox-auto-installer/tests/resources/parse_answer/disk_match_any.json
index 18e22d1..1921b34 100644
--- a/proxmox-auto-installer/tests/resources/parse_answer/disk_match_any.json
+++ b/proxmox-auto-installer/tests/resou

[pve-devel] [PATCH installer v4 0/4] add check/rename for already-existing ZFS rpool

2024-07-16 Thread Christoph Heiss
Pretty straightforward overall: implements a check for an existing
`rpool` on the system and asks the user whether they would like to rename
it, much in the same way as it already works for VGs.

Without this, the installer would silently create a second (and thus
conflicting) `rpool` and cause a boot failure after the installation,
since it does not know which pool to import exactly.
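
A rough sketch of what the check amounts to, using the helpers introduced in
patches #1 and #4 (not the literal installer code):

    use Proxmox::Sys::ZFS qw(get_exported_pools);

    # look for an exported pool that would clash with the `rpool` we are
    # about to create
    my ($existing) = grep { $_->{name} eq 'rpool' && $_->{state} eq 'ONLINE' }
        @{ get_exported_pools() };
    if ($existing) {
        # prompt the user, or rename automatically on auto-installs
        Proxmox::Sys::ZFS::rename_pool($existing->{id}, "rpool-OLD-$existing->{id}");
    }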

v1: https://lists.proxmox.com/pipermail/pve-devel/2024-May/063874.html
v2: https://lists.proxmox.com/pipermail/pve-devel/2024-July/064619.html
v3: https://lists.proxmox.com/pipermail/pve-devel/2024-July/064635.html

Notable changes v3 -> v4:
  * rebased on latest master
  * rename $response_ok -> $do_rename for clarity, as suggested by Aaron

Notable changes v2 -> v3:
  * make low-level option lvm_auto_rename more generic
  * skip rename prompt in auto-install mode

Notable changes v1 -> v2:
  * incorporated Aaron's suggestions from v1
  * rewrote tests to use a pre-defined input instead
  * moved pool renaming to own subroutine
  * documented all new subroutines
  * split out tests into own patch

Christoph Heiss (4):
  proxmox: add zfs module for retrieving importable zpool info
  test: add test cases for new zfs module
  install: config: rename option lvm_auto_rename ->
existing_storage_auto_rename
  low-level: install: check for already-existing `rpool` on install

 Proxmox/Install.pm|  41 ++-
 Proxmox/Install/Config.pm |   6 +-
 Proxmox/Makefile  |   1 +
 Proxmox/Sys/ZFS.pm| 109 ++
 proxmox-auto-installer/src/utils.rs   |   2 +-
 .../resources/parse_answer/disk_match.json|   2 +-
 .../parse_answer/disk_match_all.json  |   2 +-
 .../parse_answer/disk_match_any.json  |   2 +-
 .../tests/resources/parse_answer/minimal.json |   2 +-
 .../resources/parse_answer/nic_matching.json  |   2 +-
 .../resources/parse_answer/specific_nic.json  |   2 +-
 .../tests/resources/parse_answer/zfs.json |   2 +-
 proxmox-installer-common/src/setup.rs |   2 +-
 proxmox-tui-installer/src/setup.rs|   2 +-
 test/Makefile |   7 +-
 test/zfs-get-pool-list.pl |  57 +
 16 files changed, 223 insertions(+), 18 deletions(-)
 create mode 100644 Proxmox/Sys/ZFS.pm
 create mode 100755 test/zfs-get-pool-list.pl

-- 
2.44.0






[pve-devel] [PATCH installer v4 1/4] proxmox: add zfs module for retrieving importable zpool info

2024-07-16 Thread Christoph Heiss
Signed-off-by: Christoph Heiss 
---
Changes v3 -> v4:
  * no changes

Changes v2 -> v3:
  * no changes

Changes v1 -> v2:
  * incorporated Aaron's suggestion to use anonymous arrays instead
  * added documentation
  * renamed parsing function parse_pool_list -> zpool_import_parse_output
  * split out tests into separate patch

 Proxmox/Makefile   |  1 +
 Proxmox/Sys/ZFS.pm | 93 ++
 2 files changed, 94 insertions(+)
 create mode 100644 Proxmox/Sys/ZFS.pm

diff --git a/Proxmox/Makefile b/Proxmox/Makefile
index 9561d9b..035626b 100644
--- a/Proxmox/Makefile
+++ b/Proxmox/Makefile
@@ -17,6 +17,7 @@ PERL_MODULES=\
 Sys/File.pm \
 Sys/Net.pm \
 Sys/Udev.pm \
+Sys/ZFS.pm \
 UI.pm \
 UI/Base.pm \
 UI/Gtk3.pm \
diff --git a/Proxmox/Sys/ZFS.pm b/Proxmox/Sys/ZFS.pm
new file mode 100644
index 000..9232d1a
--- /dev/null
+++ b/Proxmox/Sys/ZFS.pm
@@ -0,0 +1,93 @@
+package Proxmox::Sys::ZFS;
+
+use strict;
+use warnings;
+
+use Proxmox::Sys::Command qw(run_command);
+
+use base qw(Exporter);
+our @EXPORT_OK = qw(get_exported_pools);
+
+# Parses the output of running `zpool import`, which shows all importable
+# ZFS pools.
+# Unfortunately, `zpool` does not support JSON or any other machine-readable
+# format for output, so we have to do it the hard way.
+#
+# Example output of `zpool import` looks like this:
+#
+#   root@proxmox:/# zpool import
+#      pool: testpool
+#        id: 4958685680270539150
+#     state: ONLINE
+#    action: The pool can be imported using its name or numeric identifier.
+#    config:
+#
+#           testpool    ONLINE
+#             vdc       ONLINE
+#             vdd       ONLINE
+#
+#      pool: rpool
+#        id: 9412322616744093413
+#     state: ONLINE
+#    status: The pool was last accessed by another system.
+#    action: The pool can be imported using its name or numeric identifier and
+#            the '-f' flag.
+#       see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-EY
+#    config:
+#
+#           rpool       ONLINE
+#             mirror-0  ONLINE
+#               vda3    ONLINE
+#               vdb3    ONLINE
+#
+sub zpool_import_parse_output {
+my ($fh) = @_;
+
+my $pools = []; # all found pools
+my $pool = {}; # last found pool in output
+
+while (my $line = <$fh>) {
+   # first, if we find the start of a new pool, add it to the list with
+   # its name
+   if ($line =~ /^\s+pool: (.+)$/) {
+   # push the previous parsed pool to the result list
+   push @$pools, $pool if %$pool;
+   $pool = { name => $1 };
+   next;
+   }
+
+   # ignore any (garbage) output before the actual list, just in case
+   next if !%$pool;
+
+   # add any possibly-useful attribute to the last (aka. current) pool
+   if ($line =~ /^\s*(id|state|status|action): (.+)$/) {
+   chomp($pool->{$1} = $2);
+   next;
+   }
+}
+
+# add the final parsed pool to the list
+push @$pools, $pool if %$pool;
+return $pools;
+}
+
+# Returns an array of all importable ZFS pools on the system.
+# Each entry is a hash of the format:
+#
+# {
+# name => '',
+# id => ,
+# /* see also zpool_state_to_name() in lib/libzfs/libzfs_pool.c */
+# state => 'OFFLINE' | 'REMOVED' | 'FAULTED' | 'SPLIT' | 'UNAVAIL' \
+#  | 'FAULTED' | 'DEGRADED' | 'ONLINE',
+# status => '', optional
+# action => '', optional
+# }
+sub get_exported_pools {
+my $raw = run_command(['zpool', 'import']);
+open (my $fh, '<', \$raw) or die 'failed to open in-memory stream';
+
+return zpool_import_parse_output($fh);
+}
+
+1;
-- 
2.45.1






[pve-devel] [PATCH installer v4 2/4] test: add test cases for new zfs module

2024-07-16 Thread Christoph Heiss
Signed-off-by: Christoph Heiss 
---
Changes v3 -> v4:
  * no changes

Changes v2 -> v3:
  * no changes

Changes v1 -> v2:
  * new patch, split out from patch #1
  * rewrote tests to use a pre-defined input instead, thus being able
to enable the tests unconditionally

 test/Makefile |  7 -
 test/zfs-get-pool-list.pl | 57 +++
 2 files changed, 63 insertions(+), 1 deletion(-)
 create mode 100755 test/zfs-get-pool-list.pl

diff --git a/test/Makefile b/test/Makefile
index 99bf14e..c473af8 100644
--- a/test/Makefile
+++ b/test/Makefile
@@ -3,12 +3,13 @@ all:
 export PERLLIB=..
 
 .PHONY: check
-check: test-zfs-arc-max test-run-command test-parse-fqdn test-ui2-stdio
+check: test-zfs-arc-max test-run-command test-parse-fqdn test-ui2-stdio test-zfs-get-pool-list
 
 .PHONY: test-zfs-arc-max
 test-zfs-arc-max:
./zfs-arc-max.pl
 
+.PHONY: test-run-command
 test-run-command:
./run-command.pl
 
@@ -19,3 +20,7 @@ test-parse-fqdn:
 .PHONY: test-ui2-stdio
 test-ui2-stdio:
./ui2-stdio.pl
+
+.PHONY: test-zfs-get-pool-list
+test-zfs-get-pool-list:
+   ./zfs-get-pool-list.pl
diff --git a/test/zfs-get-pool-list.pl b/test/zfs-get-pool-list.pl
new file mode 100755
index 000..34e6b20
--- /dev/null
+++ b/test/zfs-get-pool-list.pl
@@ -0,0 +1,57 @@
+#!/usr/bin/perl
+
+use strict;
+use warnings;
+
+use File::Temp;
+use Test::More tests => 8;
+
+use Proxmox::Sys::ZFS;
+use Proxmox::UI;
+
+my $log_file = File::Temp->new();
+Proxmox::Log::init($log_file->filename);
+
+Proxmox::UI::init_stdio();
+
+our $ZPOOL_IMPORT_TEST_OUTPUT = <<EOT;
+[... example `zpool import` output for the two pools below, mangled by the list archive ...]
+EOT
+
+my $pools = {
+    testpool => { id => '4958685680270539150', state => 'ONLINE' },
+    rpool => { id => '9412322616744093413', state => 'FAULTED' },
+};
+
+open(my $fh, '<', \$ZPOOL_IMPORT_TEST_OUTPUT);
+my $result = Proxmox::Sys::ZFS::zpool_import_parse_output($fh);
+while (my ($name, $info) = each %$pools) {
+my ($p) = grep { $_->{name} eq $name } @$result;
+ok(defined($p), "pool $name was found");
+is($p->{id}, $info->{id}, "pool $name has correct id");
+is($p->{state}, $info->{state}, "pool $name has correct state");
+like($p->{action}, qr/^The pool can be imported using its name or numeric identifier/,
+   "pool $name can be imported");
+}
-- 
2.45.1






Re: [pve-devel] [PATCH installer v3 0/4] add check/rename for already-existing ZFS rpool

2024-07-16 Thread Christoph Heiss
v4 out: https://lists.proxmox.com/pipermail/pve-devel/2024-July/064675.html

On Thu, Jul 11, 2024 at 01:57:52PM GMT, Christoph Heiss wrote:
> Pretty straight forward overall, implements a check for an existing
> `rpool` on the system and ask the user whether they would like to rename
> it, much in the same way as it works for VGs already.
>
> Without this, the installer would silently create a second (and thus
> conflicting) `rpool` and cause a boot failure after the installation,
> since it does not know which pool to import exactly.
>
> v1: https://lists.proxmox.com/pipermail/pve-devel/2024-May/063874.html
> v2: https://lists.proxmox.com/pipermail/pve-devel/2024-July/064619.html
>
> Notable changes v2 -> v3:
>   * make low-level option lvm_auto_rename more generic
>   * skip rename prompt in auto-install mode
>
> Notable changes v1 -> v2:
>   * incorporated Aarons suggestions from v1
>   * rewrote tests to use a pre-defined input instead
>   * moved pool renaming to own subroutine
>   * documented all new subroutines
>   * split out tests into own patch
>
> Christoph Heiss (4):
>   test: add test cases for new zfs module
>   low-level: install: check for already-existing `rpool` on install
>   install: config: rename option lvm_auto_rename ->
> existing_storage_auto_rename
>   low-level: install: automatically rename existing rpools on
> auto-installs
>
>  Proxmox/Install.pm| 35 +++-
>  Proxmox/Install/Config.pm |  6 +-
>  Proxmox/Sys/ZFS.pm| 20 ++-
>  proxmox-auto-installer/src/utils.rs   |  2 +-
>  .../resources/parse_answer/disk_match.json|  2 +-
>  .../parse_answer/disk_match_all.json  |  2 +-
>  .../parse_answer/disk_match_any.json  |  2 +-
>  .../tests/resources/parse_answer/minimal.json |  2 +-
>  .../resources/parse_answer/nic_matching.json  |  2 +-
>  .../resources/parse_answer/specific_nic.json  |  2 +-
>  .../tests/resources/parse_answer/zfs.json |  2 +-
>  proxmox-installer-common/src/setup.rs |  2 +-
>  proxmox-tui-installer/src/setup.rs|  2 +-
>  test/Makefile |  7 ++-
>  test/zfs-get-pool-list.pl | 57 +++
>  15 files changed, 128 insertions(+), 17 deletions(-)
>  create mode 100755 test/zfs-get-pool-list.pl
>
> --
> 2.44.0
>
>
>
> ___
> pve-devel mailing list
> pve-devel@lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
>
>





Re: [pve-devel] [PATCH proxmox-ve-rs 09/21] sdn: add name types

2024-07-16 Thread Stefan Hanreich


On 6/27/24 12:56, Gabriel Goller wrote:
> On 26.06.2024 14:15, Stefan Hanreich wrote:
>> diff --git a/proxmox-ve-config/src/sdn/mod.rs
>> b/proxmox-ve-config/src/sdn/mod.rs
>> new file mode 100644
>> index 000..4e7c525
>> --- /dev/null
>> +++ b/proxmox-ve-config/src/sdn/mod.rs
>> @@ -0,0 +1,240 @@
>> +use std::{error::Error, fmt::Display, str::FromStr};
>> +
>> +use serde_with::DeserializeFromStr;
>> +
>> +use crate::firewall::types::Cidr;
>> +
>> +#[derive(Copy, Clone, Debug, PartialEq, Eq, Hash, PartialOrd, Ord)]
>> +pub enum SdnNameError {
>> +    Empty,
>> +    TooLong,
>> +    InvalidSymbols,
>> +    InvalidSubnetCidr,
>> +    InvalidSubnetFormat,
>> +}
>> +
>> +impl Error for SdnNameError {}
>> +
>> +impl Display for SdnNameError {
>> +    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
>> +    f.write_str(match self {
>> +    SdnNameError::TooLong => "name too long",
>> +    SdnNameError::InvalidSymbols => "invalid symbols in name",
>> +    SdnNameError::InvalidSubnetCidr => "invalid cidr in name",
>> +    SdnNameError::InvalidSubnetFormat => "invalid format for
>> subnet name",
>> +    SdnNameError::Empty => "name is empty",
>> +    })
>> +    }
>> +}
>> +
> 
> Hmm, maybe we should pull in the `thiserror` crate here...
> There are a few error-enums that could benefit from it: SdnNameError,
> IpamError, SdnConfigError, IpRangeError.

Thought about this as well, I guess we depend on it in quite a few
crates already - so pulling it in here wouldn't be too bad.




Re: [pve-devel] [PATCH proxmox-ve-rs 12/21] sdn: add config module

2024-07-16 Thread Stefan Hanreich
On 6/27/24 12:54, Gabriel Goller wrote:
> On 26.06.2024 14:15, Stefan Hanreich wrote:
>> diff --git a/proxmox-ve-config/src/sdn/config.rs
>> b/proxmox-ve-config/src/sdn/config.rs
>> new file mode 100644
>> index 000..8454adf
>> --- /dev/null
>> +++ b/proxmox-ve-config/src/sdn/config.rs
>> @@ -0,0 +1,571 @@
>> [snip]
>> +impl Display for DhcpType {
>> +    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
>> +    f.write_str(match self {
>> +    DhcpType::Dnsmasq => "dnsmasq",
>> +    })
>> +    }
>> +}
>> +
>> +/// struct for deserializing a zone entry of the SDN running config
> 
> I think we usually begin doc-strings with a capital letter :)
> 
>> +#[derive(Clone, Debug, Deserialize, PartialEq, Eq, Hash, PartialOrd,
>> Ord)]
>> +pub struct ZoneRunningConfig {
>> +    #[serde(rename = "type")]
>> +    ty: ZoneType,
>> +    dhcp: DhcpType,
>> +}
>> +
>> +/// struct for deserializing the zones of the SDN running config
>> +#[derive(Clone, Debug, Deserialize, PartialEq, Eq, Default)]
>> +pub struct ZonesRunningConfig {
>> +    ids: HashMap,
>> +}
>> +
>> +/// represents the dhcp-range property string used in the SDN
>> configuration
>> +#[derive(Clone, Debug, Deserialize, PartialEq, Eq, Hash, PartialOrd,
>> Ord)]
>> +pub struct DhcpRange {
>> +    #[serde(rename = "start-address")]
>> +    start: IpAddr,
>> +    #[serde(rename = "end-address")]
>> +    end: IpAddr,
>> +}
>> +
>> +impl ApiType for DhcpRange {
>> +    const API_SCHEMA: proxmox_schema::Schema = ObjectSchema::new(
>> +    "DHCP range",
>> +    &[
>> +    (
>> +    "end-address",
>> +    false,
>> +    &StringSchema::new("start address of DHCP
>> range").schema(),
> 
> Shouldn't this be "end address..." or is this intended?
> Same below.

Good catch, seems like i messed up when copy-pasting.

>> +    ),
>> +    (
>> +    "start-address",
>> +    false,
>> +    &StringSchema::new("end address of DHCP
>> range").schema(),
>> +    ),
>> +    ],
>> +    )
>> +    .schema();
>> +}
>> +
> 
> 
> ___
> pve-devel mailing list
> pve-devel@lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
> 
> 




[pve-devel] [PATCH manager] www: utils: fix inconsistency in host cpu usage display in search view

2024-07-16 Thread Christoph Heiss
Between the number of CPUs and the actual label, a space was missing -
resulting in an inconsistency vs. the "CPU usage" column.

Also, fix a rather nonsensical check for `maxcpu` above - noticed that
while comparing the implementation to that of Proxmox.Utils.render_cpu().

Signed-off-by: Christoph Heiss 
---
 www/manager6/Utils.js | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/www/manager6/Utils.js b/www/manager6/Utils.js
index f5608944d..6a0ecc98f 100644
--- a/www/manager6/Utils.js
+++ b/www/manager6/Utils.js
@@ -1073,13 +1073,14 @@ Ext.define('PVE.Utils', {
}
var maxcpu = node.data.maxcpu || 1;
 
-   if (!Ext.isNumeric(maxcpu) && (maxcpu >= 1)) {
+   if (!Ext.isNumeric(maxcpu) || maxcpu < 1) {
return '';
}
 
var per = (record.data.cpu/maxcpu) * record.data.maxcpu * 100;
+   const cpu_label = maxcpu > 1 ? 'CPUs' : 'CPU';
 
-   return per.toFixed(1) + '% of ' + maxcpu.toString() + (maxcpu > 1 ? 'CPUs' : 'CPU');
+   return `${per.toFixed(1)}% of ${maxcpu} ${cpu_label}`;
 },
 
 render_bandwidth: function(value) {
-- 
2.45.1






Re: [pve-devel] [RFC firewall/proxmox{-ve-rs, -firewall, -perl-rs} 00/21] autogenerate ipsets for sdn objects

2024-07-16 Thread Stefan Hanreich

On 6/28/24 15:46, Gabriel Goller wrote:
> Already talked with Stefan offlist, but some major things I noted when
> testing:
>  * It would be cool to have the generated IPSets visible in the IPSet
>    menu under Firewall (Datacenter). We could add a checkmark to hide
>    them (as there can be quite many) and make them read-only.

As already discussed, this might be a bit tricky to do read-only, since
we want to be able to override those IPSets (as is the case with
management, ipfilter, ..). It might make more sense to just additionally
display the IP sets and make them editable as any other. That way you
can easily append / delete IP addresses. Maybe give an indicator if this
is an auto-generated IPSet or an overridden one in the UI? Maybe I'll
make it a separate patch series that also implements this for the other
auto-generated IPsets.

>  * Zones can be restricted to specific Nodes, but we generate the
>    IPSets on every Node for all Zones. This means some IPSets are
>    useless and we could avoid generating them in the first place.

Will try and add this.

> 
> Otherwise the IPSet generation works fine. The algorithm for generating
> iptables ipset ranges also works perfectly!
> 

Thanks for the review!




Re: [pve-devel] [PATCH kernel 2/2] cherry-pick potential fix for NULL pointer deref with AMD Arcturus GPU during boot

2024-07-16 Thread Alexander Zeidler
> On 10.07.2024 11:37 GMT Fiona Ebner  wrote:
> The issue was reported in the enterprise support and is handled by
> Alexander Zeidler. It has the following trace [0] and causes an issue
> with the networking down the line, because 'udevadm settle' would time
> out. The customer reported that mainline kernel 6.9.3 booted fine.
> Looking at the new commits, this one stood out, as it heavily modifies
> the arcturus_get_power_limit() function. While not tagged for stable,
> it seems straightforward enough and has a good chance to fix the
> issue.
This commit fixes the problem for the customer after he has booted the
test kernel provided.





[pve-devel] cloudinit: RFC proposal for unwanted and unexpected regeneration of instance-id

2024-07-16 Thread MAbeeTT via pve-devel
--- Begin Message ---
Hello all,
I am Matias from Spain, raised in Argentina where I met PVE from
version 3 when trying to find an opensource KVM and container solution
for an internal lab.

After backing up on Proxmox VE 7 and restoring on PVE 8 [8.2.4] in my
personal lab, I got new instance-id hashes for the restored VMs.
Searching the source code, I see the root cause is the cloudinit commit
"pass through hostname via fqdn field" [0].

In certain conditions with the change in the commit without user
intervention in the VM a new key fqdn is created for the userdata, the
userdata info feeds the hash[1][2] which is in fact the value for the
key instance-id[3] of the meta-data file.

With a new instance-id the cloud-init agent in the VM takes the
"per-instance" configuration and actions, instead of the "per-boot"
configuration[4].
This is a problem not limited to new ssh keys, because users could
generate VM templates with specific actions to be triggered only with
a new VM/instance.

For future releases I propose using only explicit, user-set cloudinit
options (name, sshkeys, cipassword) - explicit values only, ignoring
default values.
That way, in case of future changes like the referred commit, there will
be no new instance-id, as the user did not explicitly generate new
cloudinit input - no new instance, no surprises for the VM
administrator.

I am far from being a Perl developer, but I can put in my best effort
during my spare time.
Anyway, I would like to know what you think, since what I am proposing
changes the current behaviour of PVE cloudinit - maybe these changes
could be part of PVE 9?

Thanks for your attention,

Regards,

Matias Pecchia

[0]: https://git.proxmox.com/?p=qemu-server.git;a=commitdiff;h=3e546c5ada47da8434bb58d27a3aa7d9823e7fa4
[1]: https://git.proxmox.com/?p=qemu-server.git;a=blob;f=PVE/QemuServer/Cloudinit.pm;h=abc6b1421b38c67f3de46ea075d5f8ac2fe599ef;hb=1c5001c2e7f8b73cdcf192d23714985eaddc17ed#l497
[2]: https://git.proxmox.com/?p=qemu-server.git;a=blob;f=PVE/QemuServer/Cloudinit.pm;h=abc6b1421b38c67f3de46ea075d5f8ac2fe599ef;hb=1c5001c2e7f8b73cdcf192d23714985eaddc17ed#l481
[3]: https://git.proxmox.com/?p=qemu-server.git;a=blob;f=PVE/QemuServer/Cloudinit.pm;h=abc6b1421b38c67f3de46ea075d5f8ac2fe599ef;hb=1c5001c2e7f8b73cdcf192d23714985eaddc17ed#l476
[4]: https://cloudinit.readthedocs.io/en/latest/explanation/boot.html#first-boot-determination


-- 
 .::MAbeeTT::.

 mabeett [at] gmail [ dot] com

--- End Message ---


Re: [pve-devel] cloudinit: RFC proposal for unwanted and unexpected regeneration of instance-id

2024-07-16 Thread Mira Limbeck
Hi Matias,

Thank you for providing this detailed description of the issue!

We have an open issue in our bug tracker [0]. If it's alright with you
I'd add your text as-is to the bug tracker as a comment for additional
information/reasoning on why that change would be needed.

Feel free to add yourself to the CC list if you want to be notified on
updates.

> I am far away from being a Perl  developer, but I can put my best
> effort during my spare time.
> Anyway I would like to know what you think since what I am proposing
> changes the current behaviour of PVE cloudinit, maybe these changes
> could be part of PVE 9?

I already started working on an implementation for a fixed instance-id.
I can't give an ETA when the first draft will be sent to the pve-devel
list though.



[0] https://bugzilla.proxmox.com/show_bug.cgi?id=1739





Re: [pve-devel] cloudinit: RFC proposal for unwanted and unexpected regeneration of instance-id

2024-07-16 Thread MAbeeTT via pve-devel
--- Begin Message ---
On Tue, Jul 16, 2024 at 4:49 PM Mira Limbeck  wrote:
>
> Hi Matias,
>
> Thank you for providing this detailed description of the issue!

Hello Mira
thanks for your quick response!
From my side I offer all my available energy in order to help.


> We have an open issue in our bug tracker [0]. If it's alright with you
> I'd add your text as-is to the bug tracker as a comment for additional
> information/reasoning on why that change would be needed.

Sure, feel free to edit in order to clarify.
Just one clarification:
The cloud-init documentation from my first message clearly states[0]:

> >  Alternatively, the filesystem has been attached to a new instance, and 
> > this is the instance’s first boot.
> > The most obvious case where this happens is when an instance is launched 
> > from an image captured from a launched instance.
> > By default, cloud-init attempts to determine which case it is running in by 
> > checking the instance ID in the cache against the instance ID it determines 
> > at runtime.
> > If they do not match, then this is an instance’s first boot;

So it is not a matter of a particular distro/OS version/release. It is
«a feature» of the upstream cloud-init implementation.

My corner case is just a bit different, let's say "Do not update the
instance-id without new user-provided VM config [the cloudinit-related subset]".
Anyway, your work could be an answer for my scenario.

Let me know if this is clear enough.

>
> Feel free to add yourself to the CC list if you want to be notified on
> updates.

Done, thanks

>
> > I am far away from being a Perl  developer, but I can put my best
> > effort during my spare time.
> > Anyway I would like to know what you think since what I am proposing
> > changes the current behaviour of PVE cloudinit, maybe these changes
> > could be part of PVE 9?
>
> I already started working on an implementation for a fixed instance-id.
> I can't give an ETA when the first draft will be sent to the pve-devel
> list though.

Great, ping me if you need feedback.

kind regards,
mts


[0] https://cloudinit.readthedocs.io/en/latest/explanation/boot.html#first-boot-determination
--
 .::MAbeeTT::.

 mabeett [at] gmail [ dot] com

--- End Message ---


[pve-devel] applied-series: [PATCH installer v4 0/4] add check/rename for already-existing ZFS rpool

2024-07-16 Thread Thomas Lamprecht
On 16/07/2024 at 10:18, Christoph Heiss wrote:
> Pretty straight forward overall, implements a check for an existing
> `rpool` on the system and ask the user whether they would like to rename
> it, much in the same way as it works for VGs already.
> 
> Without this, the installer would silently create a second (and thus
> conflicting) `rpool` and cause a boot failure after the installation,
> since it does not know which pool to import exactly.
> 
> v1: https://lists.proxmox.com/pipermail/pve-devel/2024-May/063874.html
> v2: https://lists.proxmox.com/pipermail/pve-devel/2024-July/064619.html
> v3: https://lists.proxmox.com/pipermail/pve-devel/2024-July/064635.html
> 
> Notable changes v3 -> v4:
>   * rebased on latest master
>   * rename $response_ok -> $do_rename for clarity, as suggested by Aaron
> 
> Notable changes v2 -> v3:
>   * make low-level option lvm_auto_rename more generic
>   * skip rename prompt in auto-install mode
> 
> Notable changes v1 -> v2:
>   * incorporated Aarons suggestions from v1
>   * rewrote tests to use a pre-defined input instead
>   * moved pool renaming to own subroutine
>   * documented all new subroutines
>   * split out tests into own patch
> 
> Christoph Heiss (4):
>   proxmox: add zfs module for retrieving importable zpool info
>   test: add test cases for new zfs module
>   install: config: rename option lvm_auto_rename ->
> existing_storage_auto_rename
>   low-level: install: check for already-existing `rpool` on install
> 
>  Proxmox/Install.pm|  41 ++-
>  Proxmox/Install/Config.pm |   6 +-
>  Proxmox/Makefile  |   1 +
>  Proxmox/Sys/ZFS.pm| 109 ++
>  proxmox-auto-installer/src/utils.rs   |   2 +-
>  .../resources/parse_answer/disk_match.json|   2 +-
>  .../parse_answer/disk_match_all.json  |   2 +-
>  .../parse_answer/disk_match_any.json  |   2 +-
>  .../tests/resources/parse_answer/minimal.json |   2 +-
>  .../resources/parse_answer/nic_matching.json  |   2 +-
>  .../resources/parse_answer/specific_nic.json  |   2 +-
>  .../tests/resources/parse_answer/zfs.json |   2 +-
>  proxmox-installer-common/src/setup.rs |   2 +-
>  proxmox-tui-installer/src/setup.rs|   2 +-
>  test/Makefile |   7 +-
>  test/zfs-get-pool-list.pl |  57 +
>  16 files changed, 223 insertions(+), 18 deletions(-)
>  create mode 100644 Proxmox/Sys/ZFS.pm
>  create mode 100755 test/zfs-get-pool-list.pl
> 


applied series, thanks!

But I took the liberty of squashing the second patch, adding the tests, into
the first one; while splitting out preparatory work can sometimes be OK, I'd
not split it up too much. I also mentioned what this preparatory work is
planned to be used for, as such context can be quite helpful when checking
out single commits that don't do anything on their own.





Re: [pve-devel] [PATCH proxmox-ve-rs 01/21] debian: add files for packaging

2024-07-16 Thread Thomas Lamprecht
On 27/06/2024 at 12:41, Gabriel Goller wrote:
> On 26.06.2024 14:15, Stefan Hanreich wrote:
>> Since we now have a standalone repository for Proxmox VE related
>> crates, add the required files for packaging the crates contained in
>> this repository.
> 
> I know we don't really do this, but could we add a README.rst file here?
> Maybe with a small outline on what this repo contains, who uses it etc.
> 
> 

that'd be fine by me, but please use a README.md, i.e. markdown.





Re: [pve-devel] [PATCH manager] www: utils: fix inconsistency in host cpu usage display in search view

2024-07-16 Thread Thomas Lamprecht
On 16/07/2024 at 11:31, Christoph Heiss wrote:
> Between the number of CPUs and the actual label, a space was missing -
> resulting in an inconsistency vs. the "CPU usage" column.
> 
> Also, fix a rather nonsensical check for `maxcpu` above - noticed that
> while comparing the implementation to that of Proxmox.Utils.render_cpu().

can we split this in a different patch? it's rather unrelated.

Also, I think the error here was the missing parentheses, i.e., the
following minimal change would make the check correct as well:

if (!(Ext.isNumeric(maxcpu) && maxcpu >= 1)) {

But I still like yours more, just wanted to point out that this was
probably a simple typo or an incomplete move from one variant to
the other, not outright bogus in intent.

> 
> Signed-off-by: Christoph Heiss 
> ---
>  www/manager6/Utils.js | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)
> 
> diff --git a/www/manager6/Utils.js b/www/manager6/Utils.js
> index f5608944d..6a0ecc98f 100644
> --- a/www/manager6/Utils.js
> +++ b/www/manager6/Utils.js
> @@ -1073,13 +1073,14 @@ Ext.define('PVE.Utils', {
>   }
>   var maxcpu = node.data.maxcpu || 1;
>  
> - if (!Ext.isNumeric(maxcpu) && (maxcpu >= 1)) {
> + if (!Ext.isNumeric(maxcpu) || maxcpu < 1) {
>   return '';
>   }
>  
>   var per = (record.data.cpu/maxcpu) * record.data.maxcpu * 100;
> + const cpu_label = maxcpu > 1 ? 'CPUs' : 'CPU';
>  
> - return per.toFixed(1) + '% of ' + maxcpu.toString() + (maxcpu > 1 ? 
> 'CPUs' : 'CPU');
> + return `${per.toFixed(1)}% of ${maxcpu} ${cpu_label}`;
>  },
>  
>  render_bandwidth: function(value) {






Re: [pve-devel] [PATCH installer 00/14] fix #5536: implement post-(auto-)installation notification mechanism

2024-07-16 Thread Thomas Lamprecht
On 15/07/2024 at 16:31, Christoph Heiss wrote:
> With that in mind it definitely could come in handy. Or maybe a separate
> object "disks"/"other-disks"/etc. entirely? So as not have to filter out
> the (non-)bootdisks again on the receiving end.

Could be fine too, albeit I'd slightly prefer a single disk property, as it
feels more natural to me to filter on properties compared to checking, or
having to combine, different array sets. But I don't feel too strongly about
it here, maybe someone else has a (stronger) opinion.
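
Purely to illustrate the two shapes being discussed (the field names are made
up, nothing here is settled):

    # variant a: one list, the receiver filters on a property
    my $single = {
        disks => [
            { name => 'sda', 'used-for-bootdisk' => 1 },
            { name => 'sdb', 'used-for-bootdisk' => 0 },
        ],
    };

    # variant b: separate arrays the receiver has to combine again
    my $split = {
        'boot-disks'  => [ { name => 'sda' } ],
        'other-disks' => [ { name => 'sdb' } ],
    };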

> While at it, the same would IMO make sense for NICs too, since one might
> want to set up additional network devices too.

good point.





[pve-devel] applied: [PATCH access-control] api: ACL update: fix handling of Permissions.Modify

2024-07-16 Thread Thomas Lamprecht
On 11/07/2024 at 13:44, Fabian Grünbichler wrote:
> with 8.x, the scope of non-"Permissions.Modify"-based ACL update privileges
> were reduced (so that users with for example, VM.Allocate on a VM could only
> delegate their own privileges, but not arbitrary other ones). that additional
> logic had a wrong guard and was accidentally triggered for calls where the 
> user
> had the "Permissions.Modify" privilege on the modified ACL path, but without
> propagation set.
> 
> a user with "Permissions.Modify" on a path should be able to set arbitrary
> ACLs for that path, even without propagation.
> 
> reported on the forum:
> 
> https://forum.proxmox.com/threads/privilege-permissions-modify-on-pool-will-not-propagade-to-contained-vms-anymore.151032/

Could be:

Reported on the forum: https://forum.proxmox.com/threads/151032/

> 
> Fixes: 46bfd59dfca655b263d1f905be37d985416717ac ("acls: restrict 
> less-privileged ACL modifications")
> 

please no extra newlines between trailers like Fixes or your S-o-b.

> Signed-off-by: Fabian Grünbichler 
> ---
>  src/PVE/API2/ACL.pm | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
>

applied, with above commit message nits addressed and reflowed to <= 70 cc,
thanks!

