[pve-devel] [PATCH docs] vzdump: mention file-restore log file

2023-09-05 Thread Fabian Grünbichler
it was only documented in the proxmox-backup source code so far.

Signed-off-by: Fabian Grünbichler 
---
grepped once too often for that one ;)

 vzdump.adoc | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/vzdump.adoc b/vzdump.adoc
index a7c3d1e..85b7cc2 100644
--- a/vzdump.adoc
+++ b/vzdump.adoc
@@ -469,6 +469,11 @@ downloaded from such an archive is inherently safe, but it avoids exposing the
 hypervisor system to danger. The VM will stop itself after a timeout. This
 entire process happens transparently from a user's point of view.
 
+NOTE: For troubleshooting purposes, each temporary VM instance generates a log
+file in `/var/log/proxmox-backup/file-restore/`. The log file might contain
+additional information if an attempt to restore individual files or to access
+file systems contained in a backup archive fails.
+
 [[vzdump_configuration]]
 Configuration
 -
-- 
2.39.2
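
For reference, a quick way to pull up the newest of these logs on a node (a
sketch; the directory comes from the note above, the naming of the files
inside it may vary):

    # show the tail of the most recently written file-restore log
    logdir=/var/log/proxmox-backup/file-restore
    newest="$(ls -1t "$logdir" | head -n 1)"
    [ -n "$newest" ] && tail -n 50 "$logdir/$newest"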





[pve-devel] [PATCH qemu-server] drive: Fix typo in description of efitype

2023-09-05 Thread Filip Schauer
Signed-off-by: Filip Schauer 
---
 PVE/QemuServer/Drive.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/PVE/QemuServer/Drive.pm b/PVE/QemuServer/Drive.pm
index b0e0a96..e24ba12 100644
--- a/PVE/QemuServer/Drive.pm
+++ b/PVE/QemuServer/Drive.pm
@@ -319,7 +319,7 @@ my %efitype_fmt = (
enum => [qw(2m 4m)],
description => "Size and type of the OVMF EFI vars. '4m' is newer and recommended,"
. " and required for Secure Boot. For backwards compatibility, '2m' is used"
-   . " if not otherwise specified. Ignored for VMs with arch=aarc64 (ARM).",
+   . " if not otherwise specified. Ignored for VMs with arch=aarch64 (ARM).",
optional => 1,
default => '2m',
 },
-- 
2.39.2
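
For context, the `efitype` option documented here is set on the EFI vars
disk, e.g. like this (a sketch; storage name and VMID are placeholders):

    # create/replace the EFI vars disk of VM 100 using the newer '4m' type
    qm set 100 --efidisk0 local-lvm:1,efitype=4m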






[pve-devel] [PATCH manager] fix #4808: ceph: use setting names with underscores

2023-09-05 Thread Maximiliano Sandoval
As suggested in [1], it is recommended to use `_` in all cases when
dealing with config files.

[1] https://docs.ceph.com/en/reef/rados/configuration/ceph-conf/#option-names

Signed-off-by: Maximiliano Sandoval 
---
 PVE/API2/Ceph/MDS.pm | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/PVE/API2/Ceph/MDS.pm b/PVE/API2/Ceph/MDS.pm
index 1cb0b74f..6fc0ae45 100644
--- a/PVE/API2/Ceph/MDS.pm
+++ b/PVE/API2/Ceph/MDS.pm
@@ -153,10 +153,10 @@ __PACKAGE__->register_method ({
}
 
$cfg->{$section}->{host} = $nodename;
-   $cfg->{$section}->{"mds standby for name"} = 'pve';
+   $cfg->{$section}->{'mds_standby_for_name'} = 'pve';
 
if ($param->{hotstandby}) {
-   $cfg->{$section}->{"mds standby replay"} = 'true';
+   $cfg->{$section}->{'mds_standby_replay'} = 'true';
}
 
cfs_write_file('ceph.conf', $cfg);
-- 
2.39.2
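
A quick sanity check after the change (a sketch; Ceph itself treats ' ', '_'
and '-' in option names as equivalent per [1], so only the spelling in the
config file becomes consistent):

    # both the old and the new spelling match in a PVE-managed ceph.conf
    grep -E 'mds[ _]standby[ _](for_name|replay)' /etc/pve/ceph.conf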






Re: [pve-devel] [RFC] PVE-Backup: create jobs in a drained section

2023-09-05 Thread Fiona Ebner
On 08.02.23 at 15:07, Fiona Ebner wrote:
> 
> Inserting the copy-before-write node is already protected with a
> drained section, which is why there should be no actual issue right
> now. While that drained section does not extend until the bcs bitmap
> initialization, it should also not be an issue currently, because the
> job is not created from a coroutine (and even if, there would need to
> be a yield point in between).
> 

My explanation for why it's currently not required is wrong: there can
be an IO thread which interacts with the bitmap, and that is what
happens (with my reproducer) in the crash reported here [0]. I couldn't
reproduce the crash anymore with this RFC applied.

A sketch of what happens is:

Notes:
* Each time a block-copy request is created, it resets the dirty bitmap
at the corresponding range.
* Thread 1 is still doing the backup setup, cbw_open() and
backup_init_bcs_bitmap() happen in backup_job_create()
* The check if a request can be created for a given range relies on the
dirty bitmap.

  Thread 1 bdrv_dirty_bitmap_merge_internal() as part of cbw_open()
A Thread 3 bdrv_reset_dirty_bitmap(offset=x, bytes=4MiB)
  Thread 1 bdrv_clear_dirty_bitmap() as part of backup_init_bcs_bitmap()
  Thread 1 bdrv_dirty_bitmap_merge_internal() as part of backup_init_bcs_bitmap()
B
  Thread 3 bdrv_reset_dirty_bitmap(offset=0, bytes=4MiB)
  Thread 3 bdrv_reset_dirty_bitmap(offset=4MiB, bytes=4MiB)
  
C Thread 3 bdrv_reset_dirty_bitmap(offset=x, bytes=4MiB)

Note that at time B, there can be a mismatch between the bitmap and the
request list, if merging didn't by chance set the bits at the location
for request A again.

Then, if C happens before A has finished, an assert will trigger,
because there already is a request in the same range:

> block_copy_task_create: Assertion `!reqlist_find_conflict(&s->reqs, offset, bytes)' failed

C doesn't need to cover exactly the same range as A of course, just
overlap, but it did for me.

That said, I also tried reproducing the issue with QEMU 7.2, but didn't
manage to yet. I'll take another look and then re-send this for QEMU 8.0
with a fixed commit message.

[0]: https://forum.proxmox.com/threads/133149/





[pve-devel] [PATCH installer 4/6] makefile: fix handling of multiple usr_bin files

2023-09-05 Thread Aaron Lauterer
Signed-off-by: Aaron Lauterer 
---
this fix should apply in any case and is unrelated to the series, but I
discovered it while adding the proxmox-auto-installer binary.

 Makefile | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Makefile b/Makefile
index dc180b2..514845a 100644
--- a/Makefile
+++ b/Makefile
@@ -93,7 +93,7 @@ install: $(INSTALLER_SOURCES) $(CARGO_COMPILEDIR)/proxmox-tui-installer
install -D -m 755 unconfigured.sh $(DESTDIR)/sbin/unconfigured.sh
install -D -m 755 proxinstall $(DESTDIR)/usr/bin/proxinstall
install -D -m 755 proxmox-low-level-installer $(DESTDIR)/$(BINDIR)/proxmox-low-level-installer
-   $(foreach i,$(USR_BIN), install -m755 $(CARGO_COMPILEDIR)/$(i) $(DESTDIR)$(BINDIR)/)
+   $(foreach i,$(USR_BIN), install -m755 $(CARGO_COMPILEDIR)/$(i) $(DESTDIR)$(BINDIR)/ ;)
install -D -m 755 checktime $(DESTDIR)/usr/bin/checktime
install -D -m 644 xinitrc $(DESTDIR)/.xinitrc
install -D -m 755 spice-vdagent.sh $(DESTDIR)/.spice-vdagent.sh
-- 
2.39.2
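
For background: $(foreach) merely concatenates its expansions, so without the
trailing ';' two or more USR_BIN entries would expand into a single, malformed
install command line. A minimal sketch (hypothetical throwaway makefile):

    # the ';' makes each expansion a separate shell command
    printf 'all:\n\t$(foreach i,a b,echo installing $(i) ;)\n' > /tmp/foreach.mk
    make -f /tmp/foreach.mk    # prints "installing a", then "installing b"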






[pve-devel] [RFC installer 3/6] add answer file fetch script

2023-09-05 Thread Aaron Lauterer
With the auto installer present, the crucial question is how we get the
answer file. This script implements one way: a local disk/partition,
labelled 'proxmoxinst' in lower or upper case, with the 'answer.toml'
file in its root directory.

We either want to use it directly and call it from 'unconfigured.sh', or
see it as a first approach showcasing how it could be done.

Signed-off-by: Aaron Lauterer 
---
 start_autoinstall.sh | 50 ++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 50 insertions(+)
 create mode 100755 start_autoinstall.sh

diff --git a/start_autoinstall.sh b/start_autoinstall.sh
new file mode 100755
index 000..081b865
--- /dev/null
+++ b/start_autoinstall.sh
@@ -0,0 +1,50 @@
+#!/bin/bash
+
+answer_file=answer.toml;
+answer_mp=/tmp/answer;
+answer_location="";
+mount_source="";
+label="proxmoxinst";
+
+mount_answer() {
+echo "mounting answer filesystem"
+mkdir -p $answer_mp
+mount "$mount_source" "$answer_mp"
+}
+
+find_fs() {
+search_path="/dev/disk/by-label/";
+if [[ -e ${search_path}/${label,,} ]]; then
+   mount_source="${search_path}/${label,,}";
+elif [[ -e ${search_path}/${label^^} ]]; then
+   mount_source="${search_path}/${label^^}";
+else
+   echo "No partition for answer file found!";
+   return 1;
+fi
+mount_answer;
+}
+
+find_answer_file() {
+if [ -e $answer_mp/$answer_file ]; then
+   cp $answer_mp/$answer_file /run/proxmox-installer/answer.toml
+   answer_location=/run/proxmox-installer/answer.toml
+   umount $answer_mp;
+else
+   return 1;
+fi
+}
+
+start_installation() {
+echo "calling 'proxmox-auto-installer'";
+proxmox-auto-installer < $answer_location;
+}
+
+
+if find_fs && find_answer_file; then
+echo "found answer file on local device";
+start_installation;
+else
+echo "Could not retrieve answer file. Aborting installation!"
+exit 1;
+fi
-- 
2.39.2
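
For testing, a matching answer device can be prepared along these lines (a
sketch; /dev/sdX1 is a placeholder and the mkfs call is destructive):

    # FAT stores labels upper-case, which the script's ${label^^} branch covers
    mkfs.vfat -n PROXMOXINST /dev/sdX1
    mount /dev/sdX1 /mnt
    cp answer.toml /mnt/answer.toml
    umount /mnt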






[pve-devel] [RFC installer 1/6] low level: sys: fetch udev properties

2023-09-05 Thread Aaron Lauterer
Fetch UDEV device properties (prepended with E:) for NICs and disks and
store them in their own JSON file so that we can use them for filtering.

Signed-off-by: Aaron Lauterer 
---
 Proxmox/Makefile            |  1 +
 Proxmox/Sys/Udev.pm         | 54 +
 proxmox-low-level-installer | 14 ++
 3 files changed, 69 insertions(+)
 create mode 100644 Proxmox/Sys/Udev.pm

diff --git a/Proxmox/Makefile b/Proxmox/Makefile
index d49da80..9561d9b 100644
--- a/Proxmox/Makefile
+++ b/Proxmox/Makefile
@@ -16,6 +16,7 @@ PERL_MODULES=\
 Sys/Command.pm \
 Sys/File.pm \
 Sys/Net.pm \
+Sys/Udev.pm \
 UI.pm \
 UI/Base.pm \
 UI/Gtk3.pm \
diff --git a/Proxmox/Sys/Udev.pm b/Proxmox/Sys/Udev.pm
new file mode 100644
index 000..69d674f
--- /dev/null
+++ b/Proxmox/Sys/Udev.pm
@@ -0,0 +1,54 @@
+package Proxmox::Sys::Udev;
+
+use strict;
+use warnings;
+
+use base qw(Exporter);
+our @EXPORT_OK = qw(disk_details);
+
+my $udev_regex = '^E: ([A-Z_]*)=(.*)$';
+
+my sub fetch_udevadm_info {
+my ($path) = @_;
+
+my $info = `udevadm info --path $path --query all`;
+if (!$info) {
+   warn "no details found for device '${path}'\n";
+   next;
+}
+my $details = {};
+for my $line (split('\n', $info)) {
+   if ($line =~ m/$udev_regex/) {
+   $details->{$1} = $2;
+   }
+}
+return $details;
+}
+
+# return hash of E: properties returned by udevadm
+sub disk_details {
+my $result = {};
+for my $data (@{Proxmox::Sys::Block::get_cached_disks()}) {
+   my $index = @$data[0];
+   my $bd = @$data[5];
+   $result->{$index} = fetch_udevadm_info($bd);
+}
+return $result;
+}
+
+
+sub nic_details {
+my $nic_path = "/sys/class/net";
+my $result = {};
+
+my $nics = Proxmox::Sys::Net::get_ip_config()->{ifaces};
+
+for my $index (keys %$nics) {
+   my $name = $nics->{$index}->{name};
+   my $nic = "${nic_path}/${name}";
+   $result->{$name} = fetch_udevadm_info($nic);
+}
+return $result;
+}
+
+1;
diff --git a/proxmox-low-level-installer b/proxmox-low-level-installer
index 814961e..99d5b9a 100755
--- a/proxmox-low-level-installer
+++ b/proxmox-low-level-installer
@@ -23,14 +23,17 @@ use Proxmox::Install::ISOEnv;
 use Proxmox::Install::RunEnv;
 
 use Proxmox::Sys::File qw(file_write_all);
+use Proxmox::Sys::Udev;
 
 use Proxmox::Log;
 use Proxmox::Install;
 use Proxmox::Install::Config;
 use Proxmox::UI;
 
+
 my $commands = {
 'dump-env' => 'Dump the current ISO and Hardware environment to base the installer UI on.',
+'dump-udev' => 'Dump disk and network device info. Used for the auto installation.',
 'start-session' => 'Start an installation session, with command and result transmitted via stdin/out',
 'start-session-test' => 'Start an installation TEST session, with command and result transmitted via stdin/out',
 'help' => 'Output this usage help.',
@@ -85,6 +88,17 @@ if ($cmd eq 'dump-env') {
 my $run_env = Proxmox::Install::RunEnv::query_installation_environment();
 my $run_env_serialized = to_json($run_env, {canonical => 1, utf8 => 1}) ."\n";
 file_write_all($run_env_file, $run_env_serialized);
+} elsif ($cmd eq 'dump-udev') {
+my $out_dir = $env->{locations}->{run};
+make_path($out_dir);
+die "failed to create output directory '$out_dir'\n" if !-d $out_dir;
+
+my $output = {};
+$output->{disks} = Proxmox::Sys::Udev::disk_details();
+$output->{nics} = Proxmox::Sys::Udev::nic_details();
+
+my $output_serialized = to_json($output, {canonical => 1, utf8 => 1}) ."\n";
+file_write_all("$out_dir/run-env-udev.json", $output_serialized);
 } elsif ($cmd eq 'start-session') {
 Proxmox::UI::init_stdio({}, $env);
 
-- 
2.39.2
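
With the patch applied, the new subcommand can be exercised on its own (a
sketch; I am assuming $env->{locations}->{run} resolves to
/run/proxmox-installer in the installer environment):

    proxmox-low-level-installer dump-udev
    # inspect the gathered E: properties, e.g. the serials used by disk filters
    grep -o '"ID_SERIAL":"[^"]*"' /run/proxmox-installer/run-env-udev.json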






[pve-devel] [RFC installer 5/6] makefile: add auto installer

2023-09-05 Thread Aaron Lauterer
Signed-off-by: Aaron Lauterer 
---
 Makefile | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/Makefile b/Makefile
index 514845a..15cdc14 100644
--- a/Makefile
+++ b/Makefile
@@ -18,7 +18,9 @@ INSTALLER_SOURCES=$(shell git ls-files) country.dat
 
 PREFIX = /usr
 BINDIR = $(PREFIX)/bin
-USR_BIN := proxmox-tui-installer
+USR_BIN :=\
+ proxmox-tui-installer \
+ proxmox-auto-installer
 
 COMPILED_BINS := \
$(addprefix $(CARGO_COMPILEDIR)/,$(USR_BIN))
@@ -43,6 +45,7 @@ $(BUILDDIR):
  proxinstall \
  proxmox-low-level-installer \
  proxmox-tui-installer/ \
+ proxmox-auto-installer/ \
  spice-vdagent.sh \
  unconfigured.sh \
  xinitrc \
@@ -103,6 +106,7 @@ $(COMPILED_BINS): cargo-build
 .PHONY: cargo-build
 cargo-build:
$(CARGO) build --package proxmox-tui-installer --bin proxmox-tui-installer $(CARGO_BUILD_ARGS)
+   $(CARGO) build --package proxmox-auto-installer --bin proxmox-auto-installer $(CARGO_BUILD_ARGS)
 
 %-banner.png: %-banner.svg
rsvg-convert -o $@ $<
-- 
2.39.2






[pve-devel] [RFC docs 6/6] installation: add unattended documentation

2023-09-05 Thread Aaron Lauterer
Signed-off-by: Aaron Lauterer 
---
 pve-installation.adoc | 245 ++
 1 file changed, 245 insertions(+)

diff --git a/pve-installation.adoc b/pve-installation.adoc
index aa4e4c9..9011d09 100644
--- a/pve-installation.adoc
+++ b/pve-installation.adoc
@@ -298,6 +298,251 @@ following command:
 # zpool add <pool> log <device>
 
 
+[[installation_auto]]
+Unattended Installation
+---
+
+// TODO: rework once it is clearer how the process actually works
+
+The unattended installation can help to automate the installation process from
+the very beginning. It needs the dedicated ISO image for unattended
+installations.
+
+The options that the regular installer would ask for need to be provided in an
+answer file. The answer file can be placed on a USB flash drive. The volume
+needs to be labeled 'PROXMOXINST' and needs to contain the answer file named
+'answer.toml'.
+
+The answer file allows for fuzzy matching to select the network card and disks
+used for the installation.
+
+[[installation_auto_answer_file]]
+Answer file
+~~~
+
+The answer file is expected in `TOML` format. The following example shows an
+answer file that uses the network settings provided via DHCP. It will use a
+ZFS RAID10 with an 'ashift' of '12' and use all Micron disks it can find.
+
+----
+[global]
+keyboard = "de"
+country = "at"
+fqdn = "pve-1.example.com"
+mailto = "m...@example.com"
+timezone = "Europe/Vienna"
+password = "123456"
+
+[network]
+use_dhcp = true
+
+[disks]
+filesystem = "zfs-raid10"
+zfs.ashift = 12
+filter.ID_SERIAL = "Micron_*"
+----
+
+Global Section
+^^
+
+This section contains the following keys:
+
+`keyboard`:: The keyboard layout. The following options are possible:
+*   `de`
+*   `de-ch`
+*   `dk`
+*   `en-gb`
+*   `en-us`
+*   `es`
+*   `fi`
+*   `fr`
+*   `fr-be`
+*   `fr-ca`
+*   `fr-ch`
+*   `hu`
+*   `is`
+*   `it`
+*   `jp`
+*   `lt`
+*   `mk`
+*   `nl`
+*   `no`
+*   `pl`
+*   `pt`
+*   `pt-br`
+*   `se`
+*   `si`
+*   `tr`
+
+`country`:: The country code in the two letter variant. For example `at`, `us`,
+or `fr`.
+
+`fqdn`:: The fully qualified domain name of the host. The domain part will be
+used as the search domain.
+
+`mailto`:: The default email address. Used for notifications.
+
+`timezone`:: The timezone in `tzdata` format. For example `Europe/Vienna` or
+`America/New_York`.
+
+`password`:: The password for the `root` user.
+
+`pre_command`:: A list of commands to run prior to the installation.
+
+`post_command`:: A list of commands run after the installation.
+
+TODO: explain commands and list of available useful CLI tools in the iso
+
+Network Section
+^^^
+
+`use_dhcp`:: Set to `true` if the IP configuration received by DHCP should be
+used.
+
+`cidr`:: IP address in CIDR notation. For example `192.168.1.10/24`.
+
+`dns`:: IP address of the DNS server.
+
+`gateway`:: IP address of the default gateway.
+
+`filter`:: Filter against `UDEV` properties to select the network card. See
+xref:installation_auto_filter[Filters].
+
+
+Disks Section
+^
+
+`filesystem`:: The file system used for the installation. The options are:
+* `ext4`
+* `xfs`
+* `zfs-raid0`
+* `zfs-raid1`
+* `zfs-raid10`
+* `zfs-raidz1`
+* `zfs-raidz2`
+* `zfs-raidz3`
+* `btrfs-raid0`
+* `btrfs-raid1`
+* `btrfs-raid10`
+
+`disk_selection`:: List of disks to use. Useful if you are sure about the disk
+names. For example:
+
+----
+disk_selection = ["sda", "sdb"]
+----
+
+`filter_match`:: Can be `any` or `all`. Decides if a match of any filter is
+enough or if all filters need to match for a disk to be selected. Default is `any`.
+
+`filter`:: Filter against `UDEV` properties to select disks to install to. See
+xref:installation_auto_filter[Filters]. Filters won't be used if
+`disk_selection` is configured.
+
+`zfs`:: ZFS specific properties. See xref:advanced_zfs_options[Advanced ZFS Configuration Options]
+for more details. The properties are:
+* `ashift`
+* `checksum`
+* `compress`
+* `copies`
+* `hdsize`
+
+`lvm`:: Advanced properties that can be used when `ext4` or `xfs` is used as `filesystem`.
+See xref:advanced_lvm_options[Advanced LVM Configuration Options] for more details. The properties are:
+* `hdsize`
+* `swapsize`
+* `maxroot`
+* `maxvz`
+* `minfree`
+
+`btrfs`:: BTRFS specific settings. Currently there is only `hdsize`.
+
+[[installation_auto_filter]]
+Filters
+~~~
+
+Filters allow you to match against device properties exposed by `udevadm`. You
+can see them if you run the following commands. The first is for a disk, the
+second for a network card.
+
+----
+udevadm info /sys/block/{disk name}
+udevadm info /sys/class/net/{NIC name}
+----
+
+For example:
+
+----
+# udevadm info -p /sys/class/net/enp129s0f0np0 | grep "E:"
+E: DEVPATH=/devices/pci0000:80/0000:80:01.1/0000:81:00.0/net/enp129s0f0np0
+E: SUBSYSTEM=net
+E: INTERFACE=enp129s0f0np0
+E: IFINDEX=6

[pve-devel] [RFC installer 0/6] add automated installation

2023-09-05 Thread Aaron Lauterer
This is the first iteration of making it possible to automatically run
the installer.

The main idea is to provide an answer file (TOML as of now) that
provides the properties usually queried by the TUI or GUI installer.
Additionally we want to be able to do some more fuzzy matching for the
NIC and disks used by the installer.

Therefore we have some basic globbing/wildcard support at the start and
end of the search string. For now, the UDEV device properties are used
for this matching. The format for the filters is "UDEV KEY -> search
string".

The answer file and auto installer have the additional option to run
commands pre- and post-installation. More details can be found in the
patch itself.

The big question is how we actually get the answer file without creating
a custom installer image per answer file.
The most basic variant is to scan local storage for a partition/volume
with an expected label, mount it and look for the expected file.
For this I added a small shell script that does exactly this and then
starts the auto installer.

Another idea is to get an URL and query it
* could come from a default subdomain that is queried within the DHCP
  provided search domain
* We could get it from a DHCP option. How we extract that is something I
  don't know at this time.
* From the kernel cmdline, if the installer is booted via PXE with
  customized kernel parameters
* ...

When running the HTTP request, we could add identifying properties as
parameters. Properties like MAC addresses and serial numbers, for
example. This would make it possible to have a script querying an
internal database and create the answer file on demand.
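
A rough sketch of that idea from the client side (URL, host name and
parameter names are made up for illustration):

    # identify the machine to a hypothetical answer server
    mac="$(cat /sys/class/net/eth0/address)"
    serial="$(dmidecode -s system-serial-number)"
    curl -fsSL "http://answers.example.com/answer.toml?mac=${mac}&serial=${serial}" \
        -o /run/proxmox-installer/answer.toml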

This version definitely has some rough edges and probably a lot of
things that could be done in a nicer, more idiomatic way. There are
quite a few nested for loops that could probably be improved as well.

A lot of code has been reused from the TUI installer. The plan is to
factor out common code into a new library crate.

For now, the auto installer just prints everything to stdout. We could
implement a simple GUI that shows a progress bar.


pve-install: Aaron Lauterer (5):
  low level: sys: fetch udev properties
  add proxmox-auto-installer
  add answer file fetch script
  makefile: fix handling of multiple usr_bin files
  makefile: add auto installer

 Cargo.toml                                    |   1 +
 Makefile                                      |   8 +-
 Proxmox/Makefile                              |   1 +
 Proxmox/Sys/Udev.pm                           |  54 +
 proxmox-auto-installer/Cargo.toml             |  13 +
 proxmox-auto-installer/answer.toml            |  36 ++
 .../resources/test/iso-info.json              |   1 +
 .../resources/test/locales.json               |   1 +
 .../test/parse_answer/disk_match.json         |  28 ++
 .../test/parse_answer/disk_match.toml         |  14 +
 .../test/parse_answer/disk_match_all.json     |  25 +
 .../test/parse_answer/disk_match_all.toml     |  16 +
 .../test/parse_answer/disk_match_any.json     |  32 ++
 .../test/parse_answer/disk_match_any.toml     |  16 +
 .../resources/test/parse_answer/minimal.json  |  17 +
 .../resources/test/parse_answer/minimal.toml  |  14 +
 .../test/parse_answer/nic_matching.json       |  17 +
 .../test/parse_answer/nic_matching.toml       |  19 +
 .../resources/test/parse_answer/readme        |   4 +
 .../test/parse_answer/specific_nic.json       |  17 +
 .../test/parse_answer/specific_nic.toml       |  19 +
 .../resources/test/parse_answer/zfs.json      |  26 +
 .../resources/test/parse_answer/zfs.toml      |  19 +
 .../resources/test/run-env-info.json          |   1 +
 .../resources/test/run-env-udev.json          |   1 +
 proxmox-auto-installer/src/answer.rs          | 144 ++
 proxmox-auto-installer/src/main.rs            | 412
 proxmox-auto-installer/src/tui/mod.rs         |   3 +
 proxmox-auto-installer/src/tui/options.rs     | 302
 proxmox-auto-installer/src/tui/setup.rs       | 447 ++
 proxmox-auto-installer/src/tui/utils.rs       | 268 +++
 proxmox-auto-installer/src/udevinfo.rs        |   9 +
 proxmox-auto-installer/src/utils.rs           | 325 +
 proxmox-low-level-installer                   |  14 +
 start_autoinstall.sh                          |  50 ++
 35 files changed, 2372 insertions(+), 2 deletions(-)
 create mode 100644 Proxmox/Sys/Udev.pm
 create mode 100644 proxmox-auto-installer/Cargo.toml
 create mode 100644 proxmox-auto-installer/answer.toml
 create mode 100644 proxmox-auto-installer/resources/test/iso-info.json
 create mode 100644 proxmox-auto-installer/resources/test/locales.json
 create mode 100644 proxmox-auto-installer/resources/test/parse_answer/disk_match.json
 create mode 100644 proxmox-auto-installer/resources/test/parse_answer/disk_match.toml
 create mode 100644 proxmox-auto-installer/resources/test/parse_answer/disk_match_all.json
 create mode 100644 proxmox-auto-installer/resources/test/

[pve-devel] [PATCH manager] ui: improve vm/container migration user experience

2023-09-05 Thread Philipp Hufnagl
After the implementation of fix #474, it has been suggested that,
instead of requiring the user to click a checkbox allowing migration,
it should be allowed automatically and a warning should be displayed.

Further, it has been discussed to rename the feature from "transfer" to
"migrate". However, an API change would break already implemented usage,
so it has been decided to call it (for now) "transfer" everywhere to
avoid confusion.
Signed-off-by: Philipp Hufnagl 
---
 www/manager6/grid/PoolMembers.js | 29 ++---
 1 file changed, 22 insertions(+), 7 deletions(-)

diff --git a/www/manager6/grid/PoolMembers.js b/www/manager6/grid/PoolMembers.js
index 224daca3..d6fa0278 100644
--- a/www/manager6/grid/PoolMembers.js
+++ b/www/manager6/grid/PoolMembers.js
@@ -35,6 +35,20 @@ Ext.define('PVE.pool.AddVM', {
],
});
 
+   let transferWarning = Ext.create('Ext.form.field.Display', {
+   userCls: 'pmx-hint',
value: gettext('One or more VMs or containers will be removed from their old pool'),
+   hidden: true,
+   });
+
+   let transfer = Ext.create('Ext.form.field.Checkbox', {
+   name: 'transfer',
+   boxLabel: gettext('Allow Transfer'),
+   inputValue: 1,
+   hidden: true,
+   value: 0,
+   });
+
var vmGrid = Ext.create('widget.grid', {
store: vmStore,
border: true,
@@ -46,9 +60,15 @@ Ext.define('PVE.pool.AddVM', {
listeners: {
selectionchange: function(model, selected, opts) {
var selectedVms = [];
+   var isTransfer = false;
selected.forEach(function(vm) {
selectedVms.push(vm.data.vmid);
+   if (vm.data.pool !== '') {
+   isTransfer = true;
+   }
});
+   transfer.setValue(isTransfer);
+   transferWarning.setHidden(!isTransfer);
vmsField.setValue(selectedVms);
},
},
@@ -90,15 +110,10 @@ Ext.define('PVE.pool.AddVM', {
],
});
 
-   let transfer = Ext.create('Ext.form.field.Checkbox', {
-   name: 'transfer',
-   boxLabel: gettext('Allow Transfer'),
-   inputValue: 1,
-   value: 0,
-   });
+
Ext.apply(me, {
subject: gettext('Virtual Machine'),
-   items: [vmsField, vmGrid, transfer],
+   items: [vmsField, vmGrid, transferWarning, transfer],
});
 
me.callParent();
-- 
2.39.2






Re: [pve-devel] [PATCH v2 pve-manager 2/2] ui: qemu : memoryedit: add new max && virtio fields

2023-09-05 Thread DERUMIER, Alexandre
> 
> The advantage with 'max' is that it can be used for both, hotplug with
> dimms and virtio-mem. Otherwise, we'd need two different sub-options
> depending on hotplug method.
> 
yes, that's what I thought too, it would be great to have the same API call
with the same options, with or without virtio-mem.

(virtio-mem will be the default for new Linux distros, but for Windows
or older Linux distros, we still need to use the old DIMM method)



My first idea for the GUI, for the max value, was a combobox displaying
a hint with the memory topology, something like:

max = 64GB : 64 x 1GB dimm
max = 128GB: 64 x 2GB dimm
...

(or maybe it could be a hint outside a simple integer field)
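
For illustration, the arithmetic behind that hint (assuming the fixed
64-slot layout from the examples above):

    # the per-DIMM size follows directly from 'max'
    max_gb=128
    echo "max = ${max_gb}GB: 64 x $((max_gb / 64))GB dimm"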







Re: [pve-devel] [PATCH v2 pve-manager 2/2] ui: qemu : memoryedit: add new max && virtio fields

2023-09-05 Thread Thomas Lamprecht
On 05/09/2023 at 17:10, DERUMIER, Alexandre wrote:
>>
>> The advantage with 'max' is that it can be used for both, hotplug with
>> dimms and virtio-mem. Otherwise, we'd need two different sub-options
>> depending on hotplug method.
>>
> yes, that's what I thought too, it would be great to have the same API call
> with the same options, with or without virtio-mem.
> 
> (virtio-mem will be the default for new Linux distros, but for Windows
> or older Linux distros, we still need to use the old DIMM method)
> 
> 
> 
> My first idea for the GUI, for the max value, was a combobox displaying
> a hint with the memory topology, something like:
> 
> max = 64GB : 64 x 1GB dimm
> max = 128GB: 64 x 2GB dimm
> ...
> 
> (or maybe it could be a hint outside a simple integer field)
> 

We could still allow setting the DIMM size in the UI with a simple integer
field and a step size of 1 (GB) and then calculate the max from that?






Re: [pve-devel] [PATCH v2 pve-manager 2/2] ui: qemu : memoryedit: add new max && virtio fields

2023-09-05 Thread DERUMIER, Alexandre
On Tuesday, 5 September 2023 at 17:16 +0200, Thomas Lamprecht wrote:
> Am 05/09/2023 um 17:10 schrieb DERUMIER, Alexandre:
> > > 
> > > The advantage with 'max' is that it can be used for both, hotplug with
> > > dimms and virtio-mem. Otherwise, we'd need two different sub-options
> > > depending on hotplug method.
> > > 
> > yes, that's what I thought too, it would be great to have the same API
> > call with the same options, with or without virtio-mem.
> > 
> > (virtio-mem will be the default for new Linux distros, but for Windows
> > or older Linux distros, we still need to use the old DIMM method)
> > 
> > 
> > 
> > My first idea for the GUI, for the max value, was a combobox
> > displaying a hint with the memory topology, something like:
> > 
> > max = 64GB : 64 x 1GB dimm
> > max = 128GB: 64 x 2GB dimm
> > ...
> > 
> > (or maybe it could be a hint outside a simple integer field)
> > 
> 
> We could still allow setting the DIMM size in the UI with a simple
> integer field and a step size of 1 (GB) and then calculate the max from
> that?
> 
> 
yes, it could work too. Maybe a DIMM size field that changes the max
value, and at the same time, changing the max value changes the DIMM
size field?

And for virtio-mem, the DIMM size can be replaced by the chunk size.