[pve-devel] [PATCH v10 qemu-server 4/7] image convert: allow block device as source

2022-01-13 Thread Fabian Ebner
Necessary to import from an existing storage using block-device
volumes like ZFS.

Signed-off-by: Dominic Jäger 
[split into its own patch]
Signed-off-by: Fabian Ebner 
---

No changes from v9 (except splitting the patch).

 PVE/QemuServer.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 98eb6b3..c18e1f6 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -7296,7 +7296,7 @@ sub qemu_img_convert {
$src_path = PVE::Storage::path($storecfg, $src_volid, $snapname);
$src_is_iscsi = ($src_path =~ m|^iscsi://|);
$cachemode = 'none' if $src_scfg->{type} eq 'zfspool';
-} elsif (-f $src_volid) {
+} elsif (-f $src_volid || -b $src_volid) {
$src_path = $src_volid;
if ($src_path =~ m/\.($PVE::QemuServer::Drive::QEMU_FORMAT_RE)$/) {
$src_format = $1;
-- 
2.30.2
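For illustration only (not part of the patch): the one-character change above widens the accepted sources from regular files to block devices. Transcribed to Python, with a made-up helper name:

```python
import os
import stat
import tempfile

def is_importable_source(path):
    """Mirror of the Perl `-f $src_volid || -b $src_volid` test: accept
    regular files (e.g. .qcow2/.raw images) as well as block devices
    (e.g. the /dev/zvol/... nodes exposed by ZFS)."""
    try:
        mode = os.stat(path).st_mode
    except OSError:
        return False
    return stat.S_ISREG(mode) or stat.S_ISBLK(mode)

# a regular file qualifies, a directory does not
with tempfile.NamedTemporaryFile(suffix=".raw") as img:
    print(is_importable_source(img.name))           # True
print(is_importable_source(tempfile.gettempdir()))  # False
```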



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [RFC v10 qemu-server 7/7] api: create disks: factor out common part from if/else

2022-01-13 Thread Fabian Ebner
Signed-off-by: Fabian Ebner 
---

New in v10.

 PVE/API2/Qemu.pm | 16 ++--
 1 file changed, 6 insertions(+), 10 deletions(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 8c74ecc..fa6aa9c 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -204,7 +204,7 @@ my $create_disks = sub {
my $src_size = PVE::Storage::file_size_info($source);
die "Could not get file size of $source" if !defined($src_size);
 
-   my (undef, $dst_volid) = PVE::QemuServer::ImportDisk::do_import(
+   (undef, $volid) = PVE::QemuServer::ImportDisk::do_import(
$source,
$vmid,
$storeid,
@@ -215,18 +215,13 @@ my $create_disks = sub {
},
);
 
-   push @$vollist, $dst_volid;
-   $disk->{file} = $dst_volid;
$disk->{size} = $src_size;
-   delete $disk->{format}; # no longer needed
-   $res->{$ds} = PVE::QemuServer::print_drive($disk);
} else {
my $defformat = PVE::Storage::storage_default_format($storecfg, 
$storeid);
my $fmt = $disk->{format} || $defformat;
 
$size = PVE::Tools::convert_size($size, 'gb' => 'kb'); # 
vdisk_alloc uses kb
 
-   my $volid;
if ($ds eq 'efidisk0') {
my $smm = 
PVE::QemuServer::Machine::machine_type_is_q35($conf);
($volid, $size) = PVE::QemuServer::create_efidisk(
@@ -238,12 +233,13 @@ my $create_disks = sub {
} else {
$volid = PVE::Storage::vdisk_alloc($storecfg, $storeid, 
$vmid, $fmt, undef, $size);
}
-   push @$vollist, $volid;
-   $disk->{file} = $volid;
$disk->{size} = PVE::Tools::convert_size($size, 'kb' => 'b');
-   delete $disk->{format}; # no longer needed
-   $res->{$ds} = PVE::QemuServer::print_drive($disk);
}
+
+   push @$vollist, $volid;
+   $disk->{file} = $volid;
+   delete $disk->{format}; # no longer needed
+   $res->{$ds} = PVE::QemuServer::print_drive($disk);
} else {
 
PVE::Storage::check_volume_access($rpcenv, $authuser, $storecfg, 
$vmid, $volid);
-- 
2.30.2
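The shape of the refactor — hoisting the identical tail out of both branches so only the allocation step differs — can be sketched like this (Python, invented names, not the Perl code above):

```python
def create_disk(spec, import_source=None):
    # Only the allocation differs between the two branches ...
    if import_source is not None:
        volid, size = do_import(import_source)    # import branch
    else:
        volid, size = vdisk_alloc(spec)           # regular allocation
    # ... while this tail used to be duplicated in both branches and is now
    # shared, mirroring the push/$disk->{file}/delete format/print_drive part.
    return {"file": volid, "size": size}

# stand-in helpers so the sketch runs
def do_import(src):
    return (f"imported:{src}", 1024)

def vdisk_alloc(spec):
    return (f"allocated:{spec['storage']}", spec["size"])

print(create_disk({"storage": "local", "size": 4096}))
# {'file': 'allocated:local', 'size': 4096}
```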






[pve-devel] [RFC v10 qemu-server 6/7] api: support VM disk import

2022-01-13 Thread Fabian Ebner
From: Dominic Jäger 

Extend qm importdisk functionality to the API.

Co-authored-by: Fabian Grünbichler 
Co-authored-by: Dominic Jäger 
Signed-off-by: Fabian Ebner 
---

Changes from v9:

* Instead of adding an import-sources parameter to the API, use a new
  import-from property for the disk, that's only available with
  import/alloc-enabled API endpoints via its own version of the schema

Avoids the split across regular drive key parameters and
'import_sources', which avoids quite a bit of cross-checking
between the two and parsing/passing around the latter.

The big downsides are:
* Schema handling is a bit messy.
* Need to special case print_drive, because we do intermediate
  parse/print to clean up drive paths. Seems not too easy to avoid
  without complicating things elsewhere.
* Using the import-aware parse_drive in parse_volume, because that
  is used via the foreach_volume iterators handling the parameters
  of the import-enabled endpoints. Could be avoided by using for
  loops instead.

Counter-arguments for using a single schema (citing Fabian G.):
* docs/schema dump/api docs: shouldn't look like you can put that
  everywhere where we use the config schema
* shouldn't have nasty side-effects if someone puts it into the
  config

* Don't iterate over unused disks in create_disks()

Would need to be its own patch and need to make sure everything
also works with respect to usual (i.e. non-import) new disk
creation, etc.

* Re-use do_import function

Rather than duplicating most of it. The down side is the need to
add a new parameter for skipping configuration update. But I
suppose the plan is to have qm import switch to the new API at
some point, and then do_import can be simplified.

* Drop format supported check

Instead rely on resolve_dst_disk_format (via do_import) to pick
the most appropriate format.

 PVE/API2/Qemu.pm | 86 +---
 PVE/QemuConfig.pm|  2 +-
 PVE/QemuServer/Drive.pm  | 32 +++---
 PVE/QemuServer/ImportDisk.pm |  2 +-
 4 files changed, 87 insertions(+), 35 deletions(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index e6a6cdc..8c74ecc 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -21,8 +21,9 @@ use PVE::ReplicationConfig;
 use PVE::GuestHelpers;
 use PVE::QemuConfig;
 use PVE::QemuServer;
-use PVE::QemuServer::Drive;
 use PVE::QemuServer::CPUConfig;
+use PVE::QemuServer::Drive;
+use PVE::QemuServer::ImportDisk;
 use PVE::QemuServer::Monitor qw(mon_cmd);
 use PVE::QemuServer::Machine;
 use PVE::QemuMigrate;
@@ -89,6 +90,10 @@ my $check_storage_access = sub {
} else {
PVE::Storage::check_volume_access($rpcenv, $authuser, $storecfg, 
$vmid, $volid);
}
+
+   if (my $source_image = $drive->{'import-from'}) {
+   PVE::Storage::check_volume_access($rpcenv, $authuser, $storecfg, 
$vmid, $source_image);
+   }
 });
 
$rpcenv->check($authuser, "/storage/$settings->{vmstatestorage}", 
['Datastore.AllocateSpace'])
@@ -162,6 +167,9 @@ my $create_disks = sub {
my $volid = $disk->{file};
my ($storeid, $volname) = PVE::Storage::parse_volume_id($volid, 1);
 
+   die "'import-from' requires special volume ID - use <storage ID>:0,import-from=<source>\n"
+   if $disk->{'import-from'} && $volid !~ $NEW_DISK_RE;
+
if (!$volid || $volid eq 'none' || $volid eq 'cdrom') {
delete $disk->{size};
$res->{$ds} = PVE::QemuServer::print_drive($disk);
@@ -190,28 +198,52 @@ my $create_disks = sub {
} elsif ($volid =~ $NEW_DISK_RE) {
my ($storeid, $size) = ($2 || $default_storage, $3);
die "no storage ID specified (and no default storage)\n" if 
!$storeid;
-   my $defformat = PVE::Storage::storage_default_format($storecfg, 
$storeid);
-   my $fmt = $disk->{format} || $defformat;
-
-   $size = PVE::Tools::convert_size($size, 'gb' => 'kb'); # 
vdisk_alloc uses kb
-
-   my $volid;
-   if ($ds eq 'efidisk0') {
-   my $smm = PVE::QemuServer::Machine::machine_type_is_q35($conf);
-   ($volid, $size) = PVE::QemuServer::create_efidisk(
-   $storecfg, $storeid, $vmid, $fmt, $arch, $disk, $smm);
-   } elsif ($ds eq 'tpmstate0') {
-   # swtpm can only use raw volumes, and uses a fixed size
-   $size = 
PVE::Tools::convert_size(PVE::QemuServer::Drive::TPMSTATE_DISK_SIZE, 'b' => 
'kb');
-   $volid = PVE::Storage::vdisk_alloc($storecfg, $storeid, $vmid, 
"raw", undef, $size);
+
+   if (my $source = delete $disk->{'import-from'}) {
+   $source = PVE::Storage::abs_filesystem_path($storecfg, $source, 
1);
+   my $src_size = PVE::Storage::file_size_info($source);
+   die "Could not get file size of $source\n" if !defined($src_size);
+
+   my (undef, $ds
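(The message is cut off above by the archive.) For illustration: the `$NEW_DISK_RE` gate behind the new die — the "special volume ID" of the form storage:size — can be transcribed to Python to show which values pass (regex copied from the series, helper name invented):

```python
import re

# Python transcription of $NEW_DISK_RE = qr!^(([^/:\s]+):)?(\d+(\.\d+)?)$!
NEW_DISK_RE = re.compile(r'^(([^/:\s]+):)?(\d+(\.\d+)?)$')

def is_new_disk_request(volid):
    """True for '<storage>:<size in GiB>' (or a bare size, which falls back
    to the default storage) - the only form 'import-from' combines with."""
    return NEW_DISK_RE.match(volid) is not None

print(is_new_disk_request("local-lvm:0"))          # True
print(is_new_disk_request("4.5"))                  # True
print(is_new_disk_request("local:vm-100-disk-0"))  # False (existing volume)
```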

[pve-devel] [PATCH v10 qemu-server 2/7] parse ovf: untaint path when calling file_size_info

2022-01-13 Thread Fabian Ebner
Prepare for calling parse_ovf via the API, where the -T switch is used.

Signed-off-by: Fabian Ebner 
---

New in v10.

 PVE/QemuServer/OVF.pm | 9 +
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/PVE/QemuServer/OVF.pm b/PVE/QemuServer/OVF.pm
index 0376cbf..4a0d373 100644
--- a/PVE/QemuServer/OVF.pm
+++ b/PVE/QemuServer/OVF.pm
@@ -221,10 +221,11 @@ ovf:Item[rasd:InstanceID='%s']/rasd:ResourceType", 
$controller_id);
die "error parsing $filepath, file seems not to exist at 
$backing_file_path\n";
}
 
-   my $virtual_size;
-   if ( !($virtual_size = 
PVE::Storage::file_size_info($backing_file_path)) ) {
-   die "error parsing $backing_file_path, size seems to be 
$virtual_size\n";
-   }
+   my $virtual_size = PVE::Storage::file_size_info(
+   ($backing_file_path =~ m|^(/.*)|)[0] # untaint
+   );
+   die "error parsing $backing_file_path, cannot determine file size\n"
+   if !$virtual_size;
 
$pve_disk = {
disk_address => $pve_disk_address,
-- 
2.30.2
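For readers unfamiliar with taint mode: under perl -T, values derived from outside input are tainted and refuse to flow into file operations until they have passed through a regex capture. The `($backing_file_path =~ m|^(/.*)|)[0]` idiom above does exactly that. Python has no taint mode, so this is only an analog of the "validate via capture" step:

```python
import re

def untaint_abs_path(path):
    # Only data pulled out of a capture group is considered clean under -T;
    # restricting the capture to absolute paths doubles as a sanity check.
    m = re.match(r'^(/.*)', path)
    if m is None:
        raise ValueError(f"refusing non-absolute path: {path!r}")
    return m.group(1)

# prints the path unchanged
print(untaint_abs_path("/var/lib/vz/images/100/vm-100-disk-0.qcow2"))
```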






[pve-devel] [PATCH v10 qemu-server 1/7] schema: add pve-volume-id-or-absolute-path

2022-01-13 Thread Fabian Ebner
Signed-off-by: Dominic Jäger 
[split into its own patch + style fixes]
Signed-off-by: Fabian Ebner 
---

Changes from v9:
* Style fixes.

 PVE/QemuServer.pm | 14 ++
 1 file changed, 10 insertions(+), 4 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 0071a06..819eb5f 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -1054,11 +1054,17 @@ 
PVE::JSONSchema::register_format('pve-volume-id-or-qm-path', \&verify_volume_id_
 sub verify_volume_id_or_qm_path {
 my ($volid, $noerr) = @_;
 
-if ($volid eq 'none' || $volid eq 'cdrom' || $volid =~ m|^/|) {
-   return $volid;
-}
+return $volid if $volid eq 'none' || $volid eq 'cdrom';
+
+return verify_volume_id_or_absolute_path($volid, $noerr);
+}
+
+PVE::JSONSchema::register_format('pve-volume-id-or-absolute-path', 
\&verify_volume_id_or_absolute_path);
+sub verify_volume_id_or_absolute_path {
+my ($volid, $noerr) = @_;
+
+return $volid if $volid =~ m|^/|;
 
-# if its neither 'none' nor 'cdrom' nor a path, check if its a volume-id
 $volid = eval { PVE::JSONSchema::check_format('pve-volume-id', $volid, '') 
};
 if ($@) {
return if $noerr;
-- 
2.30.2
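The new format accepts either an absolute path or a volume ID. A compact Python rendering of the two-step check — the volume-ID pattern here is a simplified stand-in, not the exact 'pve-volume-id' rule, and the 'none'/'cdrom' special cases stay in verify_volume_id_or_qm_path, which now delegates here:

```python
import re

# simplified stand-in for the 'pve-volume-id' format: "<storage id>:<volname>"
VOLUME_ID_RE = re.compile(r'^[a-z][a-z0-9_.-]*[a-z0-9]:.+$', re.IGNORECASE)

def verify_volume_id_or_absolute_path(volid):
    if volid.startswith('/'):       # absolute path: accepted as-is
        return volid
    if VOLUME_ID_RE.match(volid):   # otherwise it must be a volume ID
        return volid
    raise ValueError(f"invalid volume ID or path: {volid!r}")

print(verify_volume_id_or_absolute_path("/dev/zvol/tank/vm-100-disk-0"))
print(verify_volume_id_or_absolute_path("local:vm-100-disk-0"))
```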





[pve-devel] [PATCH v10 manager 1/1] api: nodes: add readovf endpoint

2022-01-13 Thread Fabian Ebner
Signed-off-by: Dominic Jäger 
[split into its own patch + add to index]
Signed-off-by: Fabian Ebner 
---

Needs dependency bump for qemu-server.

Changes from v9:
* Add entry to /node/'s index.

 PVE/API2/Nodes.pm | 7 +++
 1 file changed, 7 insertions(+)

diff --git a/PVE/API2/Nodes.pm b/PVE/API2/Nodes.pm
index d57a1937..5f6208d5 100644
--- a/PVE/API2/Nodes.pm
+++ b/PVE/API2/Nodes.pm
@@ -49,6 +49,7 @@ use PVE::API2::LXC;
 use PVE::API2::Network;
 use PVE::API2::NodeConfig;
 use PVE::API2::Qemu::CPU;
+use PVE::API2::Qemu::OVF;
 use PVE::API2::Qemu;
 use PVE::API2::Replication;
 use PVE::API2::Services;
@@ -71,6 +72,11 @@ __PACKAGE__->register_method ({
 path => 'qemu',
 });
 
+__PACKAGE__->register_method ({
+subclass => "PVE::API2::Qemu::OVF",
+path => 'readovf',
+});
+
 __PACKAGE__->register_method ({
 subclass => "PVE::API2::LXC",
 path => 'lxc',
@@ -233,6 +239,7 @@ __PACKAGE__->register_method ({
{ name => 'network' },
{ name => 'qemu' },
{ name => 'query-url-metadata' },
+   { name => 'readovf' },
{ name => 'replication' },
{ name => 'report' },
{ name => 'rrd' }, # fixme: remove?
-- 
2.30.2





[pve-devel] [PATCH v10 qemu-server 3/7] api: add endpoint for parsing .ovf files

2022-01-13 Thread Fabian Ebner
Co-developed-by: Fabian Grünbichler 
Signed-off-by: Dominic Jäger 
[split into its own patch + minor improvements/style fixes]
Signed-off-by: Fabian Ebner 
---

Changes from v9:
* Include $! in the error for the file check.
* Have json_ovf_properties return all of them rather than just
  the disk-related ones, and add description for cpu/name/memory.
* Tiny style fixes foreach -> for, etc.

The file check can also fail because of permission problems, since the
API endpoint is not protected => 1, when used via the web UI and when
the manifest is at a location not accessible to www-data (e.g. in
/root/ on a default installation).

Including $! in the error message helps of course, but I'm sure
there'll be users wondering why they get permission errors while being
logged in as root in the web UI. Not sure what to do about it though.

 PVE/API2/Qemu/Makefile |  2 +-
 PVE/API2/Qemu/OVF.pm   | 55 ++
 PVE/QemuServer.pm  | 32 
 3 files changed, 88 insertions(+), 1 deletion(-)
 create mode 100644 PVE/API2/Qemu/OVF.pm

diff --git a/PVE/API2/Qemu/Makefile b/PVE/API2/Qemu/Makefile
index 5d4abda..bdd4762 100644
--- a/PVE/API2/Qemu/Makefile
+++ b/PVE/API2/Qemu/Makefile
@@ -1,4 +1,4 @@
-SOURCES=Agent.pm CPU.pm Machine.pm
+SOURCES=Agent.pm CPU.pm Machine.pm OVF.pm
 
 .PHONY: install
 install:
diff --git a/PVE/API2/Qemu/OVF.pm b/PVE/API2/Qemu/OVF.pm
new file mode 100644
index 000..b1d79d2
--- /dev/null
+++ b/PVE/API2/Qemu/OVF.pm
@@ -0,0 +1,55 @@
+package PVE::API2::Qemu::OVF;
+
+use strict;
+use warnings;
+
+use PVE::JSONSchema qw(get_standard_option);
+use PVE::QemuServer::OVF;
+use PVE::RESTHandler;
+
+use base qw(PVE::RESTHandler);
+
+__PACKAGE__->register_method ({
+name => 'index',
+path => '',
+method => 'GET',
+proxyto => 'node',
+description => "Read an .ovf manifest.",
+parameters => {
+   additionalProperties => 0,
+   properties => {
+   node => get_standard_option('pve-node'),
+   manifest => {
+   description => ".ovf manifest",
+   type => 'string',
+   },
+   },
+},
+returns => {
+   description => "VM config according to .ovf manifest.",
+   type => 'object',
+   additionalProperties => 1,
+   properties => PVE::QemuServer::json_ovf_properties({}),
+},
+code => sub {
+   my ($param) = @_;
+
+   my $manifest = $param->{manifest};
+   die "check for file $manifest failed - $!\n" if !-f $manifest;
+
+   my $parsed = PVE::QemuServer::OVF::parse_ovf($manifest);
+   my $result;
+   $result->{cores} = $parsed->{qm}->{cores};
+   $result->{name} =  $parsed->{qm}->{name};
+   $result->{memory} = $parsed->{qm}->{memory};
+   my $disks = $parsed->{disks};
+   for my $disk (@$disks) {
+   $result->{$disk->{disk_address}} = $disk->{backing_file};
+   }
+   return $result;
+}});
+
+1;
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 819eb5f..98eb6b3 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -2221,6 +2221,38 @@ sub json_config_properties {
 return $prop;
 }
 
+# Properties that we can read from an OVF file
+sub json_ovf_properties {
+my $prop = shift;
+
+for my $device (PVE::QemuServer::Drive::valid_drive_names()) {
+   $prop->{$device} = {
+   type => 'string',
+   format => 'pve-volume-id-or-absolute-path',
+   description => "Disk image that gets imported to $device",
+   optional => 1,
+   };
+}
+
+$prop->{cores} = {
+   type => 'integer',
+   description => "The number of CPU cores.",
+   optional => 1,
+};
+$prop->{memory} = {
+   type => 'integer',
+   description => "Amount of RAM for the VM in MB.",
+   optional => 1,
+};
+$prop->{name} = {
+   type => 'string',
+   description => "Name of the VM.",
+   optional => 1,
+};
+
+return $prop;
+}
+
 # return copy of $confdesc_cloudinit to generate documentation
 sub cloudinit_config_properties {
 
-- 
2.30.2
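For illustration: the endpoint's code block boils down to flattening the parser output into the flat property hash the API returns. Sketched in Python — the {'qm' => ..., 'disks' => ...} shape is taken from the patch, the values are invented:

```python
def ovf_to_vm_config(parsed):
    # keep only the qm keys the endpoint exposes ...
    result = {k: parsed['qm'].get(k) for k in ('cores', 'name', 'memory')}
    # ... and map each disk to its controller address, matching the one
    # optional property per valid drive name declared in json_ovf_properties
    for disk in parsed.get('disks', []):
        result[disk['disk_address']] = disk['backing_file']
    return result

parsed = {
    'qm': {'cores': 2, 'name': 'imported-vm', 'memory': 2048},
    'disks': [{'disk_address': 'scsi0', 'backing_file': '/tmp/disk0.vmdk'}],
}
print(ovf_to_vm_config(parsed))
# {'cores': 2, 'name': 'imported-vm', 'memory': 2048, 'scsi0': '/tmp/disk0.vmdk'}
```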





[pve-devel] [RFC v10 qemu-server 5/7] schema: drive: use separate schema when disk allocation is possible

2022-01-13 Thread Fabian Ebner
via the special syntax STORAGE_ID:SIZE_IN_GiB.

Not worth it by itself, but this is anticipating a new 'import-from'
parameter which is only used upon import/allocation, but shouldn't be
part of the schema for the config or other API endpoints.

Signed-off-by: Fabian Ebner 
---

New in v10.

 PVE/API2/Qemu.pm| 12 ++--
 PVE/QemuServer.pm   |  9 --
 PVE/QemuServer/Drive.pm | 62 +
 3 files changed, 60 insertions(+), 23 deletions(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 6992f6f..e6a6cdc 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -570,7 +570,9 @@ __PACKAGE__->register_method({
default => 0,
description => "Start VM after it was created 
successfully.",
},
-   }),
+   },
+   1, # with_disk_alloc
+   ),
 },
 returns => {
type => 'string',
@@ -1545,7 +1547,9 @@ __PACKAGE__->register_method({
maximum => 30,
optional => 1,
},
-   }),
+   },
+   1, # with_disk_alloc
+   ),
 },
 returns => {
type => 'string',
@@ -1593,7 +1597,9 @@ __PACKAGE__->register_method({
maxLength => 40,
optional => 1,
},
-   }),
+   },
+   1, # with_disk_alloc
+   ),
 },
 returns => { type => 'null' },
 code => sub {
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index c18e1f6..f880f32 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -2202,7 +2202,7 @@ sub verify_usb_device {
 
 # add JSON properties for create and set function
 sub json_config_properties {
-my $prop = shift;
+my ($prop, $with_disk_alloc) = @_;
 
 my $skip_json_config_opts = {
parent => 1,
@@ -2215,7 +2215,12 @@ sub json_config_properties {
 
 foreach my $opt (keys %$confdesc) {
next if $skip_json_config_opts->{$opt};
-   $prop->{$opt} = $confdesc->{$opt};
+
+   if ($with_disk_alloc && is_valid_drivename($opt)) {
+   $prop->{$opt} = 
$PVE::QemuServer::Drive::drivedesc_hash_with_alloc->{$opt};
+   } else {
+   $prop->{$opt} = $confdesc->{$opt};
+   }
 }
 
 return $prop;
diff --git a/PVE/QemuServer/Drive.pm b/PVE/QemuServer/Drive.pm
index 7b82fb2..f024269 100644
--- a/PVE/QemuServer/Drive.pm
+++ b/PVE/QemuServer/Drive.pm
@@ -3,6 +3,8 @@ package PVE::QemuServer::Drive;
 use strict;
 use warnings;
 
+use Storable qw(dclone);
+
 use PVE::Storage;
 use PVE::JSONSchema qw(get_standard_option);
 
@@ -33,6 +35,8 @@ our $MAX_SATA_DISKS = 6;
 our $MAX_UNUSED_DISKS = 256;
 
 our $drivedesc_hash;
+# Schema when disk allocation is possible.
+our $drivedesc_hash_with_alloc = {};
 
 my %drivedesc_base = (
 volume => { alias => 'file' },
@@ -262,14 +266,10 @@ my $ide_fmt = {
 };
 PVE::JSONSchema::register_format("pve-qm-ide", $ide_fmt);
 
-my $ALLOCATION_SYNTAX_DESC =
-"Use the special syntax STORAGE_ID:SIZE_IN_GiB to allocate a new volume.";
-
 my $idedesc = {
 optional => 1,
 type => 'string', format => $ide_fmt,
-description => "Use volume as IDE hard disk or CD-ROM (n is 0 to " 
.($MAX_IDE_DISKS -1) . "). " .
-   $ALLOCATION_SYNTAX_DESC,
+description => "Use volume as IDE hard disk or CD-ROM (n is 0 to " 
.($MAX_IDE_DISKS - 1) . ").",
 };
 PVE::JSONSchema::register_standard_option("pve-qm-ide", $idedesc);
 
@@ -285,8 +285,7 @@ my $scsi_fmt = {
 my $scsidesc = {
 optional => 1,
 type => 'string', format => $scsi_fmt,
-description => "Use volume as SCSI hard disk or CD-ROM (n is 0 to " . 
($MAX_SCSI_DISKS - 1) . "). " .
-   $ALLOCATION_SYNTAX_DESC,
+description => "Use volume as SCSI hard disk or CD-ROM (n is 0 to " . 
($MAX_SCSI_DISKS - 1) . ").",
 };
 PVE::JSONSchema::register_standard_option("pve-qm-scsi", $scsidesc);
 
@@ -298,8 +297,7 @@ my $sata_fmt = {
 my $satadesc = {
 optional => 1,
 type => 'string', format => $sata_fmt,
-description => "Use volume as SATA hard disk or CD-ROM (n is 0 to " . 
($MAX_SATA_DISKS - 1). "). " .
-   $ALLOCATION_SYNTAX_DESC,
+description => "Use volume as SATA hard disk or CD-ROM (n is 0 to " . 
($MAX_SATA_DISKS - 1). ").",
 };
 PVE::JSONSchema::register_standard_option("pve-qm-sata", $satadesc);
 
@@ -311,8 +309,7 @@ my $virtio_fmt = {
 my $virtiodesc = {
 optional => 1,
 type => 'string', format => $virtio_fmt,
-description => "Use volume as VIRTIO hard disk (n is 0 to " . 
($MAX_VIRTIO_DISKS - 1) . "). " .
-   $ALLOCATION_SYNTAX_DESC,
+description => "Use volume as VIRTIO hard disk (n is 0 to " . 
($MAX_VIRTIO_DISKS - 1) . ").",
 };
 PVE::JSONSchema::register_standard_option("pve-qm-virtio", $virtiodesc);
 
@@ -359,9 +356,7 @@ my $efidisk_fmt = {
 my $efidisk_desc = {
 optional => 1,
 type => 'string', format => $efidisk_fmt,
-description => "Configure a Disk for storing EFI vars. " .
-   $ALLOCATION_SYNTAX_D
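(The email is cut off above, but the pattern is visible.) For illustration: each drive description is deep-cloned and only the clone gets the allocation-syntax hint, so the plain config schema stays untouched. In Python, with copy.deepcopy standing in for Storable::dclone and a single made-up entry:

```python
import copy

drivedesc_hash = {
    "ide0": {"optional": True, "type": "string",
             "description": "Use volume as IDE hard disk or CD-ROM (n is 0 to 3)."},
}

ALLOC_DESC = "Use the special syntax STORAGE_ID:SIZE_IN_GiB to allocate a new volume."

# Deep-copy each entry so the extended description never leaks into the
# config schema, then append the allocation hint only to the copy.
drivedesc_hash_with_alloc = {}
for opt, desc in drivedesc_hash.items():
    with_alloc = copy.deepcopy(desc)
    with_alloc["description"] += " " + ALLOC_DESC
    drivedesc_hash_with_alloc[opt] = with_alloc

print("STORAGE_ID" in drivedesc_hash["ide0"]["description"])             # False
print("STORAGE_ID" in drivedesc_hash_with_alloc["ide0"]["description"])  # True
```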

[pve-devel] [RFC v10 qemu-server/manager] API for disk import and OVF

2022-01-13 Thread Fabian Ebner
Extend qm importdisk/importovf functionality to the API.


Used Dominic's latest version[0] as a starting point. GUI part still
needs to be rebased/updated, so it's not included here.


Changes from v9:

* Split patch into smaller parts

* Some minor (style) fixes/improvements (see individual patches)

* Drop $manifest_only parameter for parse_ovf

Instead, untaint the path when calling file_size_info, which makes
the call also work via the API, which uses the -T switch. If we do want
to keep $manifest_only, I'd argue that it should also skip the
file existence check and not only the file size check. Opinions?

* Re-use do_import function

Rather than duplicating most of it. The down side is the need to
add a new parameter for skipping configuration update. But I
suppose the plan is to have qm import switch to the new API at
some point, and then do_import can be simplified.

* Instead of adding an import-sources parameter to the API, use a new
  import-from property for the disk, that's only available with
  import/alloc-enabled API endpoints via its own version of the schema

Avoids the split across regular drive key parameters and
'import_sources', which avoids quite a bit of cross-checking
between the two and parsing/passing around the latter.

The big downsides are:
* Schema handling is a bit messy.
* Need to special case print_drive, because we do intermediate
  parse/print to clean up drive paths. At first glance, this
  seems not too easy to avoid without complicating things elsewhere.
* Using the import-aware parse_drive in parse_volume, because that
  is used via the foreach_volume iterators handling the parameters
  of the import-enabled endpoints. Could be avoided by using for
  loops with the import-aware parse_drive instead of
  foreach_volume.

Counter-arguments for using a single schema (citing Fabian G.):
* docs/schema dump/api docs: shouldn't look like you can put that
  everywhere where we use the config schema
* shouldn't have nasty side-effects if someone puts it into the
  config


After all, the 'import-from' disk property approach turned out to be
a bit uglier than I hoped it would.

My problem with the 'import-sources' API parameter approach (see [0]
for details) is that it requires specifying both
scsi0: <storage ID>:-1,
import-sources: scsi0=/path/or/volid/for/source
leading to a not ideal user interface and parameter handling needing
cross-checks to verify that the right combination is specified, and
passing both around at the same time.

Another approach would be adding a new special volid syntax using
my $IMPORT_DISK_RE = qr!^(([^/:\s]+):)import:(.*)$!;
allowing for e.g.
qm set 126 -scsi1 rbdkvm:import:myzpool:vm-114-disk-0,aio=native
qm set 126 -scsi2 rbdkvm:import:/dev/zvol/myzpool/vm-114-disk-1,backup=0
Yes, it's a hack, but it would avoid the pain points from both other
approaches and be very simple. See the end of the mail for a POC.
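For illustration, the proposed $IMPORT_DISK_RE transcribed to Python, showing how the three pieces (target storage, literal "import", source) split out:

```python
import re

# transcription of: my $IMPORT_DISK_RE = qr!^(([^/:\s]+):)import:(.*)$!;
IMPORT_DISK_RE = re.compile(r'^(([^/:\s]+):)import:(.*)$')

for volid in ("rbdkvm:import:myzpool:vm-114-disk-0",
              "rbdkvm:import:/dev/zvol/myzpool/vm-114-disk-1",
              "local:4"):
    m = IMPORT_DISK_RE.match(volid)
    if m:
        print(m.group(2), "<-", m.group(3))  # target storage <- source
    else:
        print(volid, "is not an import request")
```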


[0]: https://lists.proxmox.com/pipermail/pve-devel/2021-June/048564.html


pve-manager:

Fabian Ebner (1):
  api: nodes: add readovf endpoint

 PVE/API2/Nodes.pm | 7 +++
 1 file changed, 7 insertions(+)


qemu-server:

Dominic Jäger (1):
  api: support VM disk import

Fabian Ebner (6):
  schema: add pve-volume-id-or-absolute-path
  parse ovf: untaint path when calling file_size_info
  api: add endpoint for parsing .ovf files
  image convert: allow block device as source
  schema: drive: use separate schema when disk allocation is possible
  api: create disks: factor out common part from if/else

 PVE/API2/Qemu.pm | 86 +++--
 PVE/API2/Qemu/Makefile   |  2 +-
 PVE/API2/Qemu/OVF.pm | 55 +
 PVE/QemuConfig.pm|  2 +-
 PVE/QemuServer.pm| 57 +++---
 PVE/QemuServer/Drive.pm  | 92 +++-
 PVE/QemuServer/ImportDisk.pm |  2 +-
 PVE/QemuServer/OVF.pm|  9 ++--
 8 files changed, 242 insertions(+), 63 deletions(-)
 create mode 100644 PVE/API2/Qemu/OVF.pm

-- 
2.30.2

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 6992f6f..6a22899 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -21,8 +21,9 @@ use PVE::ReplicationConfig;
 use PVE::GuestHelpers;
 use PVE::QemuConfig;
 use PVE::QemuServer;
-use PVE::QemuServer::Drive;
 use PVE::QemuServer::CPUConfig;
+use PVE::QemuServer::Drive;
+use PVE::QemuServer::ImportDisk;
 use PVE::QemuServer::Monitor qw(mon_cmd);
 use PVE::QemuServer::Machine;
 use PVE::QemuMigrate;
@@ -64,6 +65,7 @@ my $resolve_cdrom_alias = sub {
 };
 
 my $NEW_DISK_RE = qr!^(([^/:\s]+):)?(\d+(\.\d+)?)$!;
+my $IMPORT_DISK_RE = qr!^(([^/:\s]+):)import:(.*)$!;
 my $check_storage_access = sub {
my ($rpcenv, $authuser, $storecfg, $vmid, $settings, $default_storage) = @_;
 
@@ -86,6 +88,9 @@ my $check_storage_access = sub {
my $scfg = PVE::Storage::storage_config($st

[pve-devel] [PATCH v2 guest-common 1/2] config: remove unused variable

2022-01-13 Thread Fabian Ebner
Signed-off-by: Fabian Ebner 
---
 src/PVE/AbstractConfig.pm | 2 --
 1 file changed, 2 deletions(-)

diff --git a/src/PVE/AbstractConfig.pm b/src/PVE/AbstractConfig.pm
index 4ea6c80..0c40062 100644
--- a/src/PVE/AbstractConfig.pm
+++ b/src/PVE/AbstractConfig.pm
@@ -877,8 +877,6 @@ my $snapshot_delete_assert_not_needed_by_replication = sub {
 sub snapshot_delete {
 my ($class, $vmid, $snapname, $force, $drivehash) = @_;
 
-my $prepare = 1;
-
 my $unused = [];
 
 my $conf = $class->load_config($vmid);
-- 
2.30.2






[pve-devel] [PATCH v2 container 2/3] config: parse_volume: don't die when noerr is set

2022-01-13 Thread Fabian Ebner
AFAICT, the only existing callers using noerr=1 are in
__snapshot_delete_remove_drive, and in AbstractConfig's
foreach_volume_full. The former should not be affected, as unknown
keys should never make their way in there. For the latter, it makes
iterating with
$opts = { extra_keys => ['vmstate'] }
possible while being agnostic of guest type. Previously, it would die
for LXC configs, but now the unknown key is simply skipped there.

Signed-off-by: Fabian Ebner 
---
 src/PVE/LXC/Config.pm | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/src/PVE/LXC/Config.pm b/src/PVE/LXC/Config.pm
index 32d990c..7db023c 100644
--- a/src/PVE/LXC/Config.pm
+++ b/src/PVE/LXC/Config.pm
@@ -1191,7 +1191,9 @@ sub parse_volume {
return $parse_ct_mountpoint_full->($class, $unused_desc, 
$volume_string, $noerr);
 }
 
-die "parse_volume - unknown type: $key\n";
+die "parse_volume - unknown type: $key\n" if !$noerr;
+
+return;
 }
 
 sub print_volume {
-- 
2.30.2
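The noerr contract this patch establishes — unknown keys either die or are silently skipped, so generic iterators can probe keys that only exist for one guest type — sketched in Python (hypothetical key set, not the LXC code):

```python
def parse_volume(key, volume_string, noerr=False):
    """With noerr set, an unknown key returns None instead of raising,
    letting foreach_volume_full iterate with extra_keys => ['vmstate']
    over both VM and container configs."""
    known = {"rootfs", "mp0", "unused0"}  # hypothetical subset
    if key in known:
        return {"key": key, "volume": volume_string}
    if noerr:
        return None
    raise ValueError(f"parse_volume - unknown type: {key}")

print(parse_volume("vmstate", "local:state-volume", noerr=True))  # None
```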






[pve-devel] [PATCH v2 guest-common 2/2] config: activate affected storages for snapshot operations

2022-01-13 Thread Fabian Ebner
For snapshot creation, the storage for the vmstate file is activated
via vdisk_alloc when the state file is created.

Do not activate the volumes themselves, as that has unnecessary side
effects (e.g. waiting for zvol device link for ZFS, mapping the volume
for RBD). If a storage can only do snapshot operations on a volume
that has been activated, it needs to activate the volume itself.

The actual implementation will be in the plugins to be able to skip
CD ROM drives and bind-mounts, etc.

Signed-off-by: Fabian Ebner 
---
 src/PVE/AbstractConfig.pm | 13 +
 1 file changed, 13 insertions(+)

diff --git a/src/PVE/AbstractConfig.pm b/src/PVE/AbstractConfig.pm
index 0c40062..2d15388 100644
--- a/src/PVE/AbstractConfig.pm
+++ b/src/PVE/AbstractConfig.pm
@@ -786,6 +786,13 @@ sub __snapshot_commit {
 $class->lock_config($vmid, $updatefn);
 }
 
+# Activates the storages affected by the snapshot operations.
+sub __snapshot_activate_storages {
+my ($class, $conf, $include_vmstate) = @_;
+
+return; # FIXME PVE 8.x change to die 'implement me' and bump Breaks for 
older plugins
+}
+
 # Creates a snapshot for the VM/CT.
 sub snapshot_create {
 my ($class, $vmid, $snapname, $save_vmstate, $comment) = @_;
@@ -801,6 +808,8 @@ sub snapshot_create {
 my $drivehash = {};
 
 eval {
+   $class->__snapshot_activate_storages($conf, 0);
+
if ($freezefs) {
$class->__snapshot_freeze($vmid, 0);
}
@@ -884,6 +893,8 @@ sub snapshot_delete {
 
 die "snapshot '$snapname' does not exist\n" if !defined($snap);
 
+$class->__snapshot_activate_storages($snap, 1) if !$drivehash;
+
 $snapshot_delete_assert_not_needed_by_replication->($class, $vmid, $conf, 
$snap, $snapname)
if !$drivehash && !$force;
 
@@ -1085,6 +1096,8 @@ sub snapshot_rollback {
$snap = $get_snapshot_config->($conf);
 
if ($prepare) {
+   $class->__snapshot_activate_storages($snap, 1);
+
$rollback_remove_replication_snapshots->($class, $vmid, $snap, 
$snapname);
 
$class->foreach_volume($snap, sub {
-- 
2.30.2
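The collection step the plugins will implement amounts to: walk every volume, skip CD-ROM drives (and bind-mounts for containers), dedupe the storage IDs, and activate them in one batch. A Python sketch over a flat config dict — the key-prefix drive detection is a simplification of foreach_volume, not how the real code finds drives:

```python
def storages_for_snapshot(conf, include_vmstate=False):
    """Gather the unique storage IDs behind every drive so they can be
    handed to activate_storage_list in one call."""
    storages = set()
    for key, value in conf.items():
        is_drive = key.startswith(("ide", "scsi", "sata", "virtio"))
        if not (is_drive or (include_vmstate and key == "vmstate")):
            continue
        if "media=cdrom" in value:   # CD-ROM drives are skipped
            continue
        storeid = value.split(",")[0].split(":")[0]  # "<storage>:<volname>,..."
        storages.add(storeid)
    return sorted(storages)

print(storages_for_snapshot({"ide0": "local:disk-1,size=32G",
                             "ide2": "none,media=cdrom"}))  # ['local']
```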






[pve-devel] [PATCH v2 qemu-server 1/1] snapshot: implement __snapshot_activate_storages

2022-01-13 Thread Fabian Ebner
Signed-off-by: Fabian Ebner 
---

Build-depends on guest-common.

 PVE/QemuConfig.pm | 19 +++
 .../create/qemu-server/303.conf   | 13 +++
 .../delete/qemu-server/204.conf   | 33 ++
 .../rollback/qemu-server/303.conf | 34 +++
 .../create/qemu-server/303.conf   | 13 +++
 .../delete/qemu-server/204.conf   | 33 ++
 .../rollback/qemu-server/303.conf | 34 +++
 test/snapshot-test.pm | 32 +
 8 files changed, 211 insertions(+)
 create mode 100644 test/snapshot-expected/create/qemu-server/303.conf
 create mode 100644 test/snapshot-expected/delete/qemu-server/204.conf
 create mode 100644 test/snapshot-expected/rollback/qemu-server/303.conf
 create mode 100644 test/snapshot-input/create/qemu-server/303.conf
 create mode 100644 test/snapshot-input/delete/qemu-server/204.conf
 create mode 100644 test/snapshot-input/rollback/qemu-server/303.conf

diff --git a/PVE/QemuConfig.pm b/PVE/QemuConfig.pm
index b993378..cfef8d3 100644
--- a/PVE/QemuConfig.pm
+++ b/PVE/QemuConfig.pm
@@ -241,6 +241,25 @@ sub __snapshot_save_vmstate {
 return $statefile;
 }
 
+sub __snapshot_activate_storages {
+my ($class, $conf, $include_vmstate) = @_;
+
+my $storecfg = PVE::Storage::config();
+my $opts = $include_vmstate ? { 'extra_keys' => ['vmstate'] } : {};
+my $storage_hash = {};
+
+$class->foreach_volume_full($conf, $opts, sub {
+   my ($key, $drive) = @_;
+
+   return if PVE::QemuServer::drive_is_cdrom($drive);
+
+   my ($storeid) = PVE::Storage::parse_volume_id($drive->{file});
+   $storage_hash->{$storeid} = 1;
+});
+
+PVE::Storage::activate_storage_list($storecfg, [ sort keys 
$storage_hash->%* ]);
+}
+
 sub __snapshot_check_running {
 my ($class, $vmid) = @_;
 return PVE::QemuServer::Helpers::vm_running_locally($vmid);
diff --git a/test/snapshot-expected/create/qemu-server/303.conf 
b/test/snapshot-expected/create/qemu-server/303.conf
new file mode 100644
index 000..2731bd1
--- /dev/null
+++ b/test/snapshot-expected/create/qemu-server/303.conf
@@ -0,0 +1,13 @@
+bootdisk: ide0
+cores: 4
+ide0: local:snapshotable-disk-1,discard=on,size=32G
+ide2: none,media=cdrom
+machine: q35
+memory: 8192
+name: win
+net0: e1000=12:34:56:78:90:12,bridge=somebr0,firewall=1
+numa: 0
+ostype: win7
+smbios1: uuid=01234567-890a-bcde-f012-34567890abcd
+sockets: 1
+vga: qxl
diff --git a/test/snapshot-expected/delete/qemu-server/204.conf 
b/test/snapshot-expected/delete/qemu-server/204.conf
new file mode 100644
index 000..c521154
--- /dev/null
+++ b/test/snapshot-expected/delete/qemu-server/204.conf
@@ -0,0 +1,33 @@
+agent: 1
+bootdisk: ide0
+cores: 4
+ide0: local:snapshotable-disk-1,discard=on,size=32G
+ide2: none,media=cdrom
+memory: 8192
+name: win
+net0: e1000=12:34:56:78:90:12,bridge=somebr0,firewall=1
+numa: 0
+ostype: win7
+parent: test
+smbios1: uuid=01234567-890a-bcde-f012-34567890abcd
+sockets: 1
+vga: qxl
+
+[test]
+#test comment
+agent: 1
+bootdisk: ide0
+cores: 4
+ide0: local:snapshotable-disk-1,discard=on,size=32G
+ide2: none,media=cdrom
+machine: somemachine
+memory: 8192
+name: win
+net0: e1000=12:34:56:78:90:12,bridge=somebr0,firewall=1
+numa: 0
+ostype: win7
+smbios1: uuid=01234567-890a-bcde-f012-34567890abcd
+snaptime: 1234567890
+sockets: 1
+vga: qxl
+vmstate: somestorage:state-volume
diff --git a/test/snapshot-expected/rollback/qemu-server/303.conf b/test/snapshot-expected/rollback/qemu-server/303.conf
new file mode 100644
index 000..518c954
--- /dev/null
+++ b/test/snapshot-expected/rollback/qemu-server/303.conf
@@ -0,0 +1,34 @@
+agent: 1
+bootdisk: ide0
+cores: 4
+ide0: local:snapshotable-disk-1,discard=on,size=32G
+ide2: none,media=cdrom
+memory: 8192
+name: win
+net0: e1000=12:34:56:78:90:12,bridge=somebr0,firewall=1
+numa: 0
+ostype: win7
+parent: test
+smbios1: uuid=01234567-890a-bcde-f012-34567890abcd
+sockets: 1
+vga: qxl
+
+[test]
+#test comment
+agent: 1
+bootdisk: ide0
+cores: 4
+ide0: local:snapshotable-disk-1,discard=on,size=32G
+ide2: none,media=cdrom
+machine: q35
+memory: 8192
+name: win
+net0: e1000=12:34:56:78:90:12,bridge=somebr0,firewall=1
+numa: 0
+ostype: win7
+runningmachine: somemachine
+smbios1: uuid=01234567-890a-bcde-f012-34567890abcd
+snaptime: 1234567890
+sockets: 1
+vga: qxl
+vmstate: somestorage:state-volume
diff --git a/test/snapshot-input/create/qemu-server/303.conf b/test/snapshot-input/create/qemu-server/303.conf
new file mode 100644
index 000..2731bd1
--- /dev/null
+++ b/test/snapshot-input/create/qemu-server/303.conf
@@ -0,0 +1,13 @@
+bootdisk: ide0
+cores: 4
+ide0: local:snapshotable-disk-1,discard=on,size=32G
+ide2: none,media=cdrom
+machine: q35
+memory: 8192
+name: win
+net0: e1000=12:34:56:78:90:12,bridge=somebr0,firewall=1
+numa: 0
+ostype: win7
+smbios1: uuid=01234567-890a-bcde-f012-34567890abcd
+sockets: 1
+

[pve-devel] [PATCH v2 container 1/3] config: snapshot_delete_remove_drive: check for parsed value

2022-01-13 Thread Fabian Ebner
parse_volume is called with noerr=1, so the result might be undef
instead of the hash we expect.
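As an illustration of the convention behind the fix, here is a minimal
sketch (hypothetical code, not the actual PVE parser; parse_volume_sketch
and its return shape are made up for illustration) of how noerr-style
parsers behave:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical stand-in for parse_volume: with noerr set, the parser
# returns undef on invalid input instead of dying.
sub parse_volume_sketch {
    my ($value, $noerr) = @_;
    if ($value !~ m/^[^:]+:[^:]+$/) {
        die "unable to parse volume '$value'\n" if !$noerr;
        return undef;    # caller must check definedness
    }
    return { volume => $value, type => 'volume' };
}

my $mp = parse_volume_sketch('not-a-volume', 1);
# Dereferencing $mp->{type} here would die with "Can't use an undefined
# value as a HASH reference", hence the added definedness check:
print "unparsable, skipping\n" if !defined($mp);
```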

Signed-off-by: Fabian Ebner 
---
 src/PVE/LXC/Config.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/PVE/LXC/Config.pm b/src/PVE/LXC/Config.pm
index 6c2acd6..32d990c 100644
--- a/src/PVE/LXC/Config.pm
+++ b/src/PVE/LXC/Config.pm
@@ -192,7 +192,7 @@ sub __snapshot_delete_remove_drive {
delete $snap->{$remove_drive};
 
$class->add_unused_volume($snap, $mountpoint->{volume})
-   if ($mountpoint->{type} eq 'volume');
+   if $mountpoint && ($mountpoint->{type} eq 'volume');
 }
 }
 
-- 
2.30.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH-SERIES v2 guest-common/qemu-server/container] activate storages for snapshot operations

2022-01-13 Thread Fabian Ebner
This makes snapshot operations work when the storage is simply not
active yet, and produces early errors when the storage cannot be
activated. It also prohibits snapshot operations when an involved
storage is disabled but otherwise available.
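The core idea, as a minimal sketch (simplified; the real implementation
is the per-guest __snapshot_activate_storages added later in this series,
and $storecfg plus the list of volume IDs are assumed to come from the
caller):

```perl
use PVE::Storage;

# Collect each involved storage once, then activate them all up front so
# that a snapshot operation fails early if a storage cannot be activated.
sub activate_snapshot_storages_sketch {
    my ($storecfg, @volids) = @_;

    my $storage_hash = {};
    for my $volid (@volids) {
        my ($storeid) = PVE::Storage::parse_volume_id($volid);
        $storage_hash->{$storeid} = 1;
    }

    PVE::Storage::activate_storage_list($storecfg, [ sort keys %$storage_hash ]);
}
```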


Both qemu-server and pve-container build-depend upon pve-guest-common
for the added tests.


Changes from v1:
* Rebase on current master.


guest-common:

Fabian Ebner (2):
  config: remove unused variable
  config: activate affected storages for snapshot operations

 src/PVE/AbstractConfig.pm | 15 +--
 1 file changed, 13 insertions(+), 2 deletions(-)


qemu-server:

Fabian Ebner (1):
  snapshot: implement __snapshot_activate_storages

 PVE/QemuConfig.pm | 19 +++
 .../create/qemu-server/303.conf   | 13 +++
 .../delete/qemu-server/204.conf   | 33 ++
 .../rollback/qemu-server/303.conf | 34 +++
 .../create/qemu-server/303.conf   | 13 +++
 .../delete/qemu-server/204.conf   | 33 ++
 .../rollback/qemu-server/303.conf | 34 +++
 test/snapshot-test.pm | 32 +
 8 files changed, 211 insertions(+)
 create mode 100644 test/snapshot-expected/create/qemu-server/303.conf
 create mode 100644 test/snapshot-expected/delete/qemu-server/204.conf
 create mode 100644 test/snapshot-expected/rollback/qemu-server/303.conf
 create mode 100644 test/snapshot-input/create/qemu-server/303.conf
 create mode 100644 test/snapshot-input/delete/qemu-server/204.conf
 create mode 100644 test/snapshot-input/rollback/qemu-server/303.conf


container:

Fabian Ebner (3):
  config: snapshot_delete_remove_drive: check for parsed value
  config: parse_volume: don't die when noerr is set
  snapshot: implement __snapshot_activate_storages

 src/PVE/LXC/Config.pm | 25 +--
 .../snapshot-expected/create/lxc/204.conf | 10 ++
 .../snapshot-expected/delete/lxc/204.conf | 25 +++
 .../snapshot-expected/rollback/lxc/209.conf   | 29 +
 src/test/snapshot-input/create/lxc/204.conf   | 10 ++
 src/test/snapshot-input/delete/lxc/204.conf   | 25 +++
 src/test/snapshot-input/rollback/lxc/209.conf | 29 +
 src/test/snapshot-test.pm | 32 +++
 8 files changed, 183 insertions(+), 2 deletions(-)
 create mode 100644 src/test/snapshot-expected/create/lxc/204.conf
 create mode 100644 src/test/snapshot-expected/delete/lxc/204.conf
 create mode 100644 src/test/snapshot-expected/rollback/lxc/209.conf
 create mode 100644 src/test/snapshot-input/create/lxc/204.conf
 create mode 100644 src/test/snapshot-input/delete/lxc/204.conf
 create mode 100644 src/test/snapshot-input/rollback/lxc/209.conf

-- 
2.30.2






[pve-devel] [PATCH v2 container 3/3] snapshot: implement __snapshot_activate_storages

2022-01-13 Thread Fabian Ebner
Signed-off-by: Fabian Ebner 
---

Build depends on guest-common.

 src/PVE/LXC/Config.pm | 19 +++
 .../snapshot-expected/create/lxc/204.conf | 10 ++
 .../snapshot-expected/delete/lxc/204.conf | 25 +++
 .../snapshot-expected/rollback/lxc/209.conf   | 29 +
 src/test/snapshot-input/create/lxc/204.conf   | 10 ++
 src/test/snapshot-input/delete/lxc/204.conf   | 25 +++
 src/test/snapshot-input/rollback/lxc/209.conf | 29 +
 src/test/snapshot-test.pm | 32 +++
 8 files changed, 179 insertions(+)
 create mode 100644 src/test/snapshot-expected/create/lxc/204.conf
 create mode 100644 src/test/snapshot-expected/delete/lxc/204.conf
 create mode 100644 src/test/snapshot-expected/rollback/lxc/209.conf
 create mode 100644 src/test/snapshot-input/create/lxc/204.conf
 create mode 100644 src/test/snapshot-input/delete/lxc/204.conf
 create mode 100644 src/test/snapshot-input/rollback/lxc/209.conf

diff --git a/src/PVE/LXC/Config.pm b/src/PVE/LXC/Config.pm
index 7db023c..9429c59 100644
--- a/src/PVE/LXC/Config.pm
+++ b/src/PVE/LXC/Config.pm
@@ -101,6 +101,25 @@ sub __snapshot_save_vmstate {
 die "implement me - snapshot_save_vmstate\n";
 }
 
+sub __snapshot_activate_storages {
+my ($class, $conf, $include_vmstate) = @_;
+
+my $storecfg = PVE::Storage::config();
+my $opts = $include_vmstate ? { 'extra_keys' => ['vmstate'] } : {};
+my $storage_hash = {};
+
+$class->foreach_volume_full($conf, $opts, sub {
+   my ($vs, $mountpoint) = @_;
+
+   return if $mountpoint->{type} ne 'volume';
+
+   my ($storeid) = PVE::Storage::parse_volume_id($mountpoint->{volume});
+   $storage_hash->{$storeid} = 1;
+});
+
+PVE::Storage::activate_storage_list($storecfg, [ sort keys $storage_hash->%* ]);
+}
+
 sub __snapshot_check_running {
 my ($class, $vmid) = @_;
 return PVE::LXC::check_running($vmid);
diff --git a/src/test/snapshot-expected/create/lxc/204.conf b/src/test/snapshot-expected/create/lxc/204.conf
new file mode 100644
index 000..4546668
--- /dev/null
+++ b/src/test/snapshot-expected/create/lxc/204.conf
@@ -0,0 +1,10 @@
+arch: amd64
+cpulimit: 1
+cpuunits: 1024
+hostname: test
+memory: 2048
+mp0: local:unsnapshotable-disk-1,mp=/invalid/mountpoint
+net0: bridge=vmbr0,hwaddr=12:34:56:78:90:12,ip=dhcp,ip6=dhcp,name=eth0,type=veth
+ostype: redhat
+rootfs: local:snapshotable-disk-1
+swap: 512
diff --git a/src/test/snapshot-expected/delete/lxc/204.conf b/src/test/snapshot-expected/delete/lxc/204.conf
new file mode 100644
index 000..a21c535
--- /dev/null
+++ b/src/test/snapshot-expected/delete/lxc/204.conf
@@ -0,0 +1,25 @@
+arch: amd64
+cpulimit: 1
+cpuunits: 1024
+hostname: test
+memory: 2048
+mp0: local:unsnapshotable-disk-1,mp=/invalid/mountpoint
+net0: bridge=vmbr0,hwaddr=12:34:56:78:90:12,ip=dhcp,ip6=dhcp,name=eth0,type=veth
+ostype: redhat
+parent: test
+rootfs: local:snapshotable-disk-1
+swap: 512
+
+[test]
+#test comment
+arch: amd64
+cpulimit: 1
+cpuunits: 1024
+hostname: test
+memory: 2048
+mp0: local:unsnapshotable-disk-1,mp=/invalid/mountpoint
+net0: bridge=vmbr0,hwaddr=12:34:56:78:90:12,ip=dhcp,ip6=dhcp,name=eth0,type=veth
+ostype: redhat
+rootfs: local:snapshotable-disk-1
+snaptime: 1234567890
+swap: 512
diff --git a/src/test/snapshot-expected/rollback/lxc/209.conf b/src/test/snapshot-expected/rollback/lxc/209.conf
new file mode 100644
index 000..c9a23c9
--- /dev/null
+++ b/src/test/snapshot-expected/rollback/lxc/209.conf
@@ -0,0 +1,29 @@
+# should be preserved
+arch: amd64
+cpulimit: 1
+cpuunits: 1024
+hostname: test
+memory: 2048
+mp0: local:snapshotable-disk-2,mp=/invalid/mp0
+mp1: local:unsnapshotable-disk-1,mp=/invalid/mp1
net0: bridge=vmbr0,hwaddr=12:34:56:78:90:12,ip=dhcp,ip6=dhcp,name=eth0,type=veth
+ostype: redhat
+parent: test
+rootfs: local:snapshotable-disk-1
+swap: 512
+unused0: preserved:some-disk-1
+
+[test]
+# should be thrown away
+arch: amd64
+cpulimit: 2
+cpuunits: 2048
+hostname: test2
+memory: 4096
+mp0: local:snapshotable-disk-2,mp=/invalid/mp0
+mp1: local:snapshotable-disk-4,mp=/invalid/mp1
net0: bridge=vmbr0,hwaddr=12:34:56:78:90:12,ip=dhcp,ip6=dhcp,name=eth0,type=veth
+ostype: redhat
+rootfs: local:snapshotable-disk-1
+snaptime: 1234567890
+swap: 1024
diff --git a/src/test/snapshot-input/create/lxc/204.conf b/src/test/snapshot-input/create/lxc/204.conf
new file mode 100644
index 000..4546668
--- /dev/null
+++ b/src/test/snapshot-input/create/lxc/204.conf
@@ -0,0 +1,10 @@
+arch: amd64
+cpulimit: 1
+cpuunits: 1024
+hostname: test
+memory: 2048
+mp0: local:unsnapshotable-disk-1,mp=/invalid/mountpoint
+net0: bridge=vmbr0,hwaddr=12:34:56:78:90:12,ip=dhcp,ip6=dhcp,name=eth0,type=veth
+ostype: redhat
+rootfs: local:snapshotable-disk-1
+swap: 512
diff --git a/src/test/snapshot-input/delete/lxc/204.conf b/src/test/snapshot-input/delete/lxc/204.conf
new file mode 100644
index 000..a21c535
--- 

Re: [pve-devel] [PATCH-SERIES v2 storage/widget-toolkit/manager] fix #3610: properly build ZFS detail tree

2022-01-13 Thread Fabian Ebner

Ping for the widget-toolkit and manager patches.

Am 10.09.21 um 13:45 schrieb Fabian Ebner:

Correctly add top-level vdevs like log or special as children of the
root, instead of the previous outer vdev.

Since the API returns only one element, it has to be the pool itself,
but there also is a vdev with the same name. In the GUI, hide the
pool and show its health as part of the upper grid instead.
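The parenting bug described above can be pictured with a generic sketch
(hypothetical code, not the actual PVE/API2/Disks/ZFS.pm implementation):
keep one parent per indentation depth, so every entry attaches to the
parent one level up, and top-level vdevs end up under the root instead of
under whichever inner vdev happened to be parsed last.

```perl
# Hypothetical sketch: build a vdev tree from indented `zpool status`-like
# lines. $stack[$depth] tracks the current parent at each depth.
sub build_vdev_tree_sketch {
    my (@lines) = @_;

    my $root = { name => 'root', children => [] };
    my @stack = ($root);

    for my $line (@lines) {
        my ($indent, $name) = $line =~ m/^(\s*)(\S+)/ or next;
        my $depth = 1 + length($indent) / 2;    # assumes 2 spaces per level
        my $node = { name => $name, children => [] };
        push @{ $stack[$depth - 1]->{children} }, $node;
        $stack[$depth] = $node;
    }
    return $root;
}
```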


Changes from v1:
 * add GUI patches


See the pve-manager patch for the needed dependency bumps.


pve-storage:

Fabian Ebner (1):
   fix #3610: properly build ZFS detail tree

  PVE/API2/Disks/ZFS.pm | 4 ++--
  1 file changed, 2 insertions(+), 2 deletions(-)


proxmox-widget-toolkit:

Fabian Ebner (2):
   zfs detail: increase window height
   zfs detail: hide the pool itself in tree view

  src/window/ZFSDetail.js | 9 +++--
  1 file changed, 7 insertions(+), 2 deletions(-)


pve-manager:

Fabian Ebner (1):
   ui: node: zfs: use ZFSDetail window from widget-toolkit

  www/manager6/node/ZFS.js | 167 +--
  1 file changed, 3 insertions(+), 164 deletions(-)







[pve-devel] applied-series: [PATCH-SERIES v2 storage/widget-toolkit/manager] fix #3610: properly build ZFS detail tree

2022-01-13 Thread Thomas Lamprecht
On 13.01.22 12:21, Fabian Ebner wrote:
> Ping for the widget-toolkit and manager patches.
> 

applied remaining parts of this series, thanks!





[pve-devel] applied: [PATCH manager] add 'auto' mode for noVNC scaling

2022-01-13 Thread Thomas Lamprecht
On 13.01.22 08:56, Dominik Csapak wrote:
> in commit
> 69c6561f820b4fdb5625ead767889155db9f1539 ("ui: fix novnc scaling radio 
> button")
> 
> we always set to 'scale' when no value was set, but a non-set value
> actually had a different behaviour:
> 
> in the embedded console it was set to 'scale', but in the pop-up it was
> set to 'off'.
> 
> to restore this behaviour, introduce an option 'auto' which unsets the
> 'novnc-scaling' setting
> 
> Signed-off-by: Dominik Csapak 
> ---
>  www/manager6/window/Settings.js | 16 ++--
>  1 file changed, 14 insertions(+), 2 deletions(-)
> 
>

applied, thanks!





[pve-devel] applied-series: [PATCH zfsonlinux/stable-6 0/2] update zfs to 2.0.7

2022-01-13 Thread Thomas Lamprecht
On 11.01.22 16:02, Stoiko Ivanov wrote:
> 2.0.7 contains a few commits which might affect our users e.g.:
> `ZFS send/recv with ashift 9->12 leads to data corruption`
> 
> the second commit is a cherry-pick from our current master
> (abigail failures should not cause the build to abort)
> 
> built and booted the kernel on one of our hardware-testhosts
> 
> Aron Xu (1):
>   d/rules: allow abigail to fail
> 
> Stoiko Ivanov (1):
>   update submodule and patches to ZFS 2.0.7
> 
>  debian/patches/0005-Enable-zed-emails.patch | 2 +-
>  debian/patches/0007-Use-installed-python3.patch | 6 +++---
>  debian/rules| 2 +-
>  upstream| 2 +-
>  4 files changed, 6 insertions(+), 6 deletions(-)
> 

applied, thanks!





[pve-devel] applied: [PATCH http-server] fix #3807: don't attempt response on closed handle

2022-01-13 Thread Thomas Lamprecht
On 29.12.21 12:15, Fabian Grünbichler wrote:
> if a client closes the connection while the API server is
> waiting/stalling here, the handle will disappear, and sending a response
> is no longer possible.
> 
> (this issue is only cosmetic, but if such clients are a regular
> occurrence it might get quite noisy in the logs)
> 
> Signed-off-by: Fabian Grünbichler 
> ---
>  src/PVE/APIServer/AnyEvent.pm | 1 +
>  1 file changed, 1 insertion(+)
> 
>

applied, thanks!




[pve-devel] partially-applied-series: [PATCH http-server/manager/pmg-api/docs 0/10] expose more TLS knobs

2022-01-13 Thread Thomas Lamprecht
On 17.12.21 13:57, Fabian Grünbichler wrote:
> this series adds the following options to /etc/default/$proxy, and
> corresponding handling in pveproxy/pmgproxy/api-server:
> 
> - TLS 1.3 ciphersuites (these are different to < 1.3 cipher lists)
> - disable TLS 1.2 / disable TLS 1.3 option (rest are disabled by default
>   anyway)
> - alternative location for pveproxy-ssl.key outside of /etc/pve (PVE
>   only)
> 
> while not strictly required, it probably makes sense to add a/bump the
> versioned dep from pve-manager/pmg-api to patched
> libpve-http-server-perl - nothing should break, but the new options are
> only handled if both packages are updated.
> 

applied the http-server part for now.
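For context, these knobs end up in /etc/default/pveproxy. A hedged example
fragment (option names as documented for pveproxy after this series; the
key path is made up to illustrate the "key outside /etc/pve" use case, so
double-check both against the applied docs patch):

```
# /etc/default/pveproxy
CIPHERSUITES="TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256"
DISABLE_TLS_1_2=1
TLS_KEY_FILE="/secrets/pveproxy-ssl.key"
```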





[pve-devel] applied: [PATCH widget-toolkit 1/1] window: safe destroy: make note more visible

2022-01-13 Thread Thomas Lamprecht
On 13.12.21 09:25, Fabian Ebner wrote:
> by not using a smaller font size and using the pmx-hint class. Also
> don't align to the middle, as everything else is left-aligned.
> 
> Signed-off-by: Fabian Ebner 
> ---
> 
> AFAICT, the only current user is datastore deletion in PBS and IMHO
> it doesn't look worse after these changes.
> 
>  src/window/SafeDestroy.js | 5 +
>  1 file changed, 1 insertion(+), 4 deletions(-)
> 
>

applied, thanks!





[pve-devel] applied-series: [PATCH proxmox-perl-rs/common] use calendar-events from rust

2022-01-13 Thread Thomas Lamprecht
On 01.12.21 09:55, Dominik Csapak wrote:
> this series replaces the perl calendar event implementation with the
> one in rust, using proxmox-perl-rs
> 
> the perl interface to 'PVE::CalendarEvent' is the same, but we could
> use PVE::RS::CalendarEvent directly downstream (guest-common/manager)
> but since we need the api type anyway i left that out for now
> 
> with this, we now get all features from the rust implementation
> in perl, most notably the date part of events, which makes it
> possible to have e.g. backups less than once per week (e.g. on the
> first of the month)
> 
> this depends of course on my series to add 'UTC' to the events[0]
> 
> 0: https://lists.proxmox.com/pipermail/pbs-devel/2021-December/004413.html
> 
> proxmox-perl-rs:
> 
> Dominik Csapak (1):
>   pve-rs: add PVE::RS::CalendarEvent
> 
>  pve-rs/Makefile  |  1 +
>  pve-rs/src/calendar_event.rs | 20 
>  pve-rs/src/lib.rs|  1 +
>  3 files changed, 22 insertions(+)
>  create mode 100644 pve-rs/src/calendar_event.rs
> 
> pve-common:
> 
> Dominik Csapak (1):
>   CalendarEvent: use rust implementation
> 
>  src/PVE/CalendarEvent.pm| 251 +---
>  test/calendar_event_test.pl |  42 +++---
>  2 files changed, 23 insertions(+), 270 deletions(-)
> 

Applied with a followup to actually add `proxmox-time` as a dependency in
Cargo.toml to avoid the following compile error:

   Compiling pve-rs v0.5.0 (/root/sources/pve/proxmox-perl-rs/build/pve-rs)
error[E0433]: failed to resolve: use of undeclared crate or module `proxmox_time`
 --> src/calendar_event.rs:9:26
  |
9 | struct CalendarEvent(proxmox_time::CalendarEvent);
  |                      ^^^^^^^^^^^^ use of undeclared crate or module `proxmox_time`







[pve-devel] applied-series: [PATCH pve-common 0/4] improve host read_proc_stat

2022-01-13 Thread Thomas Lamprecht
On 10.01.22 05:52, Alexandre Derumier wrote:
> This patch series improve current host cpu stats
> 
> 
> Alexandre Derumier (4):
>   read_proc_stat : initialize newer fields to 0
>   read_proc_stat: substract guest && guest_nice from user && nice time
>   read_proc_stat: add irq/softirq/steal to total used cpu
>   read_proc_stat: use total of fields to compute percentage
> 
>  src/PVE/ProcFSTools.pm | 22 ++
>  1 file changed, 14 insertions(+), 8 deletions(-)
> 

applied series, thanks!





[pve-devel] applied: [PATCH manager 1/1] window: safe destroy guest: add note that referenced disks are destroyed

2022-01-13 Thread Thomas Lamprecht
On 13.12.21 09:25, Fabian Ebner wrote:
> It's not clear to all users otherwise[0].
> 
> [0]: https://forum.proxmox.com/threads/100996/post-436919
> 
> Signed-off-by: Fabian Ebner 
> ---
>  www/manager6/window/SafeDestroyGuest.js | 2 ++
>  1 file changed, 2 insertions(+)
> 
>

applied, thanks!





Re: [pve-devel] [PATCH common 1/1] CalendarEvent: use rust implementation

2022-01-13 Thread Thomas Lamprecht
On 01.12.21 09:55, Dominik Csapak wrote:
> by replacing the parsing code and 'compute_next_event' by their
> PVE::RS::CalendarEvent equivalent
> 
> adapt the tests, since we do not have access to the internal structure
> (and even if we had, it would be different) and the error messages
> are different
> 
> the 'compute_next_event' and parsing tests still pass though
> 
> Signed-off-by: Dominik Csapak 
> ---
>  src/PVE/CalendarEvent.pm| 251 +---
>  test/calendar_event_test.pl |  42 +++---
>  2 files changed, 23 insertions(+), 270 deletions(-)
> 
> diff --git a/src/PVE/CalendarEvent.pm b/src/PVE/CalendarEvent.pm
> index 56e9923..e2bf53a 100644
> --- a/src/PVE/CalendarEvent.pm
> +++ b/src/PVE/CalendarEvent.pm
> @@ -6,6 +6,7 @@ use Data::Dumper;
>  use Time::Local;
>  use PVE::JSONSchema;
>  use PVE::Tools qw(trim);
> +use PVE::RS::CalendarEvent;

this is actually not ideal as pve-common is also used in PMG and for some infra 
stuff, so
pve-rs isn't available there everywhere...

hacked around that for now by just dropping the d/control dependency, as I
depend on the correct pve-rs version in pve-manager directly anyway...







[pve-devel] applied: [PATCH docs] pveproxy: document newly added options

2022-01-13 Thread Thomas Lamprecht
On 17.12.21 13:57, Fabian Grünbichler wrote:
> Signed-off-by: Fabian Grünbichler 
> ---
>  pveproxy.adoc | 30 +-
>  1 file changed, 29 insertions(+), 1 deletion(-)
> 
>

applied this and the manager part, thanks! I did not address any of the nits 
mentioned by
stoiko (thx for the review), so we'd need follow ups for those.




[pve-devel] [PATCH v3 qemu-server 1/3] vmstatus: add hostcpu value

2022-01-13 Thread Alexandre Derumier
---
 PVE/QemuServer.pm | 11 +++
 1 file changed, 11 insertions(+)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 0071a06..65115ba 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -2877,8 +2877,11 @@ sub vmstatus {
 
my $pstat = PVE::ProcFSTools::read_proc_pid_stat($pid);
next if !$pstat; # not running
+   my $cgroups = PVE::QemuServer::CGroup->new($vmid);
+   my $hostcpustat = $cgroups->get_cpu_stat();
 
my $used = $pstat->{utime} + $pstat->{stime};
+   my $hostused = $hostcpustat->{utime} + $hostcpustat->{stime};
 
$d->{uptime} = int(($uptime - $pstat->{starttime})/$cpuinfo->{user_hz});
 
@@ -2892,6 +2895,9 @@ sub vmstatus {
time => $ctime,
used => $used,
cpu => 0,
+   hostused => $hostused,
+   hostcpu => 0,
+
};
next;
}
@@ -2900,15 +2906,20 @@ sub vmstatus {
 
if ($dtime > 1000) {
my $dutime = $used -  $old->{used};
+   my $dhostutime = $hostused -  $old->{hostused};
 
$d->{cpu} = (($dutime/$dtime)* $cpucount) / $d->{cpus};
+   $d->{hostcpu} = (($dhostutime/$dtime)* $cpucount) / $d->{cpus};
$last_proc_pid_stat->{$pid} = {
time => $ctime,
used => $used,
cpu => $d->{cpu},
+   hostused => $hostused,
+   hostcpu => $d->{hostcpu},
};
} else {
$d->{cpu} = $old->{cpu};
+   $d->{hostcpu} = $old->{hostcpu};
}
 }
 
-- 
2.30.2





[pve-devel] [PATCH v3 qemu-server 2/3] vmstatus: add hostmem value

2022-01-13 Thread Alexandre Derumier
---
 PVE/QemuServer.pm | 5 +
 1 file changed, 5 insertions(+)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 65115ba..6d4027b 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -2888,6 +2888,11 @@ sub vmstatus {
if ($pstat->{vsize}) {
$d->{mem} = int(($pstat->{rss}/$pstat->{vsize})*$d->{maxmem});
}
+   if (defined(my $hostmemstat = $cgroups->get_memory_stat())) {
+   $d->{hostmem} = $hostmemstat->{mem};
+   } else {
+   $d->{hostmem} = 0;
+   }
 
my $old = $last_proc_pid_stat->{$pid};
if (!$old) {
-- 
2.30.2





[pve-devel] [PATCH v3 qemu-server 3/3] vmstatus: add pressure stats

2022-01-13 Thread Alexandre Derumier
---
 PVE/QemuServer.pm | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 6d4027b..e4b6765 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -2926,6 +2926,8 @@ sub vmstatus {
$d->{cpu} = $old->{cpu};
$d->{hostcpu} = $old->{hostcpu};
}
+
+   $d->{pressure} = $cgroups->get_pressure_stat();
 }
 
 return $res if !$full;
-- 
2.30.2





[pve-devel] [PATCH v3 qemu-server 0/3] vmstatus: add pressure + hostcpu/hostmem

2022-01-13 Thread Alexandre Derumier
Hi, this is a resend of patches from last year.

VM pressure stats and true host CPU/memory usage are really needed to
implement correct balancing.


This adds new cgroup stat values.

hostcpu/hostmem give the real CPU/memory usage of a VM, including vhost-net.
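The hostcpu value from patch 1/3 boils down to a delta of cgroup CPU time
over a sampling window. A standalone sketch of that computation (the
function name and the example numbers are made up; the formula mirrors the
patch):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Sketch of the hostcpu derivation: cgroup utime+stime includes work done
# on behalf of the VM (e.g. vhost-net), so the delta against the previous
# sample gives the "real" per-vCPU usage fraction.
sub hostcpu_fraction {
    my ($hostused, $old_hostused, $dtime, $cpucount, $vcpus) = @_;
    my $dhostutime = $hostused - $old_hostused;
    return (($dhostutime / $dtime) * $cpucount) / $vcpus;
}

# e.g. 500 ticks of cgroup CPU time over a 1000-tick window on a 4-core
# host, for a VM with 2 vCPUs:
printf "%.2f\n", hostcpu_fraction(1500, 1000, 1000, 4, 2);    # prints "1.00"
```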


Changelog v3:

  - rebase on last master

Changelog v2:
 - rebase on last master
 - use new pressure code from pve-common


Alexandre Derumier (3):
  vmstatus: add hostcpu value
  vmstatus: add hostmem value
  vmstatus: add pressure stats

 PVE/QemuServer.pm | 18 ++
 1 file changed, 18 insertions(+)

-- 
2.30.2

