On 17.01.22 at 16:43, Fabian Grünbichler wrote:
On January 13, 2022 11:08 am, Fabian Ebner wrote:
Extend qm importdisk/importovf functionality to the API.


Used Dominic's latest version[0] as a starting point. GUI part still
needs to be rebased/updated, so it's not included here.


Changes from v9:

* Split patch into smaller parts

* Some minor (style) fixes/improvements (see individual patches)

* Drop $manifest_only parameter for parse_ovf

     Instead, untaint the path when calling file_size_info, which makes
     the call also work via the API, which runs with the -T switch. If
     we do want to keep $manifest_only, I'd argue that it should also
     skip the file existence check and not only the file size check.
     Opinions?

* Re-use do_import function

     Rather than duplicating most of it. The downside is the need to
     add a new parameter for skipping the configuration update. But I
     suppose the plan is to have qm import switch to the new API at
     some point, and then do_import can be simplified.

* Instead of adding an import-sources parameter to the API, use a new
   import-from property for the disk, that's only available with
   import/alloc-enabled API endpoints via its own version of the schema

     Avoids the split across regular drive key parameters and
     'import_sources', which avoids quite a bit of cross-checking
     between the two and parsing/passing around the latter.

     The big downsides are:
     * Schema handling is a bit messy.
     * Need to special-case print_drive, because we do an intermediate
       parse/print to clean up drive paths. At first glance, this does
       not seem easy to avoid without complicating things elsewhere.
     * Using the import-aware parse_drive in parse_volume, because that
       is used via the foreach_volume iterators handling the parameters
       of the import-enabled endpoints. Could be avoided by using for
       loops with the import-aware parse_drive instead of
       foreach_volume.

     Arguments against using a single schema (citing Fabian G.):
     * docs/schema dump/api docs: shouldn't look like you can put that
       everywhere where we use the config schema
     * shouldn't have nasty side-effects if someone puts it into the
       config


After all, the 'import-from' disk property approach turned out to be
uglier than I had hoped it would.

My problem with the 'import-sources' API parameter approach (see [0]
for details) is that it requires specifying both
     scsi0: <target storage>:-1,<properties>
     import-sources: scsi0=/path/or/volid/for/source
leading to a less than ideal user interface, parameter handling that
needs cross-checks to verify that the right combination is specified,
and passing both around at the same time.
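To make the cross-checking burden concrete, here is a minimal sketch (Python, with a hypothetical helper name and simplified drive strings; not code from the series) of the validation the API handler would need:

```python
# Hypothetical sketch: every import-sources key must pair with a drive
# that uses the '<storage>:-1' allocation placeholder, and a drive with
# that placeholder only makes sense together with an import source.
def validate_import_params(drives, import_sources):
    for key in import_sources:
        drive = drives.get(key)
        if drive is None:
            raise ValueError(f"'{key}' listed in import-sources but no such drive")
        # first comma-separated part is the '<storage>:<size>' specification
        storage_part = drive.split(',', 1)[0]
        if not storage_part.endswith(':-1'):
            raise ValueError(f"drive '{key}' must use '<storage>:-1' for import")

drives = {'scsi0': 'local-lvm:-1,discard=on'}
validate_import_params(drives, {'scsi0': '/path/or/volid/for/source'})  # passes
```

Both parameters have to travel together through every code path that touches them, which is exactly the duplication the 'import-from' property tries to avoid.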

Another approach would be adding a new special volid syntax using
     my $IMPORT_DISK_RE = qr!^(([^/:\s]+):)import:(.*)$!;
allowing for e.g.
     qm set 126 -scsi1 rbdkvm:import:myzpool:vm-114-disk-0,aio=native
     qm set 126 -scsi2 rbdkvm:import:/dev/zvol/myzpool/vm-114-disk-1,backup=0
Yes, it's a hack, but it would avoid the pain points from both other
approaches and be very simple. See the end of the mail for a POC.
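For illustration, the matching behavior of that regex can be transcribed to Python (the Perl original above is authoritative; this is just a sketch of how the pieces split):

```python
import re

# Same pattern as the Perl $IMPORT_DISK_RE: mandatory storage ID,
# literal 'import:' marker, then an arbitrary source string.
IMPORT_DISK_RE = re.compile(r'^(([^/:\s]+):)import:(.*)$')

m = IMPORT_DISK_RE.match('rbdkvm:import:myzpool:vm-114-disk-0')
storeid, source = m.group(2), m.group(3)
# storeid is the target storage ('rbdkvm'); the source part
# ('myzpool:vm-114-disk-0') can itself be a regular volid.

m = IMPORT_DISK_RE.match('rbdkvm:import:/dev/zvol/myzpool/vm-114-disk-1')
# the source part may equally be an absolute path
```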

the biggest down-side is that this 'invades' the storage plugin-owned
volume ID namespace (further than the pre-existing, documented new disk
syntax) - a volume ID is defined as anything starting with a storage
identifier, followed by ':', followed by an arbitrary string (where the
semantics of that string are up to the plugin).

while I don't think there is a storage plugin out there that somehow
starts its volume IDs with 'import:' by default, it still doesn't feel
very nice.. there might be plugins that are not very strict in their
parsing that might accept such an 'import volid' as regular volid, and
parse the latter part (which can be a regular volid after all!) instead
of the full thing, which would make code paths that should not accept
import volids accept them and just use the source without importing,
with potentially disastrous consequences :-/
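The hazard can be illustrated with a deliberately lax, hypothetical plugin-side parser (Python sketch, not real plugin code):

```python
# A lax parser that treats everything after the first ':' as the
# plugin-owned volume name cannot tell an 'import volid' apart from
# a regular one.
def lax_parse_volid(volid):
    storeid, _, volname = volid.partition(':')
    return storeid, volname

storeid, volname = lax_parse_volid('rbdkvm:import:myzpool:vm-114-disk-0')
# storeid -> 'rbdkvm', volname -> 'import:myzpool:vm-114-disk-0'
# A plugin that tolerates unknown prefixes in volname could hand back
# the embedded source volume directly, silently skipping the import.
```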


Ok, good point!

that being said - other than this bigger conceptual question I only
found nits/fixup/followup material - so if we want to take the current
RFC I'd give this a more in-depth test spin in the next few days and
apply with the small stuff addressed ;)


Fine by me. I could also re-spin with the import-sources API parameter
if that's preferred. If we go with the import-from approach, besides
addressing your nits, I'd also not make parse_volume unconditionally
parse with the extended schema, but replace the foreach_volume
iterators in the appropriate places.



[0]: https://lists.proxmox.com/pipermail/pve-devel/2021-June/048564.html


pve-manager:

Fabian Ebner (1):
   api: nodes: add readovf endpoint

  PVE/API2/Nodes.pm | 7 +++++++
  1 file changed, 7 insertions(+)


qemu-server:

Dominic Jäger (1):
   api: support VM disk import

Fabian Ebner (6):
   schema: add pve-volume-id-or-absolute-path
   parse ovf: untaint path when calling file_size_info
   api: add endpoint for parsing .ovf files
   image convert: allow block device as source
   schema: drive: use separate schema when disk allocation is possible
   api: create disks: factor out common part from if/else

  PVE/API2/Qemu.pm             | 86 +++++++++++++++++++++++----------
  PVE/API2/Qemu/Makefile       |  2 +-
  PVE/API2/Qemu/OVF.pm         | 55 +++++++++++++++++++++
  PVE/QemuConfig.pm            |  2 +-
  PVE/QemuServer.pm            | 57 +++++++++++++++++++---
  PVE/QemuServer/Drive.pm      | 92 +++++++++++++++++++++++++++---------
  PVE/QemuServer/ImportDisk.pm |  2 +-
  PVE/QemuServer/OVF.pm        |  9 ++--
  8 files changed, 242 insertions(+), 63 deletions(-)
  create mode 100644 PVE/API2/Qemu/OVF.pm

--
2.30.2

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 6992f6f..6a22899 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -21,8 +21,9 @@ use PVE::ReplicationConfig;
  use PVE::GuestHelpers;
  use PVE::QemuConfig;
  use PVE::QemuServer;
-use PVE::QemuServer::Drive;
  use PVE::QemuServer::CPUConfig;
+use PVE::QemuServer::Drive;
+use PVE::QemuServer::ImportDisk;
  use PVE::QemuServer::Monitor qw(mon_cmd);
  use PVE::QemuServer::Machine;
  use PVE::QemuMigrate;
@@ -64,6 +65,7 @@ my $resolve_cdrom_alias = sub {
  };
my $NEW_DISK_RE = qr!^(([^/:\s]+):)?(\d+(\.\d+)?)$!;
+my $IMPORT_DISK_RE = qr!^(([^/:\s]+):)import:(.*)$!;
  my $check_storage_access = sub {
     my ($rpcenv, $authuser, $storecfg, $vmid, $settings, $default_storage) = @_;
@@ -86,6 +88,9 @@ my $check_storage_access = sub {
            my $scfg = PVE::Storage::storage_config($storecfg, $storeid);
            raise_param_exc({ storage => "storage '$storeid' does not support vm images"})
                if !$scfg->{content}->{images};
+       } elsif (!$isCDROM && ($volid =~ $IMPORT_DISK_RE)) {
+           warn "check access matched: $2 and $3\n";
+           warn "TODO implement import access check!\n";
        } else {
            PVE::Storage::check_volume_access($rpcenv, $authuser, $storecfg, $vmid, $volid);
        }
@@ -212,6 +217,29 @@ my $create_disks = sub {
            $disk->{size} = PVE::Tools::convert_size($size, 'kb' => 'b');
            delete $disk->{format}; # no longer needed
            $res->{$ds} = PVE::QemuServer::print_drive($disk);
+       } elsif ($volid =~ $IMPORT_DISK_RE) {
+           my ($storeid, $source) = ($2, $3);
+
+           $source = PVE::Storage::abs_filesystem_path($storecfg, $source, 1);
+           my $src_size = PVE::Storage::file_size_info($source);
+           die "Could not get file size of $source\n" if !defined($src_size);
+
+           my (undef, $dst_volid) = PVE::QemuServer::ImportDisk::do_import(
+               $source,
+               $vmid,
+               $storeid,
+               {
+                   drive_name => $ds,
+                   format => $disk->{format},
+                   'skip-config-update' => 1,
+               },
+           );
+
+           push @$vollist, $dst_volid;
+           $disk->{file} = $dst_volid;
+           $disk->{size} = $src_size;
+           delete $disk->{format}; # no longer needed
+           $res->{$ds} = PVE::QemuServer::print_drive($disk);
        } else {
PVE::Storage::check_volume_access($rpcenv, $authuser, $storecfg, $vmid, $volid);
diff --git a/PVE/QemuServer/ImportDisk.pm b/PVE/QemuServer/ImportDisk.pm
index 51ad52e..7557cac 100755
--- a/PVE/QemuServer/ImportDisk.pm
+++ b/PVE/QemuServer/ImportDisk.pm
@@ -71,7 +71,7 @@ sub do_import {
        PVE::Storage::activate_volumes($storecfg, [$dst_volid]);
        PVE::QemuServer::qemu_img_convert($src_path, $dst_volid, $src_size, undef, $zeroinit);
        PVE::Storage::deactivate_volumes($storecfg, [$dst_volid]);
-       PVE::QemuConfig->lock_config($vmid, $create_drive);
+       PVE::QemuConfig->lock_config($vmid, $create_drive) if !$params->{'skip-config-update'};
      };
      if (my $err = $@) {
        eval { PVE::Storage::vdisk_free($storecfg, $dst_volid) };
--
2.30.2


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


