[pve-devel] [PATCH pve-storage] fix #1611: implement import of base-images for LVM-thin Storage

2023-10-16 Thread Hannes Duerr
if a base image is to be migrated to an LVM-thin storage, a new
vm-image is allocated on the target side, the data is then written,
and afterwards the image is converted into a base image


Signed-off-by: Hannes Duerr 
---

In the bug tracker, Wolfgang suggested two different approaches. In my
opinion this approach is the cleaner one, but please let me know what
you think.

 src/PVE/Storage/LvmThinPlugin.pm | 65 
 1 file changed, 65 insertions(+)

diff --git a/src/PVE/Storage/LvmThinPlugin.pm b/src/PVE/Storage/LvmThinPlugin.pm
index 1d2e37c..4579d47 100644
--- a/src/PVE/Storage/LvmThinPlugin.pm
+++ b/src/PVE/Storage/LvmThinPlugin.pm
@@ -383,6 +383,71 @@ sub volume_has_feature {
 return undef;
 }
 
+sub volume_import {
+my ($class, $scfg, $storeid, $fh, $volname, $format, $snapshot, $base_snapshot, $with_snapshots, $allow_rename) = @_;
+die "volume import format $format not available for $class\n"
+   if $format ne 'raw+size';
+die "cannot import volumes together with their snapshots in $class\n"
+   if $with_snapshots;
+die "cannot import an incremental stream in $class\n" if 
defined($base_snapshot);
+
+my ($vtype, $name, $vmid, $basename, $basevmid, $isBase, $file_format) =
+   $class->parse_volname($volname);
+die "cannot import format $format into a file of format $file_format\n"
+   if $file_format ne 'raw';
+
+my $vg = $scfg->{vgname};
+my $lvs = PVE::Storage::LVMPlugin::lvm_list_volumes($vg);
+if ($lvs->{$vg}->{$volname}) {
+   die "volume $vg/$volname already exists\n" if !$allow_rename;
+   warn "volume $vg/$volname already exists - importing with a different 
name\n";
+   $name = undef;
+}
+
+my ($size) = PVE::Storage::Plugin::read_common_header($fh);
+$size = int($size/1024);
+
+# Request new vm-name which is needed for the import
+if ($isBase) {
+   my $newvmname = $class->find_free_diskname($storeid, $scfg, $vmid);
+   $name = $newvmname;
+   $volname = $newvmname;
+}
+
+eval {
+   my $allocname = $class->alloc_image($storeid, $scfg, $vmid, 'raw', $name, $size);
+   my $oldname = $volname;
+   $volname = $allocname;
+   if (defined($name) && $allocname ne $oldname) {
+   die "internal error: unexpected allocated name: '$allocname' != 
'$oldname'\n";
+   }
+   my $file = $class->path($scfg, $volname, $storeid)
+   or die "internal error: failed to get path to newly allocated 
volume $volname\n";
+
+   $class->volume_import_write($fh, $file);
+};
+if (my $err = $@) {
+   my $cleanup_worker = eval { $class->free_image($storeid, $scfg, $volname, 0) };
+   warn $@ if $@;
+
+   if ($cleanup_worker) {
+   my $rpcenv = PVE::RPCEnvironment::get();
+   my $authuser = $rpcenv->get_user();
+
+   $rpcenv->fork_worker('imgdel', undef, $authuser, $cleanup_worker);
+   }
+
+   die $err;
+}
+
+if ($isBase) {
+   my $newbasename = $class->create_base($storeid, $scfg, $volname);
+   $volname=$newbasename;
+}
+
+return "$storeid:$volname";
+}
+
 # used in LVMPlugin->volume_import
 sub volume_import_write {
 my ($class, $input_fh, $output_file) = @_;
-- 
2.39.2
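
For readers skimming the archive: the flow described in the commit message
boils down to roughly the following standalone sketch. The sub name and
argument list are invented for illustration; the authoritative code is the
volume_import hunk above.

    # Simplified sketch of importing a base image onto LVM-thin.
    sub import_base_image_sketch {
        my ($class, $storeid, $scfg, $fh, $vmid, $size_kib) = @_;

        # 1) allocate a regular vm- volume on the target LVM-thin storage
        my $name = $class->find_free_diskname($storeid, $scfg, $vmid);
        my $volname = $class->alloc_image($storeid, $scfg, $vmid, 'raw', $name, $size_kib);

        # 2) stream the raw data into the freshly allocated logical volume
        eval {
            my $path = $class->path($scfg, $volname, $storeid);
            $class->volume_import_write($fh, $path);
        };
        if (my $err = $@) {
            # roll back the allocation if writing failed
            eval { $class->free_image($storeid, $scfg, $volname, 0) };
            warn $@ if $@;
            die $err;
        }

        # 3) convert the written volume into a base image (base-<vmid>-disk-N)
        return $class->create_base($storeid, $scfg, $volname);
    }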



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH qemu-server] fix #4957: add vendor and product information passthrough for SCSI-Disks

2023-10-25 Thread Hannes Duerr
Signed-off-by: Hannes Duerr 
---
 PVE/QemuServer.pm   | 12 
 PVE/QemuServer/Drive.pm | 26 ++
 2 files changed, 38 insertions(+)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 2cd8948..69be3af 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -1482,6 +1482,18 @@ sub print_drivedevice_full {
}
$device .= ",wwn=$drive->{wwn}" if $drive->{wwn};
 
+   # only scsi-hd supports passing vendor and product information
+   if ($devicetype eq 'hd') {
+   if (my $vendor = $drive->{vendor}) {
+   $vendor = URI::Escape::uri_unescape($vendor);
+   $device .= ",vendor=$vendor";
+   }
+   if (my $product = $drive->{product}) {
+   $product = URI::Escape::uri_unescape($product);
+   $device .= ",product=$product";
+   }
+   }
+
 } elsif ($drive->{interface} eq 'ide' || $drive->{interface} eq 'sata') {
my $maxdev = ($drive->{interface} eq 'sata') ? 
$PVE::QemuServer::Drive::MAX_SATA_DISKS : 2;
my $controller = int($drive->{index} / $maxdev);
diff --git a/PVE/QemuServer/Drive.pm b/PVE/QemuServer/Drive.pm
index e24ba12..20efc2f 100644
--- a/PVE/QemuServer/Drive.pm
+++ b/PVE/QemuServer/Drive.pm
@@ -159,6 +159,28 @@ my %iothread_fmt = ( iothread => {
optional => 1,
 });
 
+my %product_fmt = (
+product => {
+   type => 'string',
+   format => 'urlencoded',
+   format_description => 'product',
+   maxLength => 40*3, # *3 since it's %xx url encoded
+   description => "The drive's product name, url-encoded, up to 40 bytes long.",
+   optional => 1,
+},
+);
+
+my %vendor_fmt = (
+vendor => {
+   type => 'string',
+   format => 'urlencoded',
+   format_description => 'vendor',
+   maxLength => 40*3, # *3 since it's %xx url encoded
+   description => "The drive's vendor name, url-encoded, up to 40 bytes long.",
+   optional => 1,
+},
+);
+
 my %model_fmt = (
 model => {
type => 'string',
@@ -281,6 +303,8 @@ my $scsi_fmt = {
 %scsiblock_fmt,
 %ssd_fmt,
 %wwn_fmt,
+%vendor_fmt,
+%product_fmt,
 };
 my $scsidesc = {
 optional => 1,
@@ -404,6 +428,8 @@ my $alldrive_fmt = {
 %readonly_fmt,
 %scsiblock_fmt,
 %ssd_fmt,
+%vendor_fmt,
+%product_fmt,
 %wwn_fmt,
 %tpmversion_fmt,
 %efitype_fmt,
-- 
2.39.2
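
To see what the new hunk in print_drivedevice_full() ends up producing, here
is a small self-contained sketch of the string building with invented values
(the base device string is illustrative, not copied from a real VM):

    my $drive  = { vendor => 'ACME', product => 'FastDisk' };  # values already url-decoded
    my $device = 'scsi-hd,bus=scsihw0.0,scsi-id=0';            # illustrative base string

    $device .= ",vendor=$drive->{vendor}"   if $drive->{vendor};
    $device .= ",product=$drive->{product}" if $drive->{product};

    print "$device\n";
    # prints: scsi-hd,bus=scsihw0.0,scsi-id=0,vendor=ACME,product=FastDisk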



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH v2 qemu-server] fix #4957: add vendor and product information passthrough for SCSI-Disks

2023-11-08 Thread Hannes Duerr
adds vendor and product information for SCSI devices to the JSON schema and
checks in the VM create/update API call whether it is possible to pass these
to QEMU as a device option

Signed-off-by: Hannes Duerr 
---

changes in v2:
- when calling the API to create/update a VM, check whether the devices
are "scsi-hd" or "scsi-cd" devices, where adding vendor and product
information is possible; if not, error out
- change the format in product_fmt and vendor_fmt to a pattern that only
allows up to 40 characters consisting of upper- and lower-case letters,
numbers, '-' and '_' (see the sketch below)
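
As a rough illustration of the second point, the constraint behaves like the
following anchored pattern (the anchoring here is an assumption of the
sketch; the schema pattern itself appears in the later Drive.pm hunks):

    my $ok = qr/^[A-Za-z0-9\-_]{0,40}$/;

    for my $value ('WDC_WD40EFRX-68N', 'has spaces and !', 'x' x 41) {
        printf "%-20.20s => %s\n", $value, ($value =~ $ok ? 'accepted' : 'rejected');
    }
    # WDC_WD40EFRX-68N     => accepted
    # has spaces and !     => rejected
    # xxxxxxxxxxxxxxxxxxxx => rejected (41 characters, one too many)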

 PVE/API2/Qemu.pm|  9 +
 PVE/QemuServer.pm   | 83 +
 PVE/QemuServer/Drive.pm | 24 
 3 files changed, 92 insertions(+), 24 deletions(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 38bdaab..6898ec9 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -1030,6 +1030,11 @@ __PACKAGE__->register_method({
);
$conf->{$_} = $created_opts->{$_} for keys 
$created_opts->%*;
 
+   foreach my $opt (keys $created_opts->%*) {
+   if ($opt =~ m/scsi/) {
+   PVE::QemuServer::check_scsi_feature_compatibility($opt, $created_opts, $conf, $storecfg, $param);
+   }
+   }
if (!$conf->{boot}) {
my $devs = 
PVE::QemuServer::get_default_bootdevices($conf);
$conf->{boot} = PVE::QemuServer::print_bootorder($devs);
@@ -1840,6 +1845,10 @@ my $update_vm_api  = sub {
);
$conf->{pending}->{$_} = $created_opts->{$_} for keys 
$created_opts->%*;
 
+   if ($opt =~ m/scsi/) {
+   PVE::QemuServer::check_scsi_feature_compatibility($opt, $created_opts, $conf, $storecfg, $param);
+   }
+
# default legacy boot order implies all cdroms anyway
if (@bootorder) {
# append new CD drives to bootorder to mark them 
bootable
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index dbcd568..919728b 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -26,6 +26,7 @@ use Storable qw(dclone);
 use Time::HiRes qw(gettimeofday usleep);
 use URI::Escape;
 use UUID;
+use Data::Dumper;
 
 use PVE::Cluster qw(cfs_register_file cfs_read_file cfs_write_file);
 use PVE::CGroup;
@@ -1428,6 +1429,53 @@ my sub get_drive_id {
 return "$drive->{interface}$drive->{index}";
 }
 
+sub get_scsi_devicetype {
+my ($drive, $storecfg, $machine_type) = @_;
+
+my $devicetype = 'hd';
+my $path = '';
+if (drive_is_cdrom($drive)) {
+   $devicetype = 'cd';
+} else {
+   if ($drive->{file} =~ m|^/|) {
+   $path = $drive->{file};
+   if (my $info = path_is_scsi($path)) {
+   if ($info->{type} == 0 && $drive->{scsiblock}) {
+   $devicetype = 'block';
+   } elsif ($info->{type} == 1) { # tape
+   $devicetype = 'generic';
+   }
+   }
+   } else {
+   $path = PVE::Storage::path($storecfg, $drive->{file});
+   }
+
+   # for compatibility only, we prefer scsi-hd (#2408, #2355, #2380)
+   my $version = kvm_user_version();
+   $version = extract_version($machine_type, $version);
+   if ($path =~ m/^iscsi\:\/\// &&
+  !min_version($version, 4, 1)) {
+   $devicetype = 'generic';
+   }
+}
+
+return $devicetype;
+}
+
+sub check_scsi_feature_compatibility {
+my($opt, $created_opts, $conf, $storecfg, $param) = @_;
+
+my $drive = parse_drive($opt, $created_opts->{$opt});
+my $machine_type = get_vm_machine($conf, undef, $conf->{arch});
+my $drivetype = get_scsi_devicetype($drive, $storecfg, $machine_type);
+
+if ($drivetype ne 'hd' && $drivetype ne 'cd') {
+   if ($param->{$opt} =~ m/vendor/ || $param->{$opt} =~ m/product/) {
+   die "only 'scsi-hd' and 'scsi-cd' devices support passing vendor 
and product information\n";
+   }
+}
+}
+
 sub print_drivedevice_full {
 my ($storecfg, $conf, $vmid, $drive, $bridges, $arch, $machine_type) = @_;
 
@@ -1443,31 +1491,8 @@ sub print_drivedevice_full {
 
my ($maxdev, $controller, $controller_prefix) = scsihw_infos($conf, 
$drive);
my $unit = $drive->{index} % $maxdev;
-   my $devicetype = 'hd';
-   my $path = '';
-   if (drive_is_cdrom($drive)) {
-   $devicetype = 'cd';
-   } else {
-   if ($drive->{file} =~ m|^/|) {
-   $path = $

[pve-devel] [PATCH v3 qemu-server 1/2] Create get_scsi_devicetype and move it and its dependencies to QemuServer/Drive.pm

2023-11-10 Thread Hannes Duerr
Encapsulate the functionality for determining the SCSI device type in a new
function for reusability, and move the function and its dependencies to
QemuServer/Drive.pm for a better overview

Signed-off-by: Hannes Duerr 
---
 PVE/QemuServer.pm   | 87 ++---
 PVE/QemuServer/Drive.pm | 95 +
 2 files changed, 98 insertions(+), 84 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index dbcd568..9a83021 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -1339,66 +1339,6 @@ sub pve_verify_hotplug_features {
 die "unable to parse hotplug option\n";
 }
 
-sub scsi_inquiry {
-my($fh, $noerr) = @_;
-
-my $SG_IO = 0x2285;
-my $SG_GET_VERSION_NUM = 0x2282;
-
-my $versionbuf = "\x00" x 8;
-my $ret = ioctl($fh, $SG_GET_VERSION_NUM, $versionbuf);
-if (!$ret) {
-   die "scsi ioctl SG_GET_VERSION_NUM failoed - $!\n" if !$noerr;
-   return;
-}
-my $version = unpack("I", $versionbuf);
-if ($version < 3) {
-   die "scsi generic interface too old\n"  if !$noerr;
-   return;
-}
-
-my $buf = "\x00" x 36;
-my $sensebuf = "\x00" x 8;
-my $cmd = pack("C x3 C x1", 0x12, 36);
-
-# see /usr/include/scsi/sg.h
-my $sg_io_hdr_t = "i i C C s I P P P I I i P C C C C S S i I I";
-
-my $packet = pack(
-   $sg_io_hdr_t, ord('S'), -3, length($cmd), length($sensebuf), 0, 
length($buf), $buf, $cmd, $sensebuf, 6000
-);
-
-$ret = ioctl($fh, $SG_IO, $packet);
-if (!$ret) {
-   die "scsi ioctl SG_IO failed - $!\n" if !$noerr;
-   return;
-}
-
-my @res = unpack($sg_io_hdr_t, $packet);
-if ($res[17] || $res[18]) {
-   die "scsi ioctl SG_IO status error - $!\n" if !$noerr;
-   return;
-}
-
-my $res = {};
-$res->@{qw(type removable vendor product revision)} = unpack("C C x6 A8 
A16 A4", $buf);
-
-$res->{removable} = $res->{removable} & 128 ? 1 : 0;
-$res->{type} &= 0x1F;
-
-return $res;
-}
-
-sub path_is_scsi {
-my ($path) = @_;
-
-my $fh = IO::File->new("+<$path") || return;
-my $res = scsi_inquiry($fh, 1);
-close($fh);
-
-return $res;
-}
-
 sub print_tabletdevice_full {
 my ($conf, $arch) = @_;
 
@@ -1443,31 +1383,10 @@ sub print_drivedevice_full {
 
my ($maxdev, $controller, $controller_prefix) = scsihw_infos($conf, 
$drive);
my $unit = $drive->{index} % $maxdev;
-   my $devicetype = 'hd';
-   my $path = '';
-   if (drive_is_cdrom($drive)) {
-   $devicetype = 'cd';
-   } else {
-   if ($drive->{file} =~ m|^/|) {
-   $path = $drive->{file};
-   if (my $info = path_is_scsi($path)) {
-   if ($info->{type} == 0 && $drive->{scsiblock}) {
-   $devicetype = 'block';
-   } elsif ($info->{type} == 1) { # tape
-   $devicetype = 'generic';
-   }
-   }
-   } else {
-$path = PVE::Storage::path($storecfg, $drive->{file});
-   }
 
-   # for compatibility only, we prefer scsi-hd (#2408, #2355, #2380)
-   my $version = extract_version($machine_type, kvm_user_version());
-   if ($path =~ m/^iscsi\:\/\// &&
-  !min_version($version, 4, 1)) {
-   $devicetype = 'generic';
-   }
-   }
+   my $machine_version = extract_version($machine_type, kvm_user_version());
+   my $devicetype  = PVE::QemuServer::Drive::get_scsi_devicetype(
+   $drive, $storecfg, $machine_version);
 
if (!$conf->{scsihw} || $conf->{scsihw} =~ m/^lsi/ || $conf->{scsihw} 
eq 'pvscsi') {
$device = 
"scsi-$devicetype,bus=$controller_prefix$controller.0,scsi-id=$unit";
diff --git a/PVE/QemuServer/Drive.pm b/PVE/QemuServer/Drive.pm
index e24ba12..7056daa 100644
--- a/PVE/QemuServer/Drive.pm
+++ b/PVE/QemuServer/Drive.pm
@@ -15,6 +15,7 @@ is_valid_drivename
 drive_is_cloudinit
 drive_is_cdrom
 drive_is_read_only
+get_scsi_devicetype
 parse_drive
 print_drive
 );
@@ -760,4 +761,98 @@ sub resolve_first_disk {
 return;
 }
 
+sub scsi_inquiry {
+my($fh, $noerr) = @_;
+
+my $SG_IO = 0x2285;
+my $SG_GET_VERSION_NUM = 0x2282;
+
+my $versionbuf = "\x00" x 8;
+my $ret = ioctl($fh, $SG_GET_VERSION_NUM, $versionbuf);
+if (!$ret) {
+   die "scsi ioctl SG_GET_VERSION_NUM failoed - $!\n" if !$noerr;
+   return;
+}
+my $version = unpack("I", $versionbuf);
+if ($version < 3) {
+   die "scsi generic interface too old\n"  if !$noerr;
+   return;
+}
+
+m

[pve-devel] [PATCH v3 qemu-server 0/2] fix #4957: add vendor and product information passthrough for SCSI-Disks

2023-11-10 Thread Hannes Duerr
changes in v2:
- when calling the API to create/update a VM, check whether the devices
are "scsi-hd" or "scsi-cd" devices, where adding vendor and product
information is possible; if not, error out
- change the format in product_fmt and vendor_fmt to a pattern that only
allows up to 40 characters consisting of upper- and lower-case letters,
numbers, '-' and '_'

changes in v3:
- split up into a preparation patch and the actual fix
- move get_scsi_devicetype into QemuServer/Drive.pm
- refactor check_scsi_feature_compatibility into assert_scsi_feature_compatibility
- call assert_scsi_feature_compatibility before creating the device
- handle the 'local-lvm:' syntax in get_scsi_devicetype
- fix style issues

Hannes Duerr (2):
  Create get_scsi_devicetype and move it and its dependencies to
QemuServer/Drive.pm
  fix #4957: add vendor and product information passthrough for
SCSI-Disks

 PVE/API2/Qemu.pm|  12 
 PVE/QemuServer.pm   | 115 +++---
 PVE/QemuServer/Drive.pm | 119 
 3 files changed, 162 insertions(+), 84 deletions(-)

-- 
2.39.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH v3 qemu-server 2/2] fix #4957: add vendor and product information passthrough for SCSI-Disks

2023-11-10 Thread Hannes Duerr
adds vendor and product information for SCSI devices to the JSON schema and
checks in the VM create/update API call whether it is possible to pass these
to QEMU as a device option

Signed-off-by: Hannes Duerr 
---
 PVE/API2/Qemu.pm| 12 
 PVE/QemuServer.pm   | 28 
 PVE/QemuServer/Drive.pm | 24 
 3 files changed, 64 insertions(+)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 38bdaab..9d8171a 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -1013,6 +1013,13 @@ __PACKAGE__->register_method({
my $conf = $param;
my $arch = PVE::QemuServer::get_vm_arch($conf);
 
+   for my $opt (sort keys $param->%*) {
+   if ($opt =~ m/scsi/) {
+   PVE::QemuServer::assert_scsi_feature_compatibility(
+   $opt, $conf, $storecfg, $param->{$opt});
+   }
+   }
+
$conf->{meta} = PVE::QemuServer::new_meta_info_string();
 
my $vollist = [];
@@ -1828,6 +1835,11 @@ my $update_vm_api  = sub {
PVE::QemuServer::vmconfig_register_unused_drive($storecfg, 
$vmid, $conf, PVE::QemuServer::parse_drive($opt, $conf->{pending}->{$opt}))
if defined($conf->{pending}->{$opt});
 
+   if ($opt =~ m/scsi/) {
+   PVE::QemuServer::assert_scsi_feature_compatibility(
+   $opt, $conf, $storecfg, $param->{$opt});
+   }
+
my (undef, $created_opts) = $create_disks->(
$rpcenv,
$authuser,
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 9a83021..9c998d6 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -1368,6 +1368,24 @@ my sub get_drive_id {
 return "$drive->{interface}$drive->{index}";
 }
 
+sub assert_scsi_feature_compatibility {
+my ($opt, $conf, $storecfg, $drive_attributes) = @_;
+
+my $drive = parse_drive($opt, $drive_attributes);
+
+my $machine_type = get_vm_machine($conf, undef, $conf->{arch});
+my $machine_version = extract_version($machine_type, kvm_user_version());
+my $drivetype = PVE::QemuServer::Drive::get_scsi_devicetype(
+   $drive, $storecfg, $machine_version);
+
+if ($drivetype ne 'hd' && $drivetype ne 'cd') {
+   if ($drive_attributes =~ m/vendor/ || $drive_attributes =~ m/product/) {
+   die "only 'scsi-hd' and 'scsi-cd' devices".
+   "support passing vendor and product information\n";
+   }
+}
+}
+
 sub print_drivedevice_full {
 my ($storecfg, $conf, $vmid, $drive, $bridges, $arch, $machine_type) = @_;
 
@@ -1401,6 +1419,16 @@ sub print_drivedevice_full {
}
$device .= ",wwn=$drive->{wwn}" if $drive->{wwn};
 
+   # only scsi-hd and scsi-cd support passing vendor and product information
+   if ($devicetype eq 'hd' || $devicetype eq 'cd') {
+   if (my $vendor = $drive->{vendor}) {
+   $device .= ",vendor=$vendor";
+   }
+   if (my $product = $drive->{product}) {
+   $device .= ",product=$product";
+   }
+   }
+
 } elsif ($drive->{interface} eq 'ide' || $drive->{interface} eq 'sata') {
my $maxdev = ($drive->{interface} eq 'sata') ? 
$PVE::QemuServer::Drive::MAX_SATA_DISKS : 2;
my $controller = int($drive->{index} / $maxdev);
diff --git a/PVE/QemuServer/Drive.pm b/PVE/QemuServer/Drive.pm
index 7056daa..66a4816 100644
--- a/PVE/QemuServer/Drive.pm
+++ b/PVE/QemuServer/Drive.pm
@@ -160,6 +160,26 @@ my %iothread_fmt = ( iothread => {
optional => 1,
 });
 
+my %product_fmt = (
+product => {
+   type => 'string',
+   pattern => '[A-Za-z0-9\-_]{,40}',
+   format_description => 'product',
+   description => "The drive's product name, up to 40 bytes long.",
+   optional => 1,
+},
+);
+
+my %vendor_fmt = (
+vendor => {
+   type => 'string',
+   pattern => '[A-Za-z0-9\-_]{,40}',
+   format_description => 'vendor',
+   description => "The drive's vendor name, up to 40 bytes long.",
+   optional => 1,
+},
+);
+
 my %model_fmt = (
 model => {
type => 'string',
@@ -277,10 +297,12 @@ PVE::JSONSchema::register_standard_option("pve-qm-ide", 
$idedesc);
 my $scsi_fmt = {
 %drivedesc_base,
 %iothread_fmt,
+%product_fmt,
 %queues_fmt,
 %readonly_fmt,
 %scsiblock_fmt,
 %ssd_fmt,
+%vendor_fmt,
 %wwn_fmt,
 };
 my $scsidesc = {
@@ -401,10 +423,12 @@ my $alldrive_fmt = {
 

[pve-devel] [PATCH v4 qemu-server 3/4] drive: Create get_scsi_devicetype

2023-11-17 Thread Hannes Duerr
Encapsulation of the functionality for determining the scsi device type in a 
new function
for reusability in QemuServer/Drive.pm

Signed-off-by: Hannes Duerr 
---
 PVE/QemuServer.pm   | 29 -
 PVE/QemuServer/Drive.pm | 35 ++-
 2 files changed, 38 insertions(+), 26 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 294702d..6090f91 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -53,7 +53,7 @@ use PVE::QemuServer::Helpers qw(config_aware_timeout 
min_version windows_version
 use PVE::QemuServer::Cloudinit;
 use PVE::QemuServer::CGroup;
 use PVE::QemuServer::CPUConfig qw(print_cpu_device get_cpu_options);
-use PVE::QemuServer::Drive qw(is_valid_drivename drive_is_cloudinit 
drive_is_cdrom drive_is_read_only parse_drive print_drive path_is_scsi);
+use PVE::QemuServer::Drive qw(is_valid_drivename drive_is_cloudinit 
drive_is_cdrom drive_is_read_only parse_drive print_drive);
 use PVE::QemuServer::Machine;
 use PVE::QemuServer::Memory qw(get_current_memory);
 use PVE::QemuServer::Monitor qw(mon_cmd);
@@ -1386,31 +1386,10 @@ sub print_drivedevice_full {
 
my ($maxdev, $controller, $controller_prefix) = scsihw_infos($conf, 
$drive);
my $unit = $drive->{index} % $maxdev;
-   my $devicetype = 'hd';
-   my $path = '';
-   if (drive_is_cdrom($drive)) {
-   $devicetype = 'cd';
-   } else {
-   if ($drive->{file} =~ m|^/|) {
-   $path = $drive->{file};
-   if (my $info = path_is_scsi($path)) {
-   if ($info->{type} == 0 && $drive->{scsiblock}) {
-   $devicetype = 'block';
-   } elsif ($info->{type} == 1) { # tape
-   $devicetype = 'generic';
-   }
-   }
-   } else {
-$path = PVE::Storage::path($storecfg, $drive->{file});
-   }
 
-   # for compatibility only, we prefer scsi-hd (#2408, #2355, #2380)
-   my $version = extract_version($machine_type, kvm_user_version());
-   if ($path =~ m/^iscsi\:\/\// &&
-  !min_version($version, 4, 1)) {
-   $devicetype = 'generic';
-   }
-   }
+   my $machine_version = extract_version($machine_type, kvm_user_version());
+   my $devicetype  = PVE::QemuServer::Drive::get_scsi_devicetype(
+   $drive, $storecfg, $machine_version);
 
if (!$conf->{scsihw} || $conf->{scsihw} =~ m/^lsi/ || $conf->{scsihw} 
eq 'pvscsi') {
$device = 
"scsi-$devicetype,bus=$controller_prefix$controller.0,scsi-id=$unit";
diff --git a/PVE/QemuServer/Drive.pm b/PVE/QemuServer/Drive.pm
index 6d94a2f..de62d43 100644
--- a/PVE/QemuServer/Drive.pm
+++ b/PVE/QemuServer/Drive.pm
@@ -15,9 +15,9 @@ is_valid_drivename
 drive_is_cloudinit
 drive_is_cdrom
 drive_is_read_only
+get_scsi_devicetype
 parse_drive
 print_drive
-path_is_scsi
 );
 
 our $QEMU_FORMAT_RE = qr/raw|cow|qcow|qcow2|qed|vmdk|cloop/;
@@ -822,4 +822,37 @@ sub path_is_scsi {
 return $res;
 }
 
+sub get_scsi_devicetype {
+my ($drive, $storecfg, $machine_version) = @_;
+
+my $devicetype = 'hd';
+my $path = '';
+if (drive_is_cdrom($drive)) {
+   $devicetype = 'cd';
+} else {
+   if ($drive->{file} =~ m|^/|) {
+   $path = $drive->{file};
+   if (my $info = path_is_scsi($path)) {
+   if ($info->{type} == 0 && $drive->{scsiblock}) {
+   $devicetype = 'block';
+   } elsif ($info->{type} == 1) { # tape
+   $devicetype = 'generic';
+   }
+   }
+   } elsif ($drive->{file} =~ $NEW_DISK_RE){
+   # special syntax cannot be parsed to path
+   return $devicetype;
+   } else {
+   $path = PVE::Storage::path($storecfg, $drive->{file});
+   }
+
+   # for compatibility only, we prefer scsi-hd (#2408, #2355, #2380)
+   if ($path =~ m/^iscsi\:\/\// &&
+  !min_version($machine_version, 4, 1)) {
+   $devicetype = 'generic';
+   }
+}
+
+return $devicetype;
+}
 1;
-- 
2.39.2
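
A minimal usage sketch of the new helper; the storage/volume names and the
machine version string are invented and assume a PVE node where they exist:

    use PVE::Storage;
    use PVE::QemuServer::Drive;

    my $storecfg = PVE::Storage::config();
    # a drive hash as produced by parse_drive(); 'local-lvm:vm-100-disk-0' is an assumed volume
    my $drive = { interface => 'scsi', index => 0, file => 'local-lvm:vm-100-disk-0' };

    my $type = PVE::QemuServer::Drive::get_scsi_devicetype($drive, $storecfg, '8.1');
    print "scsi-$type\n";   # typically 'scsi-hd' for a regular storage-backed volume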



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH v4 qemu-server 0/4] fix #4957: add vendor and product information passthrough for SCSI-Disks

2023-11-17 Thread Hannes Duerr
changes in v2:
- when calling the API to create/update a VM, check whether the devices
are "scsi-hd" or "scsi-cd" devices, where adding vendor and product
information is possible; if not, error out
- change the format in product_fmt and vendor_fmt to a pattern that only
allows up to 40 characters consisting of upper- and lower-case letters,
numbers, '-' and '_'

changes in v3:
- split up into a preparation patch and the actual fix
- move get_scsi_devicetype into QemuServer/Drive.pm
- refactor check_scsi_feature_compatibility into assert_scsi_feature_compatibility
- call assert_scsi_feature_compatibility before creating the device
- handle the 'local-lvm:' syntax in get_scsi_devicetype
- fix style issues

changes in v4:
- create assert_scsi_feature_compatibility() in API2/Qemu.pm
- divide the preparation into smaller steps
- remove or harden brittle regexes (illustrated below)
- fix a wrong storage-name assumption
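
The regex hardening mentioned above refers to how SCSI drive keys are matched
before the compatibility check; roughly, the difference is:

    # v3 matched any key containing 'scsi', which also catches e.g. 'scsihw';
    # v4 anchors the match to real drive keys such as 'scsi0'..'scsi30'.
    for my $opt (qw(scsi0 scsi12 scsihw virtio0)) {
        my $loose  = ($opt =~ m/scsi/)        ? 'match' : 'no match';
        my $strict = ($opt =~ m/^scsi(\d)+$/) ? 'match' : 'no match';
        printf "%-8s loose=%-9s strict=%s\n", $opt, $loose, $strict;
    }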

Hannes Duerr (4):
  Move path_is_scsi to QemuServer/Drive.pm
  Move NEW_DISK_RE to QemuServer/Drive.pm
  drive: Create get_scsi_devicetype
  fix #4957: add vendor and product information passthrough for
SCSI-Disks

 PVE/API2/Qemu.pm|  49 +++--
 PVE/QemuServer.pm   | 100 +
 PVE/QemuServer/Drive.pm | 119 
 3 files changed, 177 insertions(+), 91 deletions(-)

-- 
2.39.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH v4 qemu-server 1/4] Move path_is_scsi to QemuServer/Drive.pm

2023-11-17 Thread Hannes Duerr
Prepare for introduction of new helper

Signed-off-by: Hannes Duerr 
---
 PVE/QemuServer.pm   | 62 +
 PVE/QemuServer/Drive.pm | 61 
 2 files changed, 62 insertions(+), 61 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index c465fb6..294702d 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -53,7 +53,7 @@ use PVE::QemuServer::Helpers qw(config_aware_timeout 
min_version windows_version
 use PVE::QemuServer::Cloudinit;
 use PVE::QemuServer::CGroup;
 use PVE::QemuServer::CPUConfig qw(print_cpu_device get_cpu_options);
-use PVE::QemuServer::Drive qw(is_valid_drivename drive_is_cloudinit 
drive_is_cdrom drive_is_read_only parse_drive print_drive);
+use PVE::QemuServer::Drive qw(is_valid_drivename drive_is_cloudinit 
drive_is_cdrom drive_is_read_only parse_drive print_drive path_is_scsi);
 use PVE::QemuServer::Machine;
 use PVE::QemuServer::Memory qw(get_current_memory);
 use PVE::QemuServer::Monitor qw(mon_cmd);
@@ -1342,66 +1342,6 @@ sub pve_verify_hotplug_features {
 die "unable to parse hotplug option\n";
 }
 
-sub scsi_inquiry {
-my($fh, $noerr) = @_;
-
-my $SG_IO = 0x2285;
-my $SG_GET_VERSION_NUM = 0x2282;
-
-my $versionbuf = "\x00" x 8;
-my $ret = ioctl($fh, $SG_GET_VERSION_NUM, $versionbuf);
-if (!$ret) {
-   die "scsi ioctl SG_GET_VERSION_NUM failoed - $!\n" if !$noerr;
-   return;
-}
-my $version = unpack("I", $versionbuf);
-if ($version < 3) {
-   die "scsi generic interface too old\n"  if !$noerr;
-   return;
-}
-
-my $buf = "\x00" x 36;
-my $sensebuf = "\x00" x 8;
-my $cmd = pack("C x3 C x1", 0x12, 36);
-
-# see /usr/include/scsi/sg.h
-my $sg_io_hdr_t = "i i C C s I P P P I I i P C C C C S S i I I";
-
-my $packet = pack(
-   $sg_io_hdr_t, ord('S'), -3, length($cmd), length($sensebuf), 0, 
length($buf), $buf, $cmd, $sensebuf, 6000
-);
-
-$ret = ioctl($fh, $SG_IO, $packet);
-if (!$ret) {
-   die "scsi ioctl SG_IO failed - $!\n" if !$noerr;
-   return;
-}
-
-my @res = unpack($sg_io_hdr_t, $packet);
-if ($res[17] || $res[18]) {
-   die "scsi ioctl SG_IO status error - $!\n" if !$noerr;
-   return;
-}
-
-my $res = {};
-$res->@{qw(type removable vendor product revision)} = unpack("C C x6 A8 
A16 A4", $buf);
-
-$res->{removable} = $res->{removable} & 128 ? 1 : 0;
-$res->{type} &= 0x1F;
-
-return $res;
-}
-
-sub path_is_scsi {
-my ($path) = @_;
-
-my $fh = IO::File->new("+<$path") || return;
-my $res = scsi_inquiry($fh, 1);
-close($fh);
-
-return $res;
-}
-
 sub print_tabletdevice_full {
 my ($conf, $arch) = @_;
 
diff --git a/PVE/QemuServer/Drive.pm b/PVE/QemuServer/Drive.pm
index e24ba12..dce1398 100644
--- a/PVE/QemuServer/Drive.pm
+++ b/PVE/QemuServer/Drive.pm
@@ -17,6 +17,7 @@ drive_is_cdrom
 drive_is_read_only
 parse_drive
 print_drive
+path_is_scsi
 );
 
 our $QEMU_FORMAT_RE = qr/raw|cow|qcow|qcow2|qed|vmdk|cloop/;
@@ -760,4 +761,64 @@ sub resolve_first_disk {
 return;
 }
 
+sub scsi_inquiry {
+my($fh, $noerr) = @_;
+
+my $SG_IO = 0x2285;
+my $SG_GET_VERSION_NUM = 0x2282;
+
+my $versionbuf = "\x00" x 8;
+my $ret = ioctl($fh, $SG_GET_VERSION_NUM, $versionbuf);
+if (!$ret) {
+   die "scsi ioctl SG_GET_VERSION_NUM failoed - $!\n" if !$noerr;
+   return;
+}
+my $version = unpack("I", $versionbuf);
+if ($version < 3) {
+   die "scsi generic interface too old\n"  if !$noerr;
+   return;
+}
+
+my $buf = "\x00" x 36;
+my $sensebuf = "\x00" x 8;
+my $cmd = pack("C x3 C x1", 0x12, 36);
+
+# see /usr/include/scsi/sg.h
+my $sg_io_hdr_t = "i i C C s I P P P I I i P C C C C S S i I I";
+
+my $packet = pack(
+   $sg_io_hdr_t, ord('S'), -3, length($cmd), length($sensebuf), 0, 
length($buf), $buf, $cmd, $sensebuf, 6000
+);
+
+$ret = ioctl($fh, $SG_IO, $packet);
+if (!$ret) {
+   die "scsi ioctl SG_IO failed - $!\n" if !$noerr;
+   return;
+}
+
+my @res = unpack($sg_io_hdr_t, $packet);
+if ($res[17] || $res[18]) {
+   die "scsi ioctl SG_IO status error - $!\n" if !$noerr;
+   return;
+}
+
+my $res = {};
+$res->@{qw(type removable vendor product revision)} = unpack("C C x6 A8 
A16 A4", $buf);
+
+$res->{removable} = $res->{removable} & 128 ? 1 : 0;
+$res->{type} &= 0x1F;
+
+return $res;
+}
+
+sub path_is_scsi {
+my ($path) = @_;
+
+my $fh = IO::File->new("+<$path") || return;
+my $res = scsi_inquiry($fh, 1);
+close($fh);
+
+return $res;
+}
+
 1;
-- 
2.39.2
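
For context, the moved helpers are queried like this (a sketch only; it needs
root privileges, and /dev/sda is an assumed SCSI device node):

    use PVE::QemuServer::Drive qw(path_is_scsi);

    # path_is_scsi() opens the node and issues a SCSI INQUIRY via SG_IO; it
    # returns a hash reference with the decoded fields, or nothing if the path
    # is not a SCSI device.
    if (my $info = path_is_scsi('/dev/sda')) {
        printf "type=%d vendor=%s product=%s removable=%d\n",
            $info->{type}, $info->{vendor}, $info->{product}, $info->{removable};
    } else {
        print "/dev/sda does not answer SCSI INQUIRY\n";
    }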



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH v4 qemu-server 2/4] Move NEW_DISK_RE to QemuServer/Drive.pm

2023-11-17 Thread Hannes Duerr
Move it for better context and in preparation for the fix

Signed-off-by: Hannes Duerr 
---
 PVE/API2/Qemu.pm| 10 --
 PVE/QemuServer/Drive.pm |  1 +
 2 files changed, 5 insertions(+), 6 deletions(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 38bdaab..b9c8f20 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -86,8 +86,6 @@ my $foreach_volume_with_alloc = sub {
 }
 };
 
-my $NEW_DISK_RE = qr!^(([^/:\s]+):)?(\d+(\.\d+)?)$!;
-
 my $check_drive_param = sub {
 my ($param, $storecfg, $extra_checks) = @_;
 
@@ -98,7 +96,7 @@ my $check_drive_param = sub {
raise_param_exc({ $opt => "unable to parse drive options" }) if !$drive;
 
if ($drive->{'import-from'}) {
-   if ($drive->{file} !~ $NEW_DISK_RE || $3 != 0) {
+   if ($drive->{file} !~ $PVE::QemuServer::Drive::NEW_DISK_RE || $3 != 0) {
raise_param_exc({
$opt => "'import-from' requires special syntax - ".
"use :0,import-from=",
@@ -142,7 +140,7 @@ my $check_storage_access = sub {
# nothing to check
} elsif ($isCDROM && ($volid eq 'cdrom')) {
$rpcenv->check($authuser, "/", ['Sys.Console']);
-   } elsif (!$isCDROM && ($volid =~ $NEW_DISK_RE)) {
+   } elsif (!$isCDROM && ($volid =~ $PVE::QemuServer::Drive::NEW_DISK_RE)) {
my ($storeid, $size) = ($2 || $default_storage, $3);
die "no storage ID specified (and no default storage)\n" if 
!$storeid;
$rpcenv->check($authuser, "/storage/$storeid", 
['Datastore.AllocateSpace']);
@@ -365,7 +363,7 @@ my $create_disks = sub {
delete $disk->{format}; # no longer needed
$res->{$ds} = PVE::QemuServer::print_drive($disk);
print "$ds: successfully created disk '$res->{$ds}'\n";
-   } elsif ($volid =~ $NEW_DISK_RE) {
+   } elsif ($volid =~ $PVE::QemuServer::Drive::NEW_DISK_RE) {
my ($storeid, $size) = ($2 || $default_storage, $3);
die "no storage ID specified (and no default storage)\n" if 
!$storeid;
 
@@ -1626,7 +1624,7 @@ my $update_vm_api  = sub {
return if defined($volname) && $volname eq 'cloudinit';
 
my $format;
-   if ($volid =~ $NEW_DISK_RE) {
+   if ($volid =~ $PVE::QemuServer::Drive::NEW_DISK_RE) {
$storeid = $2;
$format = $drive->{format} || 
PVE::Storage::storage_default_format($storecfg, $storeid);
} else {
diff --git a/PVE/QemuServer/Drive.pm b/PVE/QemuServer/Drive.pm
index dce1398..6d94a2f 100644
--- a/PVE/QemuServer/Drive.pm
+++ b/PVE/QemuServer/Drive.pm
@@ -34,6 +34,7 @@ my $MAX_SCSI_DISKS = 31;
 my $MAX_VIRTIO_DISKS = 16;
 our $MAX_SATA_DISKS = 6;
 our $MAX_UNUSED_DISKS = 256;
+our $NEW_DISK_RE = qr!^(([^/:\s]+):)?(\d+(\.\d+)?)$!;
 
 our $drivedesc_hash;
 # Schema when disk allocation is possible.
-- 
2.39.2
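
For reference, NEW_DISK_RE matches the "allocate a new disk" volume syntax,
i.e. an optional storage ID followed by a size (GiB in the API), while real
volume IDs do not match. A quick sketch with invented values:

    my $NEW_DISK_RE = qr!^(([^/:\s]+):)?(\d+(\.\d+)?)$!;

    for my $volid ('local-lvm:32', 'local-lvm:0', '4.5', 'local-lvm:vm-100-disk-0') {
        if ($volid =~ $NEW_DISK_RE) {
            my ($storeid, $size) = ($2 // 'default storage', $3);
            print "$volid => allocate on '$storeid', size $size\n";
        } else {
            print "$volid => existing volume, not the allocation syntax\n";
        }
    }
    # 'local-lvm:0' is the size-0 form that 'import-from' requires, as checked
    # in $check_drive_param above.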



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH v4 qemu-server 4/4] fix #4957: add vendor and product information passthrough for SCSI-Disks

2023-11-17 Thread Hannes Duerr
adds vendor and product information for SCSI devices to the JSON schema and
checks in the VM create/update API call whether it is possible to pass these
to QEMU as a device option

Signed-off-by: Hannes Duerr 
---
 PVE/API2/Qemu.pm| 39 +++
 PVE/QemuServer.pm   | 13 -
 PVE/QemuServer/Drive.pm | 24 
 3 files changed, 75 insertions(+), 1 deletion(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index b9c8f20..fc8c876 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -696,6 +696,33 @@ my $check_vm_modify_config_perm = sub {
 return 1;
 };
 
+sub assert_scsi_feature_compatibility {
+my ($opt, $conf, $storecfg, $drive_attributes) = @_;
+
+my $drive = PVE::QemuServer::Drive::parse_drive($opt, $drive_attributes);
+
+my $machine_type = PVE::QemuServer::get_vm_machine($conf, undef, $conf->{arch});
+my $machine_version = PVE::QemuServer::extract_version(
+   $machine_type, PVE::QemuServer::kvm_user_version());
+my $drivetype = PVE::QemuServer::Drive::get_scsi_devicetype(
+   $drive, $storecfg, $machine_version);
+
+if ($drivetype ne 'hd' && $drivetype ne 'cd') {
+   if ($drive->{product}) {
+   raise_param_exc({
+   product => "Passing of product information is only supported 
for".
+   "'scsi-hd' and 'scsi-cd' devices (e.g. not pass-through)."
+   });
+   }
+   if ($drive->{vendor}) {
+   raise_param_exc({
+   vendor => "Passing of vendor information is only supported for".
+   "'scsi-hd' and 'scsi-cd' devices (e.g. not pass-through)."
+   });
+   }
+}
+}
+
 __PACKAGE__->register_method({
 name => 'vmlist',
 path => '',
@@ -1011,6 +1038,13 @@ __PACKAGE__->register_method({
my $conf = $param;
my $arch = PVE::QemuServer::get_vm_arch($conf);
 
+   for my $opt (sort keys $param->%*) {
+   if ($opt =~ m/^scsi(\d)+$/) {
+   assert_scsi_feature_compatibility(
+   $opt, $conf, $storecfg, $param->{$opt});
+   }
+   }
+
$conf->{meta} = PVE::QemuServer::new_meta_info_string();
 
my $vollist = [];
@@ -1826,6 +1860,11 @@ my $update_vm_api  = sub {
PVE::QemuServer::vmconfig_register_unused_drive($storecfg, 
$vmid, $conf, PVE::QemuServer::parse_drive($opt, $conf->{pending}->{$opt}))
if defined($conf->{pending}->{$opt});
 
+   if ($opt =~ m/^scsi(\d)+$/) {
+   PVE::QemuServer::assert_scsi_feature_compatibility(
+   $opt, $conf, $storecfg, $param->{$opt});
+   }
+
my (undef, $created_opts) = $create_disks->(
$rpcenv,
$authuser,
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 6090f91..4fbb9b2 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -1210,7 +1210,8 @@ sub kvm_user_version {
 return $kvm_user_version->{$binary};
 
 }
-my sub extract_version {
+
+our sub extract_version {
 my ($machine_type, $version) = @_;
 $version = kvm_user_version() if !defined($version);
 return PVE::QemuServer::Machine::extract_version($machine_type, $version)
@@ -1404,6 +1405,16 @@ sub print_drivedevice_full {
}
$device .= ",wwn=$drive->{wwn}" if $drive->{wwn};
 
+   # only scsi-hd and scsi-cd support passing vendor and product information
+   if ($devicetype eq 'hd' || $devicetype eq 'cd') {
+   if (my $vendor = $drive->{vendor}) {
+   $device .= ",vendor=$vendor";
+   }
+   if (my $product = $drive->{product}) {
+   $device .= ",product=$product";
+   }
+   }
+
 } elsif ($drive->{interface} eq 'ide' || $drive->{interface} eq 'sata') {
my $maxdev = ($drive->{interface} eq 'sata') ? 
$PVE::QemuServer::Drive::MAX_SATA_DISKS : 2;
my $controller = int($drive->{index} / $maxdev);
diff --git a/PVE/QemuServer/Drive.pm b/PVE/QemuServer/Drive.pm
index de62d43..4e1646d 100644
--- a/PVE/QemuServer/Drive.pm
+++ b/PVE/QemuServer/Drive.pm
@@ -161,6 +161,26 @@ my %iothread_fmt = ( iothread => {
optional => 1,
 });
 
+my %product_fmt = (
+product => {
+   type => 'string',
+   pattern => '[A-Za-z0-9\-_]{,40}',
+   format_description => 'product',
+   description => "The drive's product name, up to 40 bytes long.",
+   optional => 1,
+},
+);
+
+my %vendor_fmt = (
+

[pve-devel] [PATCH v5 qemu-server 0/4] fix #4957: add vendor and product information passthrough for SCSI-Disks

2023-11-17 Thread Hannes Duerr
changes in v2:
- when calling the API to create/update a VM, check whether the devices
are "scsi-hd" or "scsi-cd" devices, where adding vendor and product
information is possible; if not, error out
- change the format in product_fmt and vendor_fmt to a pattern that only
allows up to 40 characters consisting of upper- and lower-case letters,
numbers, '-' and '_'

changes in v3:
- split up into a preparation patch and the actual fix
- move get_scsi_devicetype into QemuServer/Drive.pm
- refactor check_scsi_feature_compatibility into assert_scsi_feature_compatibility
- call assert_scsi_feature_compatibility before creating the device
- handle the 'local-lvm:' syntax in get_scsi_devicetype
- fix style issues

changes in v4:
- create assert_scsi_feature_compatibility() in API2/Qemu.pm
- divide the preparation into smaller steps
- remove or harden brittle regexes
- fix a wrong storage-name assumption

changes in v5:
- fix copy/paste mistake

Hannes Duerr (4):
  Move path_is_scsi to QemuServer/Drive.pm
  Move NEW_DISK_RE to QemuServer/Drive.pm
  drive: Create get_scsi_devicetype
  fix #4957: add vendor and product information passthrough for
SCSI-Disks

 PVE/API2/Qemu.pm|  49 +++--
 PVE/QemuServer.pm   | 100 +
 PVE/QemuServer/Drive.pm | 119 
 3 files changed, 177 insertions(+), 91 deletions(-)

-- 
2.39.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH v5 qemu-server 1/4] Move path_is_scsi to QemuServer/Drive.pm

2023-11-17 Thread Hannes Duerr
Prepare for introduction of new helper

Signed-off-by: Hannes Duerr 
---
 PVE/QemuServer.pm   | 62 +
 PVE/QemuServer/Drive.pm | 61 
 2 files changed, 62 insertions(+), 61 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index c465fb6..294702d 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -53,7 +53,7 @@ use PVE::QemuServer::Helpers qw(config_aware_timeout 
min_version windows_version
 use PVE::QemuServer::Cloudinit;
 use PVE::QemuServer::CGroup;
 use PVE::QemuServer::CPUConfig qw(print_cpu_device get_cpu_options);
-use PVE::QemuServer::Drive qw(is_valid_drivename drive_is_cloudinit 
drive_is_cdrom drive_is_read_only parse_drive print_drive);
+use PVE::QemuServer::Drive qw(is_valid_drivename drive_is_cloudinit 
drive_is_cdrom drive_is_read_only parse_drive print_drive path_is_scsi);
 use PVE::QemuServer::Machine;
 use PVE::QemuServer::Memory qw(get_current_memory);
 use PVE::QemuServer::Monitor qw(mon_cmd);
@@ -1342,66 +1342,6 @@ sub pve_verify_hotplug_features {
 die "unable to parse hotplug option\n";
 }
 
-sub scsi_inquiry {
-my($fh, $noerr) = @_;
-
-my $SG_IO = 0x2285;
-my $SG_GET_VERSION_NUM = 0x2282;
-
-my $versionbuf = "\x00" x 8;
-my $ret = ioctl($fh, $SG_GET_VERSION_NUM, $versionbuf);
-if (!$ret) {
-   die "scsi ioctl SG_GET_VERSION_NUM failoed - $!\n" if !$noerr;
-   return;
-}
-my $version = unpack("I", $versionbuf);
-if ($version < 3) {
-   die "scsi generic interface too old\n"  if !$noerr;
-   return;
-}
-
-my $buf = "\x00" x 36;
-my $sensebuf = "\x00" x 8;
-my $cmd = pack("C x3 C x1", 0x12, 36);
-
-# see /usr/include/scsi/sg.h
-my $sg_io_hdr_t = "i i C C s I P P P I I i P C C C C S S i I I";
-
-my $packet = pack(
-   $sg_io_hdr_t, ord('S'), -3, length($cmd), length($sensebuf), 0, 
length($buf), $buf, $cmd, $sensebuf, 6000
-);
-
-$ret = ioctl($fh, $SG_IO, $packet);
-if (!$ret) {
-   die "scsi ioctl SG_IO failed - $!\n" if !$noerr;
-   return;
-}
-
-my @res = unpack($sg_io_hdr_t, $packet);
-if ($res[17] || $res[18]) {
-   die "scsi ioctl SG_IO status error - $!\n" if !$noerr;
-   return;
-}
-
-my $res = {};
-$res->@{qw(type removable vendor product revision)} = unpack("C C x6 A8 
A16 A4", $buf);
-
-$res->{removable} = $res->{removable} & 128 ? 1 : 0;
-$res->{type} &= 0x1F;
-
-return $res;
-}
-
-sub path_is_scsi {
-my ($path) = @_;
-
-my $fh = IO::File->new("+<$path") || return;
-my $res = scsi_inquiry($fh, 1);
-close($fh);
-
-return $res;
-}
-
 sub print_tabletdevice_full {
 my ($conf, $arch) = @_;
 
diff --git a/PVE/QemuServer/Drive.pm b/PVE/QemuServer/Drive.pm
index e24ba12..dce1398 100644
--- a/PVE/QemuServer/Drive.pm
+++ b/PVE/QemuServer/Drive.pm
@@ -17,6 +17,7 @@ drive_is_cdrom
 drive_is_read_only
 parse_drive
 print_drive
+path_is_scsi
 );
 
 our $QEMU_FORMAT_RE = qr/raw|cow|qcow|qcow2|qed|vmdk|cloop/;
@@ -760,4 +761,64 @@ sub resolve_first_disk {
 return;
 }
 
+sub scsi_inquiry {
+my($fh, $noerr) = @_;
+
+my $SG_IO = 0x2285;
+my $SG_GET_VERSION_NUM = 0x2282;
+
+my $versionbuf = "\x00" x 8;
+my $ret = ioctl($fh, $SG_GET_VERSION_NUM, $versionbuf);
+if (!$ret) {
+   die "scsi ioctl SG_GET_VERSION_NUM failoed - $!\n" if !$noerr;
+   return;
+}
+my $version = unpack("I", $versionbuf);
+if ($version < 3) {
+   die "scsi generic interface too old\n"  if !$noerr;
+   return;
+}
+
+my $buf = "\x00" x 36;
+my $sensebuf = "\x00" x 8;
+my $cmd = pack("C x3 C x1", 0x12, 36);
+
+# see /usr/include/scsi/sg.h
+my $sg_io_hdr_t = "i i C C s I P P P I I i P C C C C S S i I I";
+
+my $packet = pack(
+   $sg_io_hdr_t, ord('S'), -3, length($cmd), length($sensebuf), 0, 
length($buf), $buf, $cmd, $sensebuf, 6000
+);
+
+$ret = ioctl($fh, $SG_IO, $packet);
+if (!$ret) {
+   die "scsi ioctl SG_IO failed - $!\n" if !$noerr;
+   return;
+}
+
+my @res = unpack($sg_io_hdr_t, $packet);
+if ($res[17] || $res[18]) {
+   die "scsi ioctl SG_IO status error - $!\n" if !$noerr;
+   return;
+}
+
+my $res = {};
+$res->@{qw(type removable vendor product revision)} = unpack("C C x6 A8 
A16 A4", $buf);
+
+$res->{removable} = $res->{removable} & 128 ? 1 : 0;
+$res->{type} &= 0x1F;
+
+return $res;
+}
+
+sub path_is_scsi {
+my ($path) = @_;
+
+my $fh = IO::File->new("+<$path") || return;
+my $res = scsi_inquiry($fh, 1);
+close($fh);
+
+return $res;
+}
+
 1;
-- 
2.39.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH v5 qemu-server 3/4] drive: Create get_scsi_devicetype

2023-11-17 Thread Hannes Duerr
Encapsulation of the functionality for determining the scsi device type in a 
new function
for reusability in QemuServer/Drive.pm

Signed-off-by: Hannes Duerr 
---
 PVE/QemuServer.pm   | 29 -
 PVE/QemuServer/Drive.pm | 35 ++-
 2 files changed, 38 insertions(+), 26 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 294702d..6090f91 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -53,7 +53,7 @@ use PVE::QemuServer::Helpers qw(config_aware_timeout 
min_version windows_version
 use PVE::QemuServer::Cloudinit;
 use PVE::QemuServer::CGroup;
 use PVE::QemuServer::CPUConfig qw(print_cpu_device get_cpu_options);
-use PVE::QemuServer::Drive qw(is_valid_drivename drive_is_cloudinit 
drive_is_cdrom drive_is_read_only parse_drive print_drive path_is_scsi);
+use PVE::QemuServer::Drive qw(is_valid_drivename drive_is_cloudinit 
drive_is_cdrom drive_is_read_only parse_drive print_drive);
 use PVE::QemuServer::Machine;
 use PVE::QemuServer::Memory qw(get_current_memory);
 use PVE::QemuServer::Monitor qw(mon_cmd);
@@ -1386,31 +1386,10 @@ sub print_drivedevice_full {
 
my ($maxdev, $controller, $controller_prefix) = scsihw_infos($conf, 
$drive);
my $unit = $drive->{index} % $maxdev;
-   my $devicetype = 'hd';
-   my $path = '';
-   if (drive_is_cdrom($drive)) {
-   $devicetype = 'cd';
-   } else {
-   if ($drive->{file} =~ m|^/|) {
-   $path = $drive->{file};
-   if (my $info = path_is_scsi($path)) {
-   if ($info->{type} == 0 && $drive->{scsiblock}) {
-   $devicetype = 'block';
-   } elsif ($info->{type} == 1) { # tape
-   $devicetype = 'generic';
-   }
-   }
-   } else {
-$path = PVE::Storage::path($storecfg, $drive->{file});
-   }
 
-   # for compatibility only, we prefer scsi-hd (#2408, #2355, #2380)
-   my $version = extract_version($machine_type, kvm_user_version());
-   if ($path =~ m/^iscsi\:\/\// &&
-  !min_version($version, 4, 1)) {
-   $devicetype = 'generic';
-   }
-   }
+   my $machine_version = extract_version($machine_type, kvm_user_version());
+   my $devicetype  = PVE::QemuServer::Drive::get_scsi_devicetype(
+   $drive, $storecfg, $machine_version);
 
if (!$conf->{scsihw} || $conf->{scsihw} =~ m/^lsi/ || $conf->{scsihw} 
eq 'pvscsi') {
$device = 
"scsi-$devicetype,bus=$controller_prefix$controller.0,scsi-id=$unit";
diff --git a/PVE/QemuServer/Drive.pm b/PVE/QemuServer/Drive.pm
index 6d94a2f..de62d43 100644
--- a/PVE/QemuServer/Drive.pm
+++ b/PVE/QemuServer/Drive.pm
@@ -15,9 +15,9 @@ is_valid_drivename
 drive_is_cloudinit
 drive_is_cdrom
 drive_is_read_only
+get_scsi_devicetype
 parse_drive
 print_drive
-path_is_scsi
 );
 
 our $QEMU_FORMAT_RE = qr/raw|cow|qcow|qcow2|qed|vmdk|cloop/;
@@ -822,4 +822,37 @@ sub path_is_scsi {
 return $res;
 }
 
+sub get_scsi_devicetype {
+my ($drive, $storecfg, $machine_version) = @_;
+
+my $devicetype = 'hd';
+my $path = '';
+if (drive_is_cdrom($drive)) {
+   $devicetype = 'cd';
+} else {
+   if ($drive->{file} =~ m|^/|) {
+   $path = $drive->{file};
+   if (my $info = path_is_scsi($path)) {
+   if ($info->{type} == 0 && $drive->{scsiblock}) {
+   $devicetype = 'block';
+   } elsif ($info->{type} == 1) { # tape
+   $devicetype = 'generic';
+   }
+   }
+   } elsif ($drive->{file} =~ $NEW_DISK_RE){
+   # special syntax cannot be parsed to path
+   return $devicetype;
+   } else {
+   $path = PVE::Storage::path($storecfg, $drive->{file});
+   }
+
+   # for compatibility only, we prefer scsi-hd (#2408, #2355, #2380)
+   if ($path =~ m/^iscsi\:\/\// &&
+  !min_version($machine_version, 4, 1)) {
+   $devicetype = 'generic';
+   }
+}
+
+return $devicetype;
+}
 1;
-- 
2.39.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH v5 qemu-server 2/4] Move NEW_DISK_RE to QemuServer/Drive.pm

2023-11-17 Thread Hannes Duerr
Move it for better context and in preparation for the fix

Signed-off-by: Hannes Duerr 
---
 PVE/API2/Qemu.pm| 10 --
 PVE/QemuServer/Drive.pm |  1 +
 2 files changed, 5 insertions(+), 6 deletions(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 38bdaab..b9c8f20 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -86,8 +86,6 @@ my $foreach_volume_with_alloc = sub {
 }
 };
 
-my $NEW_DISK_RE = qr!^(([^/:\s]+):)?(\d+(\.\d+)?)$!;
-
 my $check_drive_param = sub {
 my ($param, $storecfg, $extra_checks) = @_;
 
@@ -98,7 +96,7 @@ my $check_drive_param = sub {
raise_param_exc({ $opt => "unable to parse drive options" }) if !$drive;
 
if ($drive->{'import-from'}) {
-   if ($drive->{file} !~ $NEW_DISK_RE || $3 != 0) {
+   if ($drive->{file} !~ $PVE::QemuServer::Drive::NEW_DISK_RE || $3 != 0) {
raise_param_exc({
$opt => "'import-from' requires special syntax - ".
"use :0,import-from=",
@@ -142,7 +140,7 @@ my $check_storage_access = sub {
# nothing to check
} elsif ($isCDROM && ($volid eq 'cdrom')) {
$rpcenv->check($authuser, "/", ['Sys.Console']);
-   } elsif (!$isCDROM && ($volid =~ $NEW_DISK_RE)) {
+   } elsif (!$isCDROM && ($volid =~ $PVE::QemuServer::Drive::NEW_DISK_RE)) {
my ($storeid, $size) = ($2 || $default_storage, $3);
die "no storage ID specified (and no default storage)\n" if 
!$storeid;
$rpcenv->check($authuser, "/storage/$storeid", 
['Datastore.AllocateSpace']);
@@ -365,7 +363,7 @@ my $create_disks = sub {
delete $disk->{format}; # no longer needed
$res->{$ds} = PVE::QemuServer::print_drive($disk);
print "$ds: successfully created disk '$res->{$ds}'\n";
-   } elsif ($volid =~ $NEW_DISK_RE) {
+   } elsif ($volid =~ $PVE::QemuServer::Drive::NEW_DISK_RE) {
my ($storeid, $size) = ($2 || $default_storage, $3);
die "no storage ID specified (and no default storage)\n" if 
!$storeid;
 
@@ -1626,7 +1624,7 @@ my $update_vm_api  = sub {
return if defined($volname) && $volname eq 'cloudinit';
 
my $format;
-   if ($volid =~ $NEW_DISK_RE) {
+   if ($volid =~ $PVE::QemuServer::Drive::NEW_DISK_RE) {
$storeid = $2;
$format = $drive->{format} || 
PVE::Storage::storage_default_format($storecfg, $storeid);
} else {
diff --git a/PVE/QemuServer/Drive.pm b/PVE/QemuServer/Drive.pm
index dce1398..6d94a2f 100644
--- a/PVE/QemuServer/Drive.pm
+++ b/PVE/QemuServer/Drive.pm
@@ -34,6 +34,7 @@ my $MAX_SCSI_DISKS = 31;
 my $MAX_VIRTIO_DISKS = 16;
 our $MAX_SATA_DISKS = 6;
 our $MAX_UNUSED_DISKS = 256;
+our $NEW_DISK_RE = qr!^(([^/:\s]+):)?(\d+(\.\d+)?)$!;
 
 our $drivedesc_hash;
 # Schema when disk allocation is possible.
-- 
2.39.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH v5 qemu-server 4/4] fix #4957: add vendor and product information passthrough for SCSI-Disks

2023-11-17 Thread Hannes Duerr
adds vendor and product information for SCSI devices to the JSON schema and
checks in the VM create/update API call whether it is possible to pass these
to QEMU as a device option

Signed-off-by: Hannes Duerr 
---
 PVE/API2/Qemu.pm| 39 +++
 PVE/QemuServer.pm   | 13 -
 PVE/QemuServer/Drive.pm | 24 
 3 files changed, 75 insertions(+), 1 deletion(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index b9c8f20..75c7161 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -696,6 +696,33 @@ my $check_vm_modify_config_perm = sub {
 return 1;
 };
 
+sub assert_scsi_feature_compatibility {
+my ($opt, $conf, $storecfg, $drive_attributes) = @_;
+
+my $drive = PVE::QemuServer::Drive::parse_drive($opt, $drive_attributes);
+
+my $machine_type = PVE::QemuServer::get_vm_machine($conf, undef, $conf->{arch});
+my $machine_version = PVE::QemuServer::extract_version(
+   $machine_type, PVE::QemuServer::kvm_user_version());
+my $drivetype = PVE::QemuServer::Drive::get_scsi_devicetype(
+   $drive, $storecfg, $machine_version);
+
+if ($drivetype ne 'hd' && $drivetype ne 'cd') {
+   if ($drive->{product}) {
+   raise_param_exc({
+   product => "Passing of product information is only supported 
for".
+   "'scsi-hd' and 'scsi-cd' devices (e.g. not pass-through)."
+   });
+   }
+   if ($drive->{vendor}) {
+   raise_param_exc({
+   vendor => "Passing of vendor information is only supported for".
+   "'scsi-hd' and 'scsi-cd' devices (e.g. not pass-through)."
+   });
+   }
+}
+}
+
 __PACKAGE__->register_method({
 name => 'vmlist',
 path => '',
@@ -1011,6 +1038,13 @@ __PACKAGE__->register_method({
my $conf = $param;
my $arch = PVE::QemuServer::get_vm_arch($conf);
 
+   for my $opt (sort keys $param->%*) {
+   if ($opt =~ m/^scsi(\d)+$/) {
+   assert_scsi_feature_compatibility(
+   $opt, $conf, $storecfg, $param->{$opt});
+   }
+   }
+
$conf->{meta} = PVE::QemuServer::new_meta_info_string();
 
my $vollist = [];
@@ -1826,6 +1860,11 @@ my $update_vm_api  = sub {
PVE::QemuServer::vmconfig_register_unused_drive($storecfg, 
$vmid, $conf, PVE::QemuServer::parse_drive($opt, $conf->{pending}->{$opt}))
if defined($conf->{pending}->{$opt});
 
+   if ($opt =~ m/^scsi(\d)+$/) {
+   assert_scsi_feature_compatibility(
+   $opt, $conf, $storecfg, $param->{$opt});
+   }
+
my (undef, $created_opts) = $create_disks->(
$rpcenv,
$authuser,
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 6090f91..4fbb9b2 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -1210,7 +1210,8 @@ sub kvm_user_version {
 return $kvm_user_version->{$binary};
 
 }
-my sub extract_version {
+
+our sub extract_version {
 my ($machine_type, $version) = @_;
 $version = kvm_user_version() if !defined($version);
 return PVE::QemuServer::Machine::extract_version($machine_type, $version)
@@ -1404,6 +1405,16 @@ sub print_drivedevice_full {
}
$device .= ",wwn=$drive->{wwn}" if $drive->{wwn};
 
+   # only scsi-hd and scsi-cd support passing vendor and product information
+   if ($devicetype eq 'hd' || $devicetype eq 'cd') {
+   if (my $vendor = $drive->{vendor}) {
+   $device .= ",vendor=$vendor";
+   }
+   if (my $product = $drive->{product}) {
+   $device .= ",product=$product";
+   }
+   }
+
 } elsif ($drive->{interface} eq 'ide' || $drive->{interface} eq 'sata') {
my $maxdev = ($drive->{interface} eq 'sata') ? 
$PVE::QemuServer::Drive::MAX_SATA_DISKS : 2;
my $controller = int($drive->{index} / $maxdev);
diff --git a/PVE/QemuServer/Drive.pm b/PVE/QemuServer/Drive.pm
index de62d43..4e1646d 100644
--- a/PVE/QemuServer/Drive.pm
+++ b/PVE/QemuServer/Drive.pm
@@ -161,6 +161,26 @@ my %iothread_fmt = ( iothread => {
optional => 1,
 });
 
+my %product_fmt = (
+product => {
+   type => 'string',
+   pattern => '[A-Za-z0-9\-_]{,40}',
+   format_description => 'product',
+   description => "The drive's product name, up to 40 bytes long.",
+   optional => 1,
+},
+);
+
+my %vendor_fmt = (
+vendor => {
+   

[pve-devel] [PATCH v6 qemu-server 1/4] Move path_is_scsi to QemuServer/Drive.pm

2023-12-05 Thread Hannes Duerr
Prepare for introduction of new helper

Signed-off-by: Hannes Duerr 
---
 PVE/QemuServer.pm   | 62 +---
 PVE/QemuServer/Drive.pm | 63 +
 2 files changed, 64 insertions(+), 61 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 2063e66..7e69924 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -53,7 +53,7 @@ use PVE::QemuServer::Helpers qw(config_aware_timeout 
min_version windows_version
 use PVE::QemuServer::Cloudinit;
 use PVE::QemuServer::CGroup;
 use PVE::QemuServer::CPUConfig qw(print_cpu_device get_cpu_options);
-use PVE::QemuServer::Drive qw(is_valid_drivename drive_is_cloudinit 
drive_is_cdrom drive_is_read_only parse_drive print_drive);
+use PVE::QemuServer::Drive qw(is_valid_drivename drive_is_cloudinit 
drive_is_cdrom drive_is_read_only parse_drive print_drive path_is_scsi);
 use PVE::QemuServer::Machine;
 use PVE::QemuServer::Memory qw(get_current_memory);
 use PVE::QemuServer::Monitor qw(mon_cmd);
@@ -1365,66 +1365,6 @@ sub assert_clipboard_config {
 }
 }
 
-sub scsi_inquiry {
-my($fh, $noerr) = @_;
-
-my $SG_IO = 0x2285;
-my $SG_GET_VERSION_NUM = 0x2282;
-
-my $versionbuf = "\x00" x 8;
-my $ret = ioctl($fh, $SG_GET_VERSION_NUM, $versionbuf);
-if (!$ret) {
-   die "scsi ioctl SG_GET_VERSION_NUM failoed - $!\n" if !$noerr;
-   return;
-}
-my $version = unpack("I", $versionbuf);
-if ($version < 3) {
-   die "scsi generic interface too old\n"  if !$noerr;
-   return;
-}
-
-my $buf = "\x00" x 36;
-my $sensebuf = "\x00" x 8;
-my $cmd = pack("C x3 C x1", 0x12, 36);
-
-# see /usr/include/scsi/sg.h
-my $sg_io_hdr_t = "i i C C s I P P P I I i P C C C C S S i I I";
-
-my $packet = pack(
-   $sg_io_hdr_t, ord('S'), -3, length($cmd), length($sensebuf), 0, 
length($buf), $buf, $cmd, $sensebuf, 6000
-);
-
-$ret = ioctl($fh, $SG_IO, $packet);
-if (!$ret) {
-   die "scsi ioctl SG_IO failed - $!\n" if !$noerr;
-   return;
-}
-
-my @res = unpack($sg_io_hdr_t, $packet);
-if ($res[17] || $res[18]) {
-   die "scsi ioctl SG_IO status error - $!\n" if !$noerr;
-   return;
-}
-
-my $res = {};
-$res->@{qw(type removable vendor product revision)} = unpack("C C x6 A8 
A16 A4", $buf);
-
-$res->{removable} = $res->{removable} & 128 ? 1 : 0;
-$res->{type} &= 0x1F;
-
-return $res;
-}
-
-sub path_is_scsi {
-my ($path) = @_;
-
-my $fh = IO::File->new("+<$path") || return;
-my $res = scsi_inquiry($fh, 1);
-close($fh);
-
-return $res;
-}
-
 sub print_tabletdevice_full {
 my ($conf, $arch) = @_;
 
diff --git a/PVE/QemuServer/Drive.pm b/PVE/QemuServer/Drive.pm
index e24ba12..f3fbaaa 100644
--- a/PVE/QemuServer/Drive.pm
+++ b/PVE/QemuServer/Drive.pm
@@ -5,6 +5,8 @@ use warnings;
 
 use Storable qw(dclone);
 
+use IO::File;
+
 use PVE::Storage;
 use PVE::JSONSchema qw(get_standard_option);
 
@@ -17,6 +19,7 @@ drive_is_cdrom
 drive_is_read_only
 parse_drive
 print_drive
+path_is_scsi
 );
 
 our $QEMU_FORMAT_RE = qr/raw|cow|qcow|qcow2|qed|vmdk|cloop/;
@@ -760,4 +763,64 @@ sub resolve_first_disk {
 return;
 }
 
+sub scsi_inquiry {
+my($fh, $noerr) = @_;
+
+my $SG_IO = 0x2285;
+my $SG_GET_VERSION_NUM = 0x2282;
+
+my $versionbuf = "\x00" x 8;
+my $ret = ioctl($fh, $SG_GET_VERSION_NUM, $versionbuf);
+if (!$ret) {
+   die "scsi ioctl SG_GET_VERSION_NUM failoed - $!\n" if !$noerr;
+   return;
+}
+my $version = unpack("I", $versionbuf);
+if ($version < 3) {
+   die "scsi generic interface too old\n"  if !$noerr;
+   return;
+}
+
+my $buf = "\x00" x 36;
+my $sensebuf = "\x00" x 8;
+my $cmd = pack("C x3 C x1", 0x12, 36);
+
+# see /usr/include/scsi/sg.h
+my $sg_io_hdr_t = "i i C C s I P P P I I i P C C C C S S i I I";
+
+my $packet = pack(
+   $sg_io_hdr_t, ord('S'), -3, length($cmd), length($sensebuf), 0, 
length($buf), $buf, $cmd, $sensebuf, 6000
+);
+
+$ret = ioctl($fh, $SG_IO, $packet);
+if (!$ret) {
+   die "scsi ioctl SG_IO failed - $!\n" if !$noerr;
+   return;
+}
+
+my @res = unpack($sg_io_hdr_t, $packet);
+if ($res[17] || $res[18]) {
+   die "scsi ioctl SG_IO status error - $!\n" if !$noerr;
+   return;
+}
+
+my $res = {};
+$res->@{qw(type removable vendor product revision)} = unpack("C C x6 A8 
A16 A4", $buf);
+
+$res->{removable} = $res->{removable} & 128 ? 1 : 0;
+$res->{type} &= 0x1F;
+
+return $res;
+}
+
+sub path_is_scsi {
+my ($path) = @_;
+
+my $fh = IO::File->new("+<$path") || return;
+my $res = scsi_inquiry($fh, 1);
+close($fh);
+
+return $res;
+}
+
 1;
-- 
2.39.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH v6 qemu-server 2/4] Move NEW_DISK_RE to QemuServer/Drive.pm

2023-12-05 Thread Hannes Duerr
Prepare for introduction of new helper

Signed-off-by: Hannes Duerr 
---
 PVE/API2/Qemu.pm| 10 --
 PVE/QemuServer/Drive.pm |  1 +
 2 files changed, 5 insertions(+), 6 deletions(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index f26adf5..9e3cfb5 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -86,8 +86,6 @@ my $foreach_volume_with_alloc = sub {
 }
 };
 
-my $NEW_DISK_RE = qr!^(([^/:\s]+):)?(\d+(\.\d+)?)$!;
-
 my $check_drive_param = sub {
 my ($param, $storecfg, $extra_checks) = @_;
 
@@ -98,7 +96,7 @@ my $check_drive_param = sub {
raise_param_exc({ $opt => "unable to parse drive options" }) if !$drive;
 
if ($drive->{'import-from'}) {
-   if ($drive->{file} !~ $NEW_DISK_RE || $3 != 0) {
+   if ($drive->{file} !~ $PVE::QemuServer::Drive::NEW_DISK_RE || $3 != 
0) {
raise_param_exc({
$opt => "'import-from' requires special syntax - ".
"use :0,import-from=",
@@ -142,7 +140,7 @@ my $check_storage_access = sub {
# nothing to check
} elsif ($isCDROM && ($volid eq 'cdrom')) {
$rpcenv->check($authuser, "/", ['Sys.Console']);
-   } elsif (!$isCDROM && ($volid =~ $NEW_DISK_RE)) {
+   } elsif (!$isCDROM && ($volid =~ $PVE::QemuServer::Drive::NEW_DISK_RE)) 
{
my ($storeid, $size) = ($2 || $default_storage, $3);
die "no storage ID specified (and no default storage)\n" if 
!$storeid;
$rpcenv->check($authuser, "/storage/$storeid", 
['Datastore.AllocateSpace']);
@@ -365,7 +363,7 @@ my $create_disks = sub {
delete $disk->{format}; # no longer needed
$res->{$ds} = PVE::QemuServer::print_drive($disk);
print "$ds: successfully created disk '$res->{$ds}'\n";
-   } elsif ($volid =~ $NEW_DISK_RE) {
+   } elsif ($volid =~ $PVE::QemuServer::Drive::NEW_DISK_RE) {
my ($storeid, $size) = ($2 || $default_storage, $3);
die "no storage ID specified (and no default storage)\n" if 
!$storeid;
 
@@ -1633,7 +1631,7 @@ my $update_vm_api  = sub {
return if defined($volname) && $volname eq 'cloudinit';
 
my $format;
-   if ($volid =~ $NEW_DISK_RE) {
+   if ($volid =~ $PVE::QemuServer::Drive::NEW_DISK_RE) {
$storeid = $2;
$format = $drive->{format} || 
PVE::Storage::storage_default_format($storecfg, $storeid);
} else {
diff --git a/PVE/QemuServer/Drive.pm b/PVE/QemuServer/Drive.pm
index f3fbaaa..3a27a6e 100644
--- a/PVE/QemuServer/Drive.pm
+++ b/PVE/QemuServer/Drive.pm
@@ -36,6 +36,7 @@ my $MAX_SCSI_DISKS = 31;
 my $MAX_VIRTIO_DISKS = 16;
 our $MAX_SATA_DISKS = 6;
 our $MAX_UNUSED_DISKS = 256;
+our $NEW_DISK_RE = qr!^(([^/:\s]+):)?(\d+(\.\d+)?)$!;
 
 our $drivedesc_hash;
 # Schema when disk allocation is possible.
-- 
2.39.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH v6 qemu-server 3/4] drive: Create get_scsi_devicetype

2023-12-05 Thread Hannes Duerr
Encapsulation of the functionality for determining the scsi device type
in a new function for reusability in QemuServer/Drive.pm

Signed-off-by: Hannes Duerr 
---
 PVE/QemuServer.pm   | 29 -
 PVE/QemuServer/Drive.pm | 35 ++-
 2 files changed, 38 insertions(+), 26 deletions(-)
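
For reference, a minimal sketch of how the new helper is called (mirroring
the call site in the hunk below); the returned device type is one of 'hd',
'cd', 'block' or 'generic':

    my $devicetype = PVE::QemuServer::Drive::get_scsi_devicetype(
        $drive, $storecfg, $machine_version);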

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 7e69924..b3e651e 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -53,7 +53,7 @@ use PVE::QemuServer::Helpers qw(config_aware_timeout 
min_version windows_version
 use PVE::QemuServer::Cloudinit;
 use PVE::QemuServer::CGroup;
 use PVE::QemuServer::CPUConfig qw(print_cpu_device get_cpu_options);
-use PVE::QemuServer::Drive qw(is_valid_drivename drive_is_cloudinit 
drive_is_cdrom drive_is_read_only parse_drive print_drive path_is_scsi);
+use PVE::QemuServer::Drive qw(is_valid_drivename drive_is_cloudinit 
drive_is_cdrom drive_is_read_only parse_drive print_drive);
 use PVE::QemuServer::Machine;
 use PVE::QemuServer::Memory qw(get_current_memory);
 use PVE::QemuServer::Monitor qw(mon_cmd);
@@ -1409,31 +1409,10 @@ sub print_drivedevice_full {
 
my ($maxdev, $controller, $controller_prefix) = scsihw_infos($conf, 
$drive);
my $unit = $drive->{index} % $maxdev;
-   my $devicetype = 'hd';
-   my $path = '';
-   if (drive_is_cdrom($drive)) {
-   $devicetype = 'cd';
-   } else {
-   if ($drive->{file} =~ m|^/|) {
-   $path = $drive->{file};
-   if (my $info = path_is_scsi($path)) {
-   if ($info->{type} == 0 && $drive->{scsiblock}) {
-   $devicetype = 'block';
-   } elsif ($info->{type} == 1) { # tape
-   $devicetype = 'generic';
-   }
-   }
-   } else {
-$path = PVE::Storage::path($storecfg, $drive->{file});
-   }
 
-   # for compatibility only, we prefer scsi-hd (#2408, #2355, #2380)
-   my $version = extract_version($machine_type, kvm_user_version());
-   if ($path =~ m/^iscsi\:\/\// &&
-  !min_version($version, 4, 1)) {
-   $devicetype = 'generic';
-   }
-   }
+   my $machine_version = extract_version($machine_type, 
kvm_user_version());
+   my $devicetype  = PVE::QemuServer::Drive::get_scsi_devicetype(
+   $drive, $storecfg, $machine_version);
 
if (!$conf->{scsihw} || $conf->{scsihw} =~ m/^lsi/ || $conf->{scsihw} 
eq 'pvscsi') {
$device = 
"scsi-$devicetype,bus=$controller_prefix$controller.0,scsi-id=$unit";
diff --git a/PVE/QemuServer/Drive.pm b/PVE/QemuServer/Drive.pm
index 3a27a6e..5747356 100644
--- a/PVE/QemuServer/Drive.pm
+++ b/PVE/QemuServer/Drive.pm
@@ -17,9 +17,9 @@ is_valid_drivename
 drive_is_cloudinit
 drive_is_cdrom
 drive_is_read_only
+get_scsi_devicetype
 parse_drive
 print_drive
-path_is_scsi
 );
 
 our $QEMU_FORMAT_RE = qr/raw|cow|qcow|qcow2|qed|vmdk|cloop/;
@@ -824,4 +824,37 @@ sub path_is_scsi {
 return $res;
 }
 
+sub get_scsi_devicetype {
+my ($drive, $storecfg, $machine_version) = @_;
+
+my $devicetype = 'hd';
+my $path = '';
+if (drive_is_cdrom($drive)) {
+   $devicetype = 'cd';
+} else {
+   if ($drive->{file} =~ m|^/|) {
+   $path = $drive->{file};
+   if (my $info = path_is_scsi($path)) {
+   if ($info->{type} == 0 && $drive->{scsiblock}) {
+   $devicetype = 'block';
+   } elsif ($info->{type} == 1) { # tape
+   $devicetype = 'generic';
+   }
+   }
+   } elsif ($drive->{file} =~ $NEW_DISK_RE){
+   # special syntax cannot be parsed to path
+   return $devicetype;
+   } else {
+   $path = PVE::Storage::path($storecfg, $drive->{file});
+   }
+
+   # for compatibility only, we prefer scsi-hd (#2408, #2355, #2380)
+   if ($path =~ m/^iscsi\:\/\// &&
+   !PVE::QemuServer::Helpers::min_version($machine_version, 4, 1)) {
+   $devicetype = 'generic';
+   }
+}
+
+return $devicetype;
+}
 1;
-- 
2.39.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH v6 qemu-server 4/4] fix #4957: add vendor and product information passthrough for SCSI-Disks

2023-12-05 Thread Hannes Duerr
adds vendor and product information for SCSI devices to the json schema
and checks in the VM create/update API call if it is possible to add
these to QEMU as a device option

Signed-off-by: Hannes Duerr 
---
 PVE/API2/Qemu.pm| 38 ++
 PVE/QemuServer.pm   | 13 -
 PVE/QemuServer/Drive.pm | 24 
 3 files changed, 74 insertions(+), 1 deletion(-)
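
As a rough illustration (the values and the bus/addressing options are made
up and depend on the configured SCSI controller), the generated QEMU device
string simply gains the two new properties:

    -device scsi-hd,bus=scsihw0.0,...,vendor=ACME,product=ExampleDisk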

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 9e3cfb5..e0fbb3c 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -696,6 +696,33 @@ my $check_vm_modify_config_perm = sub {
 return 1;
 };
 
+sub assert_scsi_feature_compatibility {
+my ($opt, $conf, $storecfg, $drive_attributes) = @_;
+
+my $drive = PVE::QemuServer::Drive::parse_drive($opt, $drive_attributes);
+
+my $machine_type = PVE::QemuServer::get_vm_machine($conf, undef, 
$conf->{arch});
+my $machine_version = PVE::QemuServer::extract_version(
+   $machine_type, PVE::QemuServer::kvm_user_version());
+my $drivetype = PVE::QemuServer::Drive::get_scsi_devicetype(
+   $drive, $storecfg, $machine_version);
+
+if ($drivetype ne 'hd' && $drivetype ne 'cd') {
+   if ($drive->{product}) {
+   raise_param_exc({
+   product => "Passing of product information is only supported 
for".
+   "'scsi-hd' and 'scsi-cd' devices (e.g. not pass-through)."
+   });
+   }
+   if ($drive->{vendor}) {
+   raise_param_exc({
+   vendor => "Passing of vendor information is only supported for".
+   "'scsi-hd' and 'scsi-cd' devices (e.g. not pass-through)."
+   });
+   }
+}
+}
+
 __PACKAGE__->register_method({
 name => 'vmlist',
 path => '',
@@ -1013,6 +1040,12 @@ __PACKAGE__->register_method({
my $conf = $param;
my $arch = PVE::QemuServer::get_vm_arch($conf);
 
+   for my $opt (sort keys $param->%*) {
+   next if $opt !~ m/^scsi\d+$/;
+   assert_scsi_feature_compatibility(
+   $opt, $conf, $storecfg, $param->{$opt});
+   }
+
$conf->{meta} = PVE::QemuServer::new_meta_info_string();
 
my $vollist = [];
@@ -1833,6 +1866,11 @@ my $update_vm_api  = sub {
PVE::QemuServer::vmconfig_register_unused_drive($storecfg, 
$vmid, $conf, PVE::QemuServer::parse_drive($opt, $conf->{pending}->{$opt}))
if defined($conf->{pending}->{$opt});
 
+   if ($opt =~ m/^scsi\d+$/) {
+   assert_scsi_feature_compatibility(
+   $opt, $conf, $storecfg, $param->{$opt});
+   }
+
my (undef, $created_opts) = $create_disks->(
$rpcenv,
$authuser,
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index b3e651e..3a4c30d 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -1218,7 +1218,8 @@ sub kvm_user_version {
 return $kvm_user_version->{$binary};
 
 }
-my sub extract_version {
+
+our sub extract_version {
 my ($machine_type, $version) = @_;
 $version = kvm_user_version() if !defined($version);
 return PVE::QemuServer::Machine::extract_version($machine_type, $version)
@@ -1427,6 +1428,16 @@ sub print_drivedevice_full {
}
$device .= ",wwn=$drive->{wwn}" if $drive->{wwn};
 
+   # only scsi-hd and scsi-cd support passing vendor and product 
information
+   if ($devicetype eq 'hd' || $devicetype eq 'cd') {
+   if (my $vendor = $drive->{vendor}) {
+   $device .= ",vendor=$vendor";
+   }
+   if (my $product = $drive->{product}) {
+   $device .= ",product=$product";
+   }
+   }
+
 } elsif ($drive->{interface} eq 'ide' || $drive->{interface} eq 'sata') {
my $maxdev = ($drive->{interface} eq 'sata') ? 
$PVE::QemuServer::Drive::MAX_SATA_DISKS : 2;
my $controller = int($drive->{index} / $maxdev);
diff --git a/PVE/QemuServer/Drive.pm b/PVE/QemuServer/Drive.pm
index 5747356..82d117e 100644
--- a/PVE/QemuServer/Drive.pm
+++ b/PVE/QemuServer/Drive.pm
@@ -163,6 +163,26 @@ my %iothread_fmt = ( iothread => {
optional => 1,
 });
 
+my %product_fmt = (
+product => {
+   type => 'string',
+   pattern => '[A-Za-z0-9\-_\s]{,40}',
+   format_description => 'product',
+   description => "The drive's product name, up to 40 bytes long.",
+   optional => 1,
+},
+);
+
+my %vendor_fmt = (
+vendor => {
+   type => 'string',

[pve-devel] [PATCH v6 qemu-server 0/4] fix #4957: add vendor and product information passthrough for SCSI-Disks

2023-12-05 Thread Hannes Duerr
changes in v3:
- splitup into preparation and fix patch
- move get_scsi_devicetype into QemuServer/Drive.pm
- refactor check_scsi_feature_compatibility to
  assert_scsi_feature_compatibility
- assert_scsi_feature_compatibility before creating the device
- handle 'local-lvm:' syntax in get_scsi_devicetype
- fix style issues

changes in v4:
- create assert_scsi_feature_compatibility() in API2/Qemu.pm
- divide the preparation into smaller steps
- remove or harden brittle regex
- fix wrong storagename assumption

changes in v5:
- fix copy/paste mistake

changes in v6:
- add whitespace to allowed characters for vendor and product
  information
- fix undefined subroutine errors
- fix nits

Hannes Duerr (4):
  Move path_is_scsi to QemuServer/Drive.pm
  Move NEW_DISK_RE to QemuServer/Drive.pm
  drive: Create get_scsi_devicetype
  fix #4957: add vendor and product information passthrough for
SCSI-Disks

 PVE/API2/Qemu.pm|  48 ++--
 PVE/QemuServer.pm   | 100 +
 PVE/QemuServer/Drive.pm | 121 
 3 files changed, 178 insertions(+), 91 deletions(-)

-- 
2.39.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH v2 qemu-server pve-storage 0/2] fix #1611: implement import of base-images for LVM-thin Storage

2023-12-07 Thread Hannes Duerr


if a base-image is to be migrated to a lvm-thin storage, a new
vm-image is allocated on the target side, then the data is written
and afterwards the image is converted to a base-image
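
For context, the storage-level transfer behind this corresponds roughly to
an export/import stream like the following (storage, volume and host names
are placeholders):

    pvesm export lvmthin:base-100-disk-0 raw+size - \
      | ssh root@target pvesm import lvmthin:base-100-disk-0 raw+size - --allow-rename 1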

qemu-server:

Hannes Duerr (1):
  migration: secure and use source volume names for cleanup

 PVE/QemuMigrate.pm | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)


pve-storage:

Hannes Duerr (1):
  fix #1611: implement import of base-images for LVM-thin Storage

 src/PVE/Storage/LvmThinPlugin.pm | 60 
 1 file changed, 60 insertions(+)


Summary over all repositories:
  2 files changed, 63 insertions(+), 2 deletions(-)

-- 
2.39.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH v2 qemu-server pve-storage 1/2] migration: secure and use source volume names for cleanup

2023-12-07 Thread Hannes Duerr
During migration, the volume names may change if the name is already in
use at the target location. We therefore want to save the original names
before the migration so that we can clean up the original volumes
afterwards.

Signed-off-by: Hannes Duerr 
---
 PVE/QemuMigrate.pm | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index b87e47a..6c9e762 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -632,6 +632,7 @@ sub sync_offline_local_volumes {
 
 my $local_volumes = $self->{local_volumes};
 my @volids = $self->filter_local_volumes('offline', 0);
+$self->{source_volumes} = \@volids;
 
 my $storecfg = $self->{storecfg};
 my $opts = $self->{opts};
@@ -1584,10 +1585,10 @@ sub phase3_cleanup {
$self->{errors} = 1;
 }
 
+
 # always deactivate volumes - avoid lvm LVs to be active on several nodes
 eval {
-   my $vollist = PVE::QemuServer::get_vm_volumes($conf);
-   PVE::Storage::deactivate_volumes($self->{storecfg}, $vollist);
+   PVE::Storage::deactivate_volumes($self->{storecfg}, 
$self->{source_volumes});
 };
 if (my $err = $@) {
$self->log('err', $err);
-- 
2.39.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH v2 qemu-server pve-storage 2/2] fix #1611: implement import of base-images for LVM-thin Storage

2023-12-07 Thread Hannes Duerr
for base images we call the volume_import of the parent plugin and pass
it as vm-image instead of base-image, then convert it back as base-image

Signed-off-by: Hannes Duerr 
---
 src/PVE/Storage/LvmThinPlugin.pm | 60 
 1 file changed, 60 insertions(+)

diff --git a/src/PVE/Storage/LvmThinPlugin.pm b/src/PVE/Storage/LvmThinPlugin.pm
index 1d2e37c..6c95919 100644
--- a/src/PVE/Storage/LvmThinPlugin.pm
+++ b/src/PVE/Storage/LvmThinPlugin.pm
@@ -383,6 +383,66 @@ sub volume_has_feature {
 return undef;
 }
 
+sub volume_import {
+my ($class, $scfg, $storeid, $fh, $volname, $format, $snapshot, 
$base_snapshot, $with_snapshots, $allow_rename) = @_;
+die "volume import format $format not available for $class\n"
+   if $format ne 'raw+size';
+die "cannot import volumes together with their snapshots in $class\n"
+   if $with_snapshots;
+die "cannot import an incremental stream in $class\n" if 
defined($base_snapshot);
+
+my ($vtype, $name, $vmid, $basename, $basevmid, $isBase, $file_format) =
+   $class->parse_volname($volname);
+die "cannot import format $format into a file of format $file_format\n"
+   if $file_format ne 'raw';
+
+my $oldbasename;
+if (!$isBase) {
+   ($storeid, $volname) =  split (/:/, $class->SUPER::volume_import(
+   $scfg,
+   $storeid,
+   $fh,
+   $name,
+   $format,
+   $snapshot,
+   $base_snapshot,
+   $with_snapshots,
+   $allow_rename
+   ));
+} else {
+   my $vg = $scfg->{vgname};
+   my $lvs = PVE::Storage::LVMPlugin::lvm_list_volumes($vg);
+   if ($lvs->{$vg}->{$volname}) {
+   die "volume $vg/$volname already exists\n" if !$allow_rename;
+   warn "volume $vg/$volname already exists - importing with a 
different name\n";
+
+   $volname = $class->find_free_diskname($storeid, $scfg, $vmid);
+   } else {
+   $oldbasename = $volname;
+   $volname =~ s/base/vm/;
+   }
+
+   ($storeid, $volname) =  split (/:/, $class->SUPER::volume_import(
+   $scfg,
+   $storeid,
+   $fh,
+   $volname,
+   $format,
+   $snapshot,
+   $base_snapshot,
+   $with_snapshots,
+   $allow_rename
+   ));
+
+   $volname = $class->create_base($storeid, $scfg, $volname);
+   if ($oldbasename) {
+   $volname= $oldbasename;
+   }
+}
+
+return "$storeid:$volname";
+}
+
 # used in LVMPlugin->volume_import
 sub volume_import_write {
 my ($class, $input_fh, $output_file) = @_;
-- 
2.39.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH pve-docs] firewall: fix link to suricata page

2023-12-14 Thread Hannes Duerr
Signed-off-by: Hannes Duerr 
---
 pve-firewall.adoc | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/pve-firewall.adoc b/pve-firewall.adoc
index 836a51c..a5e40f9 100644
--- a/pve-firewall.adoc
+++ b/pve-firewall.adoc
@@ -562,7 +562,7 @@ and add `ip_conntrack_ftp` to `/etc/modules` (so that it 
works after a reboot).
 Suricata IPS integration
 
 
-If you want to use the https://suricata-ids.org/[Suricata IPS]
+If you want to use the https://suricata.io/[Suricata IPS]
 (Intrusion Prevention System), it's possible.
 
 Packets will be forwarded to the IPS only after the firewall ACCEPTed
-- 
2.39.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH qemu-server/storage v3 0/2] fix #1611: implement import of base-images for LVM-thin Storage

2023-12-19 Thread Hannes Duerr
Changes in V2:
* restructure and remove duplication
* fix deactivation of volumes after migration


Changes in V3:
* fix nits
* remove unnecessary oldname override
* deactivate not only offline volumes, but all of them

qemu-server:

Hannes Duerr (1):
  migration: secure and use source volume names for deactivation

 PVE/QemuMigrate.pm | 8 ++--
 1 file changed, 6 insertions(+), 2 deletions(-)


pve-storage:

Hannes Duerr (1):
  fix #1611: implement import of base-images for LVM-thin Storage

 src/PVE/Storage/LvmThinPlugin.pm | 51 
 1 file changed, 51 insertions(+)


Summary over all repositories:
  2 files changed, 57 insertions(+), 0 deletions(-)

-- 
Generated by git-murpp 0.5.0


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH pve-storage v3 2/2] fix #1611: implement import of base-images for LVM-thin Storage

2023-12-19 Thread Hannes Duerr
for base images we call the volume_import of the parent plugin and pass
it as vm-image instead of base-image, then convert it back as base-image


Signed-off-by: Hannes Duerr 
---
 src/PVE/Storage/LvmThinPlugin.pm | 51 
 1 file changed, 51 insertions(+)

diff --git a/src/PVE/Storage/LvmThinPlugin.pm b/src/PVE/Storage/LvmThinPlugin.pm
index 1d2e37c..2986b72 100644
--- a/src/PVE/Storage/LvmThinPlugin.pm
+++ b/src/PVE/Storage/LvmThinPlugin.pm
@@ -9,6 +9,7 @@ use PVE::Tools qw(run_command trim);
 use PVE::Storage::Plugin;
 use PVE::Storage::LVMPlugin;
 use PVE::JSONSchema qw(get_standard_option);
+use Data::Dumper;
 
 # see: man lvmthin
 # lvcreate -n ThinDataLV -L LargeSize VG
@@ -383,6 +384,56 @@ sub volume_has_feature {
 return undef;
 }
 
+sub volume_import {
+my ($class, $scfg, $storeid, $fh, $volname, $format, $snapshot, 
$base_snapshot, $with_snapshots, $allow_rename) = @_;
+
+my ($vtype, $name, $vmid, $basename, $basevmid, $isBase, $file_format) =
+   $class->parse_volname($volname);
+
+if (!$isBase) {
+   return $class->SUPER::volume_import(
+   $scfg,
+   $storeid,
+   $fh,
+   $volname,
+   $format,
+   $snapshot,
+   $base_snapshot,
+   $with_snapshots,
+   $allow_rename
+   );
+} else {
+   my $tempname;
+   my $vg = $scfg->{vgname};
+   my $lvs = PVE::Storage::LVMPlugin::lvm_list_volumes($vg);
+   if ($lvs->{$vg}->{$volname}) {
+   die "volume $vg/$volname already exists\n" if !$allow_rename;
+   warn "volume $vg/$volname already exists - importing with a 
different name\n";
+
+   $tempname = $class->find_free_diskname($storeid, $scfg, $vmid);
+   } else {
+   $tempname = $volname;
+   $tempname =~ s/base/vm/;
+   }
+
+   ($storeid,my $newname) = 
PVE::Storage::parse_volume_id($class->SUPER::volume_import(
+   $scfg,
+   $storeid,
+   $fh,
+   $tempname,
+   $format,
+   $snapshot,
+   $base_snapshot,
+   $with_snapshots,
+   $allow_rename
+   ));
+
+   $volname = $class->create_base($storeid, $scfg, $newname);
+}
+
+return "$storeid:$volname";
+}
+
 # used in LVMPlugin->volume_import
 sub volume_import_write {
 my ($class, $input_fh, $output_file) = @_;
-- 
2.39.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH qemu-server v3 1/2] migration: secure and use source volume names for deactivation

2023-12-19 Thread Hannes Duerr
During migration, the volume names may change if the name is already in
use at the target location. We therefore want to save the original names
before the migration so that we can deactivate the original volumes
afterwards.

Signed-off-by: Hannes Duerr 
---
 PVE/QemuMigrate.pm | 8 ++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index b87e47a..ec4710d 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -744,6 +744,11 @@ sub phase1 {
 $conf->{lock} = 'migrate';
 PVE::QemuConfig->write_config($vmid, $conf);
 
+PVE::QemuConfig->foreach_volume($conf, sub {
+   my ($ds, $drive) = @_;
+push $self->{source_volumes}->@*, $drive->{file};
+});
+
 $self->scan_local_volumes($vmid);
 
 # fix disk sizes to match their actual size and write changes,
@@ -1586,8 +1591,7 @@ sub phase3_cleanup {
 
 # always deactivate volumes - avoid lvm LVs to be active on several nodes
 eval {
-   my $vollist = PVE::QemuServer::get_vm_volumes($conf);
-   PVE::Storage::deactivate_volumes($self->{storecfg}, $vollist);
+   PVE::Storage::deactivate_volumes($self->{storecfg}, 
$self->{source_volumes});
 };
 if (my $err = $@) {
$self->log('err', $err);
-- 
2.39.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH qemu-server v4 1/2] migration: secure and use source volume names for deactivation

2023-12-19 Thread Hannes Duerr
During migration, the volume names may change if the name is already in
use at the target location. We therefore want to save the original names
so that we can deactivate the original volumes afterwards.

Signed-off-by: Hannes Duerr 
---
 PVE/QemuMigrate.pm | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index b87e47a..8d9b35a 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -1455,6 +1455,8 @@ sub phase3_cleanup {
 
 my $tunnel = $self->{tunnel};
 
+my $sourcevollist = PVE::QemuServer::get_vm_volumes($conf);
+
 if ($self->{volume_map} && !$self->{opts}->{remote}) {
my $target_drives = $self->{target_drive};
 
@@ -1586,8 +1588,7 @@ sub phase3_cleanup {
 
 # always deactivate volumes - avoid lvm LVs to be active on several nodes
 eval {
-   my $vollist = PVE::QemuServer::get_vm_volumes($conf);
-   PVE::Storage::deactivate_volumes($self->{storecfg}, $vollist);
+   PVE::Storage::deactivate_volumes($self->{storecfg}, $sourcevollist);
 };
 if (my $err = $@) {
$self->log('err', $err);
-- 
2.39.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH qemu-server/storage v4 0/2] fix #1611: implement import of base-images for LVM-thin Storage

2023-12-19 Thread Hannes Duerr
if a base-image is to be migrated to a lvm-thin storage, a new
vm-image is allocated on the target side, then the data is written
and afterwards the image is converted to a base-image


Changes in V2:
* restructure and remove duplication
* fix deactivation of volumes after migration

Changes in V3:
* fix nits
* remove unnecessary oldname override
* deactivate not only offline volumes, but all of them

Changes in V4:
* remove debug stuff
* remove unnecessary key in $self

qemu-server:

Hannes Duerr (1):
  migration: secure and use source volume names for deactivation

 PVE/QemuMigrate.pm | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)


pve-storage:

Hannes Duerr (1):
  fix #1611: implement import of base-images for LVM-thin Storage

 src/PVE/Storage/LvmThinPlugin.pm | 50 
 1 file changed, 50 insertions(+)


Summary over all repositories:
  2 files changed, 53 insertions(+), 0 deletions(-)

-- 
Generated by git-murpp 0.5.0


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH pve-storage v4 2/2] fix #1611: implement import of base-images for LVM-thin Storage

2023-12-19 Thread Hannes Duerr
for base images we call the volume_import of the parent plugin and pass
it as vm-image instead of base-image, then convert it back as base-image

Signed-off-by: Hannes Duerr 
---
 src/PVE/Storage/LvmThinPlugin.pm | 50 
 1 file changed, 50 insertions(+)
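
Sketch of the intended name flow (volume names are just examples):

    base-100-disk-0        # requested name (base image)
     -> vm-100-disk-0      # temporary name passed to SUPER::volume_import
     -> create_base(...)   # convert the imported LV back on the target
     -> base-100-disk-0    # final name returned to the caller

If the name is already taken and $allow_rename is set, the next free
diskname is used for the temporary volume instead.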

diff --git a/src/PVE/Storage/LvmThinPlugin.pm b/src/PVE/Storage/LvmThinPlugin.pm
index 1d2e37c..96f619b 100644
--- a/src/PVE/Storage/LvmThinPlugin.pm
+++ b/src/PVE/Storage/LvmThinPlugin.pm
@@ -383,6 +383,56 @@ sub volume_has_feature {
 return undef;
 }
 
+sub volume_import {
+my ($class, $scfg, $storeid, $fh, $volname, $format, $snapshot, 
$base_snapshot, $with_snapshots, $allow_rename) = @_;
+
+my ($vtype, $name, $vmid, $basename, $basevmid, $isBase, $file_format) =
+   $class->parse_volname($volname);
+
+if (!$isBase) {
+   return $class->SUPER::volume_import(
+   $scfg,
+   $storeid,
+   $fh,
+   $volname,
+   $format,
+   $snapshot,
+   $base_snapshot,
+   $with_snapshots,
+   $allow_rename
+   );
+} else {
+   my $tempname;
+   my $vg = $scfg->{vgname};
+   my $lvs = PVE::Storage::LVMPlugin::lvm_list_volumes($vg);
+   if ($lvs->{$vg}->{$volname}) {
+   die "volume $vg/$volname already exists\n" if !$allow_rename;
+   warn "volume $vg/$volname already exists - importing with a 
different name\n";
+
+   $tempname = $class->find_free_diskname($storeid, $scfg, $vmid);
+   } else {
+   $tempname = $volname;
+   $tempname =~ s/base/vm/;
+   }
+
+   ($storeid,my $newname) = 
PVE::Storage::parse_volume_id($class->SUPER::volume_import(
+   $scfg,
+   $storeid,
+   $fh,
+   $tempname,
+   $format,
+   $snapshot,
+   $base_snapshot,
+   $with_snapshots,
+   $allow_rename
+   ));
+
+   $volname = $class->create_base($storeid, $scfg, $newname);
+}
+
+return "$storeid:$volname";
+}
+
 # used in LVMPlugin->volume_import
 sub volume_import_write {
 my ($class, $input_fh, $output_file) = @_;
-- 
2.39.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH pve-manager 1/1] report: add packet counter to iptables output

2024-01-03 Thread Hannes Duerr
Signed-off-by: Hannes Duerr 
---

The additional information can help with debugging firewall rules, as
one can see how many times a specific rule was hit.
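
For illustration, with '-c' every rule is prefixed by a [packets:bytes]
counter pair (numbers below are made up):

    [42:33600] -A PVEFW-HOST-IN -i lo -j ACCEPT

so a rule that never matched shows up as [0:0].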

 PVE/Report.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/PVE/Report.pm b/PVE/Report.pm
index 2024285e..10b28c79 100644
--- a/PVE/Report.pm
+++ b/PVE/Report.pm
@@ -85,7 +85,7 @@ my $init_report_cmds = sub {
cmds => [
sub { dir2text('/etc/pve/firewall/', '.*fw') },
'cat /etc/pve/local/host.fw',
-   'iptables-save',
+   'iptables-save -c',
],
},
cluster => {
-- 
2.39.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH qemu-server v7 0/1] fix #4957: add vendor and product information passthrough for SCSI-Disks

2024-01-10 Thread Hannes Duerr
changes in v2:
- when calling the API to create/update a VM, check whether the devices
are "scsi-hd" or "scsi-cd" devices,where there is the option to add
vendor and product information, if not error out
- change the format in product_fmt and vendor_fmt to a pattern that only allows
40 characters consisting of upper and lower case letters, numbers and '-' and
'_'.

changes in v3:
- splitup into preparation and fix patch
- move get_scsi_devicetype into QemuServer/Drive.pm
- refactor check_scsi_feature_compatibility to
  assert_scsi_feature_compatibility
- assert_scsi_feature_compatibility before creating the device
- handle 'local-lvm:' syntax in get_scsi_devicetype
- fix style issues

changes in v4:
- create assert_scsi_feature_compatibility() in API2/Qemu.pm
- divide the preparation into smaller steps
- remove or harden brittle regex
- fix wrong storagename assumption

changes in v5:
- fix copy/paste mistake

changes in v6:
- add whitespace to allowed characters for vendor and product
  information
- fix undefined subroutine errors
- fix nits

changes in v7:
- use PVE::QemuServer::Machine::extract_version() to avoid making the helper
public
- since the properties cannot be hotplugged, skip them during hotplug
- reduce the amount of allowed characters due to restrictions in qemu

qemu-server:

Hannes Duerr (1):
  fix #4957: add vendor and product information passthrough for
SCSI-Disks

 PVE/API2/Qemu.pm| 39 +++
 PVE/QemuServer.pm   | 12 
 PVE/QemuServer/Drive.pm | 24 
 3 files changed, 75 insertions(+)


Summary over all repositories:
  3 files changed, 75 insertions(+), 0 deletions(-)

-- 
Generated by git-murpp 0.5.0


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH qemu-server v7 1/1] fix #4957: add vendor and product information passthrough for SCSI-Disks

2024-01-10 Thread Hannes Duerr
adds vendor and product information for SCSI devices to the json schema
and checks in the VM create/update API call if it is possible to add
these to QEMU as a device option

Signed-off-by: Hannes Duerr 
---
 PVE/API2/Qemu.pm| 39 +++
 PVE/QemuServer.pm   | 12 
 PVE/QemuServer/Drive.pm | 24 
 3 files changed, 75 insertions(+)
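
A hedged usage sketch (VM ID, storage and the vendor/product values are
placeholders, not taken from this patch):

    qm set 100 --scsi1 local-lvm:vm-100-disk-1,vendor=ACME,product=ExampleDisk

Since only scsi-hd and scsi-cd devices support these properties, the same
call is rejected for pass-through (scsi-block/scsi-generic) drives.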

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 9e3cfb5..8808ac5 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -696,6 +696,33 @@ my $check_vm_modify_config_perm = sub {
 return 1;
 };
 
+sub assert_scsi_feature_compatibility {
+my ($opt, $conf, $storecfg, $drive_attributes) = @_;
+
+my $drive = PVE::QemuServer::Drive::parse_drive($opt, $drive_attributes);
+
+my $machine_type = PVE::QemuServer::get_vm_machine($conf, undef, 
$conf->{arch});
+my $machine_version = PVE::QemuServer::Machine::extract_version(
+   $machine_type, PVE::QemuServer::kvm_user_version());
+my $drivetype = PVE::QemuServer::Drive::get_scsi_devicetype(
+   $drive, $storecfg, $machine_version);
+
+if ($drivetype ne 'hd' && $drivetype ne 'cd') {
+   if ($drive->{product}) {
+   raise_param_exc({
+   product => "Passing of product information is only supported 
for".
+   "'scsi-hd' and 'scsi-cd' devices (e.g. not pass-through)."
+   });
+   }
+   if ($drive->{vendor}) {
+   raise_param_exc({
+   vendor => "Passing of vendor information is only supported for".
+   "'scsi-hd' and 'scsi-cd' devices (e.g. not pass-through)."
+   });
+   }
+}
+}
+
 __PACKAGE__->register_method({
 name => 'vmlist',
 path => '',
@@ -1013,6 +1040,13 @@ __PACKAGE__->register_method({
my $conf = $param;
my $arch = PVE::QemuServer::get_vm_arch($conf);
 
+
+   for my $opt (sort keys $param->%*) {
+   next if $opt !~ m/^scsi\d+$/;
+   assert_scsi_feature_compatibility(
+   $opt, $conf, $storecfg, $param->{$opt});
+   }
+
$conf->{meta} = PVE::QemuServer::new_meta_info_string();
 
my $vollist = [];
@@ -1833,6 +1867,11 @@ my $update_vm_api  = sub {
PVE::QemuServer::vmconfig_register_unused_drive($storecfg, 
$vmid, $conf, PVE::QemuServer::parse_drive($opt, $conf->{pending}->{$opt}))
if defined($conf->{pending}->{$opt});
 
+   if ($opt =~ m/^scsi\d+$/) {
+   assert_scsi_feature_compatibility(
+   $opt, $conf, $storecfg, $param->{$opt});
+   }
+
my (undef, $created_opts) = $create_disks->(
$rpcenv,
$authuser,
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index a6a118b..9ec4591 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -1427,6 +1427,16 @@ sub print_drivedevice_full {
}
$device .= ",wwn=$drive->{wwn}" if $drive->{wwn};
 
+   # only scsi-hd and scsi-cd support passing vendor and product 
information
+   if ($devicetype eq 'hd' || $devicetype eq 'cd') {
+   if (my $vendor = $drive->{vendor}) {
+   $device .= ",vendor=$vendor";
+   }
+   if (my $product = $drive->{product}) {
+   $device .= ",product=$product";
+   }
+   }
+
 } elsif ($drive->{interface} eq 'ide' || $drive->{interface} eq 'sata') {
my $maxdev = ($drive->{interface} eq 'sata') ? 
$PVE::QemuServer::Drive::MAX_SATA_DISKS : 2;
my $controller = int($drive->{index} / $maxdev);
@@ -5359,8 +5369,10 @@ sub vmconfig_update_disk {
safe_string_ne($drive->{discard}, $old_drive->{discard}) ||
safe_string_ne($drive->{iothread}, $old_drive->{iothread}) 
||
safe_string_ne($drive->{queues}, $old_drive->{queues}) ||
+   safe_string_ne($drive->{product}, $old_drive->{product}) ||
safe_string_ne($drive->{cache}, $old_drive->{cache}) ||
safe_string_ne($drive->{ssd}, $old_drive->{ssd}) ||
+   safe_string_ne($drive->{vendor}, $old_drive->{vendor}) ||
safe_string_ne($drive->{ro}, $old_drive->{ro})) {
die "skip\n";
}
diff --git a/PVE/QemuServer/Drive.pm b/PVE/QemuServer/Drive.pm
index 5747356..6064bea 100644
--- a/PVE/QemuServer/Drive.pm
+++ b/PVE/QemuServer/Drive.pm
@@ -163,6 +163,26 @@ my %iothread

[pve-devel] [PATCH pve-manager 1/1] add missing library packages

2024-01-12 Thread Hannes Duerr
Signed-off-by: Hannes Duerr 
---
 PVE/API2/APT.pm | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/PVE/API2/APT.pm b/PVE/API2/APT.pm
index f50a5347..54121ec2 100644
--- a/PVE/API2/APT.pm
+++ b/PVE/API2/APT.pm
@@ -788,9 +788,12 @@ __PACKAGE__->register_method({
libproxmox-backup-qemu0
libproxmox-rs-perl
libpve-access-control
+   libpve-cluster-api-perl
+   libpve-cluster-perl
libpve-common-perl
libpve-guest-common-perl
libpve-http-server-perl
+   libpve-notify-perl
libpve-rs-perl
libpve-storage-perl
libqb0
-- 
2.39.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH qemu-server 1/1] fix 1734: clone VM: if deactivation fails demote error to warning

2024-03-06 Thread Hannes Duerr
When a template with disks on LVM is cloned to another node, the storage
is first activated, then cloned and deactivated again after cloning.

However, if clones of this template are now created in parallel to other
nodes, it can happen that one of the tasks can no longer deactivate the
logical volume because it is still in use.  The reason for this is that
we use a shared lock.
Since the failed deactivation does not necessarily have consequences, we
downgrade the error to a warning, which means that the clone tasks will
continue to be completed successfully.

Signed-off-by: Hannes Duerr 
---
 PVE/API2/Qemu.pm | 9 -
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 69c5896..f1e88b8 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -48,6 +48,7 @@ use PVE::DataCenterConfig;
 use PVE::SSHInfo;
 use PVE::Replication;
 use PVE::StorageTunnel;
+use PVE::RESTEnvironment qw(log_warn);
 
 BEGIN {
 if (!$ENV{PVE_GENERATING_DOCS}) {
@@ -3820,7 +3821,13 @@ __PACKAGE__->register_method({
 
if ($target) {
# always deactivate volumes - avoid lvm LVs to be active on 
several nodes
-   PVE::Storage::deactivate_volumes($storecfg, $vollist, 
$snapname) if !$running;
+   eval {
+   PVE::Storage::deactivate_volumes($storecfg, $vollist, 
$snapname) if !$running;
+   };
+   my $err = $@;
+   if ($err) {
+   log_warn("$err\n");
+   }
PVE::Storage::deactivate_volumes($storecfg, $newvollist);
 
my $newconffile = PVE::QemuConfig->config_file($newid, 
$target);
-- 
2.39.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH qemu-server v2 1/1] fix 1734: clone VM: if deactivation fails demote error to warning

2024-03-06 Thread Hannes Duerr
When a template with disks on LVM is cloned to another node, the volumes
are first activated, then cloned and deactivated again after cloning.

However, if clones of this template are now created in parallel to other
nodes, it can happen that one of the tasks can no longer deactivate the
logical volume because it is still in use.  The reason for this is that
we use a shared lock.
Since the failed deactivation does not necessarily have consequences, we
downgrade the error to a warning, which means that the clone tasks will
continue to be completed successfully.

Signed-off-by: Hannes Duerr 
---
changes since v1:
- fix nits and spelling

 PVE/API2/Qemu.pm | 7 ++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 69c5896..1ff5abe 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -48,6 +48,7 @@ use PVE::DataCenterConfig;
 use PVE::SSHInfo;
 use PVE::Replication;
 use PVE::StorageTunnel;
+use PVE::RESTEnvironment qw(log_warn);
 
 BEGIN {
 if (!$ENV{PVE_GENERATING_DOCS}) {
@@ -3820,7 +3821,11 @@ __PACKAGE__->register_method({
 
if ($target) {
# always deactivate volumes - avoid lvm LVs to be active on 
several nodes
-   PVE::Storage::deactivate_volumes($storecfg, $vollist, 
$snapname) if !$running;
+   eval {
+   PVE::Storage::deactivate_volumes($storecfg, $vollist, 
$snapname) if !$running;
+   };
+   log_warn($@) if ($@);
+
PVE::Storage::deactivate_volumes($storecfg, $newvollist);
 
my $newconffile = PVE::QemuConfig->config_file($newid, 
$target);
-- 
2.39.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH qemu-server 1/1] snapshot: prohibit snapshot with ram if vm has a passthrough pci device

2024-03-19 Thread Hannes Duerr
When a snapshot is created with RAM, qemu attempts to save not only the
RAM content, but also the internal state of the PCI devices.

However, as not all drivers support this, this can lead to the device
drivers in the VM not being able to handle the saved state during the
restore/rollback and in conclusion the VM might crash. For this reason,
we now generally prohibit snapshots with RAM for VMs with passthrough
devices.

In the future, this prohibition can of course be relaxed for individual
drivers that we know support it, such as the vfio driver.

Signed-off-by: Hannes Duerr 
---
 PVE/API2/Qemu.pm | 10 ++
 1 file changed, 10 insertions(+)
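
For context, the new check triggers on any config key matching
/^hostpci\d+/, i.e. on entries such as (example PCI address):

    hostpci0: 0000:01:00.0,pcie=1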

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 40b6c30..0acd1c7 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -5101,6 +5101,16 @@ __PACKAGE__->register_method({
die "unable to use snapshot name 'pending' (reserved name)\n"
if lc($snapname) eq 'pending';
 
+   if ($param->{vmstate}) {
+   my $conf = PVE::QemuConfig->load_config($vmid);
+
+   for my $key (keys %$conf) {
+   next if $key !~ /^hostpci\d+/;
+   die "cannot snapshot VM with RAM due to passed-through PCI 
device(s), which lack"
+   ." the possibility to save/restore their internal state\n";
+   }
+   }
+
my $realcmd = sub {
PVE::Cluster::log_msg('info', $authuser, "snapshot VM $vmid: 
$snapname");
PVE::QemuConfig->snapshot_create($vmid, $snapname, 
$param->{vmstate},
-- 
2.39.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH pve-storage 1/1] storage/plugin: implement ref-counting for disknames in get_next_vm_diskname

2024-04-03 Thread Hannes Duerr
As Fabian has already mentioned here[0], there can be a race between two
parallel imports. More specifically, if both imports have --allow-rename
set and the desired name already exists, then it can happen that both
imports get the same name. The reason for this is that we currently only
check which names have already been assigned in the vm config and then
use the next free one, and do not check whether a running process has
already been assigned the same name. However, while writing and testing
the patch, I found that this is often not a problem, as
- in most cases the source VM config is locked and therefore only one
  process can generate a new name (migrate, clone, move disk, update
  config)
- a name must be specified and therefore no new name is generated (pvesm
  alloc)
- the timeframe for the race is very short.

At the same time, it is possible that I have not considered all edge
cases and that there are other cases where the race can occur,
especially with regard to remote_migrations. You can provoke the problem
with two parallel imports on a host where local-lvm:vm-100-disk-0
already exists:

pvesm import local-lvm:vm-100-disk-0 raw+size  --allow-rename 1

Now that I've looked into the problem a bit, I'm not sure this patch is
even necessary as it adds more complexity. So I wanted to ask for your
opinion, whether you think it makes sense to add this change or not.

The patch introduces a tmp file which stores the newly assigned disk
names and the pid of the process which requested the disk name. If a
second process is assigned the same name, it will see from the file that
the name has already been assigned to another process, and will take the
next available name. Reading and writing to the tmp file requires a lock
to prevent races.
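
Each line of the tmp file holds, in this order, the storage ID, the VM ID,
the reserved disk number and the PID of the reserving process, e.g. (made-up
values):

    local-lvm 100 2 12345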

[0] https://lists.proxmox.com/pipermail/pve-devel/2024-January/061526.html

Signed-off-by: Hannes Duerr 
---
 src/PVE/Storage/Plugin.pm | 99 ++-
 1 file changed, 86 insertions(+), 13 deletions(-)

diff --git a/src/PVE/Storage/Plugin.pm b/src/PVE/Storage/Plugin.pm
index 7456c8e..f76550a 100644
--- a/src/PVE/Storage/Plugin.pm
+++ b/src/PVE/Storage/Plugin.pm
@@ -10,8 +10,10 @@ use File::chdir;
 use File::Path;
 use File::Basename;
 use File::stat qw();
+use File::Copy;
 
 use PVE::Tools qw(run_command);
+use PVE::ProcFSTools;
 use PVE::JSONSchema qw(get_standard_option register_standard_option);
 use PVE::Cluster qw(cfs_register_file);
 
@@ -779,23 +781,94 @@ my $get_vm_disk_number = sub {
 sub get_next_vm_diskname {
 my ($disk_list, $storeid, $vmid, $fmt, $scfg, $add_fmt_suffix) = @_;
 
-$fmt //= '';
-my $prefix = ($fmt eq 'subvol') ? 'subvol' : 'vm';
-my $suffix = $add_fmt_suffix ? ".$fmt" : '';
+my $code = sub {
+   my $reserved_names_file = "/var/tmp/pve-reserved-volnames";
+   my $tmp_file = "/var/tmp/pve-reserved-volnames.tmp";
 
-my $disk_ids = {};
-foreach my $disk (@$disk_list) {
-   my $disknum = $get_vm_disk_number->($disk, $scfg, $vmid, $suffix);
-   $disk_ids->{$disknum} = 1 if defined($disknum);
-}
+   $fmt //= '';
+   my $prefix = ($fmt eq 'subvol') ? 'subvol' : 'vm';
+   my $suffix = $add_fmt_suffix ? ".$fmt" : '';
+   my $disk_id;
+   my $disk_ids = {};
 
-for (my $i = 0; $i < $MAX_VOLUMES_PER_GUEST; $i++) {
-   if (!$disk_ids->{$i}) {
-   return "$prefix-$vmid-disk-$i$suffix";
+   foreach my $disk (@$disk_list) {
+   my $disknum = $get_vm_disk_number->($disk, $scfg, $vmid, $suffix);
+   $disk_ids->{$disknum} = 1 if defined($disknum);
}
-}
 
-die "unable to allocate an image name for VM $vmid in storage '$storeid'\n"
+   for (my $i = 0; $i < $MAX_VOLUMES_PER_GUEST; $i++) {
+   if (!$disk_ids->{$i}) {
+   $disk_id = $i;
+   last;
+   }
+   }
+
+   if (! -e $reserved_names_file) {
+   my $create_h = IO::File->new($reserved_names_file, "w") ||
+   die "can't open or create'$reserved_names_file' - $!\n";
+   print $create_h "$storeid $vmid $disk_id $$";
+   $create_h->close;
+
+   return "$prefix-$vmid-disk-$disk_id$suffix";
+   } else {
+   my $collision;
+   my $pid;
+
+   my $in_h = IO::File->new($reserved_names_file, "r") ||
+   die "can't open or create'$reserved_names_file' - $!\n";
+   my $out_h = IO::File->new($tmp_file, "w") ||
+   die "can't open or create'$tmp_file' - $!\n";
+
+   # remove entries when the process does not exist anymore
+   while (my $line = <$in_h>) {
+

[pve-devel] [PATCH qemu-server 1/1] fix #5365: drive: add drive_is_cloudinit check to get_scsi_devicetype

2024-04-09 Thread Hannes Duerr
When we obtain the devicetype, we check whether it is a CD drive.
Cloudinit drives are always allocated CD drives, but if the drive has
not yet been allocated, the test fails because the cd attribute has not
yet been set.
We therefore now explicitly check whether it is a cloudinit
drive that has not yet been allocated.

Signed-off-by: Hannes Duerr 
---
 PVE/QemuServer/Drive.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/PVE/QemuServer/Drive.pm b/PVE/QemuServer/Drive.pm
index 34c6e87..c829bde 100644
--- a/PVE/QemuServer/Drive.pm
+++ b/PVE/QemuServer/Drive.pm
@@ -853,7 +853,7 @@ sub get_scsi_devicetype {
 
 my $devicetype = 'hd';
 my $path = '';
-if (drive_is_cdrom($drive)) {
+if (drive_is_cdrom($drive) || drive_is_cloudinit($drive)) {
$devicetype = 'cd';
 } else {
if ($drive->{file} =~ m|^/|) {
-- 
2.39.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH qemu-server v2 1/2] fix #5363: cloudinit: make creation of scsi cloudinit discs possible again

2024-04-10 Thread Hannes Duerr
Upon obtaining the device type, a check is performed to determine if it
is a CD drive. It is important to note that Cloudinit drives are always
assigned as CD drives. If the drive has not yet been allocated, the test
will fail due to the unset cd attribute.
To avoid this, an explicit check is now performed to determine if it is
a Cloudinit drive that has not yet been assigned.

The mentioned error was introduced by this patch:
https://lists.proxmox.com/pipermail/pve-devel/2024-January/061311.html

Signed-off-by: Hannes Duerr 
---
Changes since v1:
- fixed and rephrased the commit message
- added reference to the commit which introduced the bug

 PVE/QemuServer/Drive.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
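
For context, a not-yet-allocated cloud-init drive is specified with the
special <storage>:cloudinit syntax, e.g. (storage name is an example):

    qm set 100 --scsi1 local-lvm:cloudinit

At this point drive_is_cloudinit() already matches, while the cdrom/media
attribute is only set once the volume has actually been allocated.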

diff --git a/PVE/QemuServer/Drive.pm b/PVE/QemuServer/Drive.pm
index 34c6e87..c829bde 100644
--- a/PVE/QemuServer/Drive.pm
+++ b/PVE/QemuServer/Drive.pm
@@ -853,7 +853,7 @@ sub get_scsi_devicetype {
 
 my $devicetype = 'hd';
 my $path = '';
-if (drive_is_cdrom($drive)) {
+if (drive_is_cdrom($drive) || drive_is_cloudinit($drive)) {
$devicetype = 'cd';
 } else {
if ($drive->{file} =~ m|^/|) {
-- 
2.39.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH qemu-server v2 2/2] drive: improve readability to get_scsi_device_type

2024-04-10 Thread Hannes Duerr
Signed-off-by: Hannes Duerr 
---
 PVE/API2/Qemu.pm| 2 +-
 PVE/QemuServer.pm   | 2 +-
 PVE/QemuServer/Drive.pm | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 497987f..dc44dee 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -751,7 +751,7 @@ sub assert_scsi_feature_compatibility {
 my $machine_type = PVE::QemuServer::get_vm_machine($conf, undef, 
$conf->{arch});
 my $machine_version = PVE::QemuServer::Machine::extract_version(
$machine_type, PVE::QemuServer::kvm_user_version());
-my $drivetype = PVE::QemuServer::Drive::get_scsi_devicetype(
+my $drivetype = PVE::QemuServer::Drive::get_scsi_device_type(
$drive, $storecfg, $machine_version);
 
 if ($drivetype ne 'hd' && $drivetype ne 'cd') {
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 6e2c805..bd375a2 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -1413,7 +1413,7 @@ sub print_drivedevice_full {
my $unit = $drive->{index} % $maxdev;
 
my $machine_version = extract_version($machine_type, 
kvm_user_version());
-   my $devicetype  = PVE::QemuServer::Drive::get_scsi_devicetype(
+   my $devicetype  = PVE::QemuServer::Drive::get_scsi_device_type(
$drive, $storecfg, $machine_version);
 
if (!$conf->{scsihw} || $conf->{scsihw} =~ m/^lsi/ || $conf->{scsihw} 
eq 'pvscsi') {
diff --git a/PVE/QemuServer/Drive.pm b/PVE/QemuServer/Drive.pm
index c829bde..6a4fafd 100644
--- a/PVE/QemuServer/Drive.pm
+++ b/PVE/QemuServer/Drive.pm
@@ -848,7 +848,7 @@ sub path_is_scsi {
 return $res;
 }
 
-sub get_scsi_devicetype {
+sub get_scsi_device_type {
 my ($drive, $storecfg, $machine_version) = @_;
 
 my $devicetype = 'hd';
-- 
2.39.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH v2 proxmox-i18n] update German translation

2023-10-09 Thread Hannes Duerr
Signed-off-by: Hannes Duerr 
---
 de.po | 111 --
 1 file changed, 37 insertions(+), 74 deletions(-)

diff --git a/de.po b/de.po
index fea74f1..4f6de7d 100644
--- a/de.po
+++ b/de.po
@@ -747,9 +747,8 @@ msgid "Authentication mode"
 msgstr "Authentifikationsmodus"
 
 #: proxmox-widget-toolkit/src/panel/SendmailEditPanel.js:111
-#, fuzzy
 msgid "Author"
-msgstr "Auth-ID"
+msgstr "Autor"
 
 #: pmg-gui/js/TFAView.js:60 pve-manager/www/manager6/dc/OptionView.js:241
 #: proxmox-backup/www/config/WebauthnView.js:109
@@ -1400,9 +1399,8 @@ msgstr "Testen"
 
 #: pve-manager/www/manager6/dc/AuthEditAD.js:93
 #: pve-manager/www/manager6/dc/AuthEditLDAP.js:93
-#, fuzzy
 msgid "Check connection"
-msgstr "Schutz ändern"
+msgstr "Teste Verbindung"
 
 #: pve-manager/www/manager6/window/DownloadUrlToStorage.js:188
 #: pve-manager/www/manager6/window/UploadToStorage.js:225
@@ -2100,9 +2098,8 @@ msgid "Current Auth ID"
 msgstr "Aktuelle Auth-ID"
 
 #: pve-manager/www/manager6/grid/PoolMembers.js:73
-#, fuzzy
 msgid "Current Pool"
-msgstr "Aktuelles Layout"
+msgstr "Aktueller Bestand"
 
 #: proxmox-backup/www/tape/window/TapeRestore.js:431
 msgid "Current User"
@@ -2674,11 +2671,10 @@ msgid "Disabled"
 msgstr "Deaktiviert"
 
 #: pve-manager/www/manager6/dc/NotificationEvents.js:32
-#, fuzzy
 msgid "Disabling notifications is not recommended for production systems!"
 msgstr ""
-"Das {0}no-subscription Repository ist nicht für die Verwendung in "
-"Produktivsystemen empfohlen!"
+"Das Deaktivieren von Benachrichtigungen in Produktivsystemen ist"
+"nicht empfohlen!"
 
 #: pve-manager/www/manager6/qemu/RNGEdit.js:90
 msgid ""
@@ -2789,9 +2785,8 @@ msgid "Do not use any media"
 msgstr "Kein Medium verwenden"
 
 #: proxmox-widget-toolkit/src/panel/NotificationConfigView.js:91
-#, fuzzy
 msgid "Do you want to send a test notification to '{0}'?"
-msgstr "Möchten Sie Replikation Job {0} wirklich entfernen?"
+msgstr "Möchten Sie eine Testbenachrichtigung an '{0}' senden?"
 
 #: pmg-gui/js/MainView.js:187 pve-manager/www/manager6/Workspace.js:352
 #: proxmox-backup/www/MainView.js:226
@@ -3287,9 +3282,8 @@ msgstr "Endzeit"
 #: proxmox-widget-toolkit/src/panel/GotifyEditPanel.js:16
 #: proxmox-widget-toolkit/src/panel/NotificationGroupEditPanel.js:67
 #: proxmox-widget-toolkit/src/panel/SendmailEditPanel.js:27
-#, fuzzy
 msgid "Endpoint Name"
-msgstr "Bind-Domänenname"
+msgstr "Endpointname"
 
 #: proxmox-widget-toolkit/src/Utils.js:70
 msgid "English"
@@ -3737,9 +3731,8 @@ msgstr "Filter"
 
 #: proxmox-widget-toolkit/src/panel/NotificationConfigView.js:265
 #: proxmox-widget-toolkit/src/window/NotificationFilterEdit.js:14
-#, fuzzy
 msgid "Filter Name"
-msgstr "Cluster-Name"
+msgstr "Filtername"
 
 #: proxmox-backup/www/form/GroupFilter.js:281
 msgid "Filter Type"
@@ -3747,7 +3740,7 @@ msgstr "Filtertyp"
 
 #: pve-manager/www/manager6/grid/BackupView.js:150
 msgid "Filter VMID"
-msgstr "Filtere VMID"
+msgstr "Filter-VMID"
 
 #: proxmox-backup/www/form/GroupFilter.js:291
 msgid "Filter Value"
@@ -3966,9 +3959,8 @@ msgid "From"
 msgstr "Von"
 
 #: proxmox-widget-toolkit/src/panel/SendmailEditPanel.js:121
-#, fuzzy
 msgid "From Address"
-msgstr "Front Adresse"
+msgstr "Von Adresse"
 
 #: pve-manager/www/manager6/window/Restore.js:260
 msgid "From Backup"
@@ -4137,9 +4129,8 @@ msgid "Group Guest Types"
 msgstr "Gruppiere Gast-Typ"
 
 #: proxmox-widget-toolkit/src/panel/NotificationGroupEditPanel.js:16
-#, fuzzy
 msgid "Group Name"
-msgstr "Gruppenmitglied"
+msgstr "Gruppenname"
 
 #: pve-manager/www/manager6/dc/ACLView.js:26
 #: pve-manager/www/manager6/dc/ACLView.js:199
@@ -4481,7 +4472,6 @@ msgid "IOMMU Group"
 msgstr "IOMMU-Gruppe"
 
 #: pve-manager/www/manager6/dc/PCIMapView.js:88
-#, fuzzy
 msgid "IOMMU-Group"
 msgstr "IOMMU-Gruppe"
 
@@ -5475,9 +5465,8 @@ msgid "Manufacturer"
 msgstr "Hersteller"
 
 #: pve-manager/www/manager6/qemu/PCIEdit.js:193
-#, fuzzy
 msgid "Mapped Device"
-msgstr "Gemapptes Devices"
+msgstr "Gemapptes Device"
 
 #: pve-manager/www/manager6/form/PCIMapSelector.js:52
 #: pve-manager/www/manager6/form/USBMapSelector.js:37
@@ -5646,9 +5635,8 @@ msgid "Memory usage"
 msgstr "Speicherverbrauch"
 
 #: pve-manager/www/manager6/ceph/OSDDetails.js:151
-#, fuzzy
 msgid "Memory usage (PSS)"

[pve-devel] [PATCH v3 proxmox-i18n] update German translation

2023-10-10 Thread Hannes Duerr
update German translation

Signed-off-by: Hannes Duerr 
---

I have incorporated the changes.

 de.po | 121 +++---
 1 file changed, 40 insertions(+), 81 deletions(-)

diff --git a/de.po b/de.po
index fea74f1..e5202af 100644
--- a/de.po
+++ b/de.po
@@ -747,9 +747,8 @@ msgid "Authentication mode"
 msgstr "Authentifikationsmodus"
 
 #: proxmox-widget-toolkit/src/panel/SendmailEditPanel.js:111
-#, fuzzy
 msgid "Author"
-msgstr "Auth-ID"
+msgstr "Autor"
 
 #: pmg-gui/js/TFAView.js:60 pve-manager/www/manager6/dc/OptionView.js:241
 #: proxmox-backup/www/config/WebauthnView.js:109
@@ -1400,9 +1399,8 @@ msgstr "Testen"
 
 #: pve-manager/www/manager6/dc/AuthEditAD.js:93
 #: pve-manager/www/manager6/dc/AuthEditLDAP.js:93
-#, fuzzy
 msgid "Check connection"
-msgstr "Schutz ändern"
+msgstr "Teste Verbindung"
 
 #: pve-manager/www/manager6/window/DownloadUrlToStorage.js:188
 #: pve-manager/www/manager6/window/UploadToStorage.js:225
@@ -2100,9 +2098,8 @@ msgid "Current Auth ID"
 msgstr "Aktuelle Auth-ID"
 
 #: pve-manager/www/manager6/grid/PoolMembers.js:73
-#, fuzzy
 msgid "Current Pool"
-msgstr "Aktuelles Layout"
+msgstr "Aktueller Bestand"
 
 #: proxmox-backup/www/tape/window/TapeRestore.js:431
 msgid "Current User"
@@ -2674,11 +2671,10 @@ msgid "Disabled"
 msgstr "Deaktiviert"
 
 #: pve-manager/www/manager6/dc/NotificationEvents.js:32
-#, fuzzy
 msgid "Disabling notifications is not recommended for production systems!"
 msgstr ""
-"Das {0}no-subscription Repository ist nicht für die Verwendung in "
-"Produktivsystemen empfohlen!"
+"Das Deaktivieren von Benachrichtigungen in Produktivsystemen ist"
+"nicht empfohlen!"
 
 #: pve-manager/www/manager6/qemu/RNGEdit.js:90
 msgid ""
@@ -2789,9 +2785,8 @@ msgid "Do not use any media"
 msgstr "Kein Medium verwenden"
 
 #: proxmox-widget-toolkit/src/panel/NotificationConfigView.js:91
-#, fuzzy
 msgid "Do you want to send a test notification to '{0}'?"
-msgstr "Möchten Sie Replikation Job {0} wirklich entfernen?"
+msgstr "Möchten Sie eine Testbenachrichtigung an '{0}' senden?"
 
 #: pmg-gui/js/MainView.js:187 pve-manager/www/manager6/Workspace.js:352
 #: proxmox-backup/www/MainView.js:226
@@ -3287,9 +3282,8 @@ msgstr "Endzeit"
 #: proxmox-widget-toolkit/src/panel/GotifyEditPanel.js:16
 #: proxmox-widget-toolkit/src/panel/NotificationGroupEditPanel.js:67
 #: proxmox-widget-toolkit/src/panel/SendmailEditPanel.js:27
-#, fuzzy
 msgid "Endpoint Name"
-msgstr "Bind-Domänenname"
+msgstr "Endpointname"
 
 #: proxmox-widget-toolkit/src/Utils.js:70
 msgid "English"
@@ -3737,9 +3731,8 @@ msgstr "Filter"
 
 #: proxmox-widget-toolkit/src/panel/NotificationConfigView.js:265
 #: proxmox-widget-toolkit/src/window/NotificationFilterEdit.js:14
-#, fuzzy
 msgid "Filter Name"
-msgstr "Cluster-Name"
+msgstr "Filtername"
 
 #: proxmox-backup/www/form/GroupFilter.js:281
 msgid "Filter Type"
@@ -3966,9 +3959,8 @@ msgid "From"
 msgstr "Von"
 
 #: proxmox-widget-toolkit/src/panel/SendmailEditPanel.js:121
-#, fuzzy
 msgid "From Address"
-msgstr "Front Adresse"
+msgstr "Von Adresse"
 
 #: pve-manager/www/manager6/window/Restore.js:260
 msgid "From Backup"
@@ -4089,9 +4081,8 @@ msgid "Global flags limiting the self healing of Ceph are 
enabled."
 msgstr "Globale Flags schränken das Selbstheilen von Ceph ein."
 
 #: proxmox-widget-toolkit/src/Schema.js:47
-#, fuzzy
 msgid "Gotify"
-msgstr "Benachrichtigungen"
+msgstr "Gotify"
 
 #: proxmox-widget-toolkit/src/panel/PermissionView.js:144
 #: pve-manager/www/manager6/dc/PermissionView.js:144
@@ -4137,9 +4128,8 @@ msgid "Group Guest Types"
 msgstr "Gruppiere Gast-Typ"
 
 #: proxmox-widget-toolkit/src/panel/NotificationGroupEditPanel.js:16
-#, fuzzy
 msgid "Group Name"
-msgstr "Gruppenmitglied"
+msgstr "Gruppenname"
 
 #: pve-manager/www/manager6/dc/ACLView.js:26
 #: pve-manager/www/manager6/dc/ACLView.js:199
@@ -4481,7 +4471,6 @@ msgid "IOMMU Group"
 msgstr "IOMMU-Gruppe"
 
 #: pve-manager/www/manager6/dc/PCIMapView.js:88
-#, fuzzy
 msgid "IOMMU-Group"
 msgstr "IOMMU-Gruppe"
 
@@ -5475,9 +5464,8 @@ msgid "Manufacturer"
 msgstr "Hersteller"
 
 #: pve-manager/www/manager6/qemu/PCIEdit.js:193
-#, fuzzy
 msgid "Mapped Device"
-msgstr "Gemapptes Devices"
+msgstr "Gemapptes Device"
 
 #: pve-manager/www/manager6/form/PCIMapSelector.js:52
 #: pve-manager/www/manager6/

Re: [pve-devel] [PATCH pve-docs v3 18/18] firewall: add documentation for forward direction

2024-11-13 Thread Hannes Duerr
I am still not really convinced about the 'zone', but this does not have 
to change with this series.

I like the other changes, but I think there are some minor issues.

On 12.11.24 13:26, Stefan Hanreich wrote:

diff --git a/pve-firewall.adoc b/pve-firewall.adoc
index b428703..d5c664f 100644
--- a/pve-firewall.adoc
+++ b/pve-firewall.adoc
@@ -48,18 +48,34 @@ there is no need to maintain a different set of rules for 
IPv6.
  Zones
  -
  
-The Proxmox VE firewall groups the network into the following logical zones:

+The Proxmox VE firewall groups the network into the following logical zones.
+Depending on the zone, you can define firewall rules for incoming, outgoing or
+forwarded traffic.
  
  Host::
  
-Traffic from/to a cluster node

+Traffic going from/to a host or traffic that is forwarded by a host.
+
+You can define rules for this zone either at the datacenter level or at the node
+level. Rules at node level take precedence over rules at datacenter level.

If I am too picky please tell me:
First we talk about traffic through the 'host' and then we switch to 
talking about 'node level'.

Shouldn't we at least stick with one word? I think this can confuse users.

  
  VM::
  
-Traffic from/to a specific VM

+Traffic going from/to a VM or CT.
+
+You cannot define rules for the forward direction, only for incoming / outgoing.

Isn't the word 'traffic' missing at the end?

+
+VNet::
  
-For each zone, you can define firewall rules for incoming and/or

-outgoing traffic.
+Traffic passing through a SDN VNet, either from guest to guest or from host to
+guest and vice-versa. Since this traffic is always forwarded traffic, it is only
I think the verb is missing in this sentence; also I'd change the 
structure to:
Traffic is passing through an SDN VNet, either from guest to guest, from 
host to guest or vice-versa.

+possible to create rules with direction forward.
+
+
+IMPORTANT: Creating rules for forwarded traffic or on a VNet-level is currently
+only possible when using the new
+xref:pve_firewall_nft[nftables-based proxmox-firewall]. Any forward rules will be
+ignored by the stock `pve-firewall` and have no effect!



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH proxmx-nvidia-vgpu-helper 2/2] add script to help with the installation of the nvidia vgpu dependencies

2024-11-20 Thread Hannes Duerr
The script should help with installing the dependencies for the Nvidia
vGPU driver, including the case where the driver is already installed but
the system has been updated

Signed-off-by: Hannes Duerr 
---
 pve-install-nvidia-vgpu-deps | 66 
 1 file changed, 66 insertions(+)
 create mode 100755 pve-install-nvidia-vgpu-deps

diff --git a/pve-install-nvidia-vgpu-deps b/pve-install-nvidia-vgpu-deps
new file mode 100755
index 000..fc0856e
--- /dev/null
+++ b/pve-install-nvidia-vgpu-deps
@@ -0,0 +1,66 @@
+#!/usr/bin/perl
+
+use strict;
+use warnings;
+
+use PVE::Tools qw(run_command);
+use AptPkg::Cache;
+
+my @apt_install = qw(apt-get --no-install-recommends -o Dpkg:Options::=--force-confnew install --);
+my @dependencies = qw(dkms libc6-dev);
+my @missing_packages;
+
+die "Please execute the script with root privileges\n" if $>;
+
+my $apt_cache = AptPkg::Cache->new();
+die "unable to initialize AptPkg::Cache\n" if !$apt_cache; 
+
+sub package_is_installed {
+my ($package) = @_;
+my $p = $apt_cache->{$package};
+if (!defined($p->{CurrentState}) || $p->{CurrentState} ne "Installed") {
+   push(@missing_packages, $package);
+}
+}
+
+foreach my $dependency (@dependencies) {
+package_is_installed($dependency);
+}
+
+
+my $running_kernel;
+run_command( ['/usr/bin/uname', '-r' ],
+outfunc => sub { $running_kernel = shift } );
+
+my $default_major_minor_version;
+run_command(['/usr/bin/dpkg-query', '-f', '${Depends}', '-W', 'proxmox-default-kernel'],
+outfunc => sub { $default_major_minor_version = shift } );
+
+my $default_full_version;
+run_command(['/usr/bin/dpkg-query', '-f', '${Version}', '-W', $default_major_minor_version],
+outfunc => sub { $default_full_version = shift } );
+
+if ($running_kernel =~ /$default_full_version-pve/) {
+print "You are running the proxmox default kernel 
`proxmox-kernel-$running_kernel`\n";
+package_is_installed("proxmox-default-headers");
+} elsif ($running_kernel =~ /pve/) {
+print "You are running the non default proxmox kernel 
`proxmox-kernel-$running_kernel`\n";
+package_is_installed("proxmox-headers-$running_kernel");
+} else {
+die "You are not using a proxmox-kernel, please make sure that the 
appropriate header package is installed.\n";
+}
+
+if (!@missing_packages){
+print "All required packages are installed, you can continue with the 
Nvidia vGPU driver installation.\n";
+exit;
+} else {
+print "The following packages are missing:\n" . join("\n", 
@missing_packages) ."\n";
+print "Would you like to install them now (y/n)?\n";
+}
+
+my $answer = <STDIN>;
+if (defined($answer) && $answer =~ m/^\s*y(?:es)?\s*$/i) {
+if (system(@apt_install, @missing_packages) != 0) {
+   die "apt failed during the installation: ($?)\n";
+}
+}
-- 
2.39.5



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH proxmx-nvidia-vgpu-helper 1/2] debian/control: adjust description and pve-manager dependency

2024-11-20 Thread Hannes Duerr
Remove the dependency on proxmox-dkms, since this package does not exist,
and add a dependency on pve-manager, which should be installed with every
reasonable Proxmox VE installation, so that the package can already be
installed during the installation.

Signed-off-by: Hannes Duerr 
---
 debian/control | 11 +--
 1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/debian/control b/debian/control
index 1c19a50..f119bdc 100644
--- a/debian/control
+++ b/debian/control
@@ -8,11 +8,10 @@ Homepage: https://www.proxmox.com
 
 Package: proxmox-nvidia-vgpu-helper
 Architecture: all
-Depends: proxmox-dkms,
+Depends: libapt-pkg-perl,
+ pve-manager,
  ${misc:Depends},
-Description: Proxmox Nvidia vGPU systemd service
- This package helps with the configuration of Nvidia vGPU
- drivers by providing a systemd template service which
+Description: Proxmox Nvidia vGPU helper script and systemd service
+ This package provides a script, that helps with installing all required
+ packages for the Nvidia vGPU driver, and also a systemd template service which
  configures the option SRI-OV per pci-id
-
-
-- 
2.39.5



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH proxmx-nvidia-vgpu-helper 0/2] reduce setup steps for nvidia vgpu drivers

2024-11-20 Thread Hannes Duerr
The patches apply to the repository proxmox-nvidia-vgpu-helper which
is currently only available in my staff folder
`staff/h.duerr/proxmox-nvidia-vgpu-helper`. The aim of the repository
is to reduce the necessary installation steps for the Nvidia VGPU
drivers [0]. The repository contains an install script which can be
used to check and install necessary dependencies and a systemd
template service which can be used to configure the SR-IOV per pci-id

Part of the changes would later be the adjustment of the wiki page

[0] https://pve.proxmox.com/wiki/NVIDIA_vGPU_on_Proxmox_VE


Hannes Duerr (2):
  debian/control: adjust description and dependency to new purpose
  add script to help with the installation of the nvidia vgpu
dependencies

 debian/control   | 11 +++---
 pve-install-nvidia-vgpu-deps | 66 
 2 files changed, 71 insertions(+), 6 deletions(-)
 create mode 100755 pve-install-nvidia-vgpu-deps

-- 
2.39.5



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



Re: [pve-devel] [PATCH docs/firewall/manager/network/proxmox{-ve-rs, -firewall} v3 00/18] add forward chain firewalling for hosts and vnets

2024-11-15 Thread Hannes Duerr

Tested the series also containing some changes of V4.

Tested-by: Hannes Dürr 

On 12.11.24 13:25, Stefan Hanreich wrote:

## Introduction

This patch series introduces a new direction for firewall rules: forward.
Additionally this patch series introduces defining firewall rules on a vnet
level.

## Use Cases

For hosts:
* hosts utilizing NAT can define firewall rules for NATed traffic
* hosts utilizing EVPN zones can define rules for exit node traffic
* hosts acting as gateway can firewall the traffic that passes through them

For vnets:
* can create firewall rules globally without having to attach/update security
   groups to every newly created VM

This patch series is particularly useful when combined with my other current RFC
'autogenerate ipsets for sdn objects'. It enables users to quickly define rules
like:

on the host level:
* only SNAT HTTP traffic from hosts in this vnet to a specific host
* restricting traffic routed from hosts in one vnet to another vnet

on the vnet level:
* only allow DHCP/DNS traffic inside a bridge to the gateway

Not only does this streamline creating firewall rules, it also enables users to
create firewall rules that weren't possible before and previously required
external firewall appliances.

Since forwarded traffic goes *both* ways, you generally have to create two rules
in case of bi-directional traffic. It might make sense to simplify this in the
future by adding an additional option to the firewall config scheme that
specifies that rules in the other direction should also get automatically
generated.

## Usage

For creating forward rules on the cluster/host level, you simply create a new
rule with the new 'forward' direction. It uses the existing configuration files.

For creating them on a vnet level, there are new firewall configuration files
located under '/etc/pve/sdn/firewall/.fw'. It utilizes the same
configuration format as the existing firewall configuration files. You can only
define rules with direction 'forward' on a vnet-level.
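
To give a rough idea only (the file name, vnet name, addresses and the exact
rule syntax below are assumed for illustration, not taken verbatim from this
series; the format is meant to mirror the existing .fw files), a vnet-level
config could look something like:

    # /etc/pve/sdn/firewall/myvnet.fw   (hypothetical vnet 'myvnet')
    [OPTIONS]
    enable: 1

    [RULES]
    FORWARD ACCEPT -source 10.0.0.0/24 -dest 10.0.0.1 -p udp -dport 53
    FORWARD ACCEPT -source 10.0.0.1 -dest 10.0.0.0/24 -p udp -sport 53

The second rule is only there because, as noted above, forwarded traffic
generally needs a rule per direction.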

## Dependencies

depends on my other patch series 'autogenerate ipsets for sdn objects', further
instruction can be found there.

Furthermore:
* proxmox-firewall depends on proxmox-ve-rs
* pve-manager depends on pve-firewall
* pve-network depends on pve-firewall

Changes from v2 to v3:
* do not allow REJECT rules in forward chains in UI and backend - thanks @Hannes
* use arrow syntax for calling functions instead of &$ - thanks @Hannes
* set width of new VNet firewall panel via flex, to avoid weird looking panel -
   thanks @Hannes
* improve documentation - thanks @Hannes
* show a warning in the frontend when creating forward rules - thanks @Thomas

Changes from RFC to v2:
* Fixed several bugs
  * SDN Firewall folder was not created automatically (thanks @Gabriel)
 * Firewall flushes the bridge table if no guest firewall is active, even
   though VNet-level rules exist
* VNet-level firewall now matches on both input and output interface
* Introduced log option for VNet firewall
* Improved style of perl code (thanks @Thomas)
* promox-firewall now verifies the directions of rules
 * added some additional tests to verify this behavior
* added documentation

proxmox-ve-rs:

Stefan Hanreich (4):
   firewall: add forward direction
   firewall: add bridge firewall config parser
   config: firewall: add tests for interface and directions
   host: add struct representing bridge names

  proxmox-ve-config/src/firewall/bridge.rs | 64 +++
  proxmox-ve-config/src/firewall/cluster.rs| 11 
  proxmox-ve-config/src/firewall/common.rs | 11 
  proxmox-ve-config/src/firewall/guest.rs  | 66 
  proxmox-ve-config/src/firewall/host.rs   | 12 +++-
  proxmox-ve-config/src/firewall/mod.rs|  1 +
  proxmox-ve-config/src/firewall/types/rule.rs | 10 ++-
  proxmox-ve-config/src/host/mod.rs|  1 +
  proxmox-ve-config/src/host/types.rs  | 46 ++
  9 files changed, 219 insertions(+), 3 deletions(-)
  create mode 100644 proxmox-ve-config/src/firewall/bridge.rs
  create mode 100644 proxmox-ve-config/src/host/types.rs


proxmox-firewall:

Stefan Hanreich (4):
   nftables: derive additional traits for nftables types
   sdn: add support for loading vnet-level firewall config
   sdn: create forward firewall rules
   use std::mem::take over drain()

  .../resources/proxmox-firewall.nft|  54 
  proxmox-firewall/src/config.rs|  88 -
  proxmox-firewall/src/firewall.rs  | 122 +-
  proxmox-firewall/src/rule.rs  |   7 +-
  proxmox-firewall/tests/integration_tests.rs   |  12 ++
  .../integration_tests__firewall.snap  |  86 
  proxmox-nftables/src/expression.rs|   8 ++
  proxmox-nftables/src/types.rs |  14 +-
  8 files changed, 378 insertions(+), 13 deletions(-)


pve-firewall:

Stefan Hanreich (3):
   sdn: add vnet fir

Re: [pve-devel] [PATCH docs/firewall/manager/proxmox{-ve-rs, -firewall, -perl-rs} v3 00/24] autogenerate ipsets for sdn objects

2024-11-15 Thread Hannes Duerr
I tested this series in combination with the second patch series [0] 
implementing the forward chain, which also contains some changes made in v4.


My test setup consisted of two clustered virtual Proxmox VE nodes.
I created a simple zone with a vnet (no SNAT, not VLAN-aware), subnet and
DHCP range.

I created a VLAN zone with a VLAN-aware vnet on top of vmbr0.
In the simple zone I created a second vnet with SNAT (not VLAN-aware),
subnet and DHCP range.
Then I installed dnsmasq and enabled our PVE IPAM plugin. Afterwards I
created one CT (guest100) and one VM (guest101).
I enabled the firewall on DC and host level for both hosts and set it to
nftables; the rest were default settings.


1. I put both the CT and the VM on the same host in the simple zone, both
configured to get DHCP addresses assigned. The assignment worked, and so
did the automatic ipset generation.
2. I enabled the Vnet firewall for all 3 Vnets (SDN -> Firewall -> select
Vnet -> Options -> Firewall enable).
3. I created a forward rule on vnet level dropping all traffic between
guest100 and guest101, which worked.
4. I switched the host firewall to iptables, and the traffic flowed again
as expected.
5. I switched back to nftables and disabled the rule, then I switched the
default behavior to `drop` in Datacenter -> Firewall -> Forward, which
worked as well.
6. I switched the default behavior back to `accept` and set the default
behavior of the vnet to `drop` (SDN -> Firewall -> select Vnet ->
Options -> Forward Policy).

7. I switched the setting back to forward.
8. I put both the CT and the VM in the VLAN zone with static IP addresses
and also created ipsets for the CT and VM.
9. I created a forward rule on vnet level dropping all traffic between
guest 100 and 101, which worked (both guests still on the same host).
10. I migrated guest 101 to the second host and they were still unable to
communicate, as expected.
15. I moved guest101 into the SNAT vnet and pinged hosts on the internet
to check that SNAT was working.

16. I created a rule dropping all traffic from all hosts to the vnet.

Looks good to me, please add my tested-by to both series.

Tested-by: Hannes Dürr 

[0] 
https://lore.proxmox.com/pve-devel/20241112122615.88854-1-s.hanre...@proxmox.com/T/#m646bd4b0be7652b2cc8afc411e6c96366ddb9a14


On 12.11.24 13:25, Stefan Hanreich wrote:

This patch series adds support for autogenerating ipsets for SDN objects. It
autogenerates ipsets for every VNet as follows:

* ipset containing all IP ranges of the VNet
* ipset containing all gateways of the VNet
* ipset containing all IP ranges of the subnet - except gateways
* ipset containing all dhcp ranges of the vnet

Additionally it generates an IPSet for every guest that has one or more IPAM
entries in the pve IPAM.

Those can then be used in the cluster / host / guest firewalls. Firewall rules
automatically update on changes of the SDN / IPAM configuration. This patch
series works for the old firewall as well as the new firewall.
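
As a purely illustrative sketch (the ipset name is invented and the 'sdn'
scope prefix is assumed from the changelog entry below), such an autogenerated
set would then be referenced like any other ipset in a firewall rule, e.g.:

    IN ACCEPT -source +sdn/myvnet-dhcp -p udp -dport 67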

The ipsets in nftables currently get generated as named ipsets in every table,
which means that the `nft list ruleset` output can get quite crowded for large
SDN configurations or large IPAM databases. Another option would be to only
include them as anonymous IPsets in the rules, which would make the nft output
far less crowded but would use more memory when making extensive use of the
sdn ipsets, since every time one is used in a rule we create an entirely new
ipset.
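
For readers less familiar with the difference, a simplified nftables sketch
(invented names, not the actual generated ruleset):

    # named set: defined once per table, referenced by name in many rules
    set v_myvnet_all {
        type ipv4_addr
        flags interval
        elements = { 10.0.0.0/24 }
    }
    chain forward {
        ip saddr @v_myvnet_all accept
    }

    # anonymous set: inlined into each rule that uses it
    ip saddr { 10.0.0.0/24 } accept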

The base for proxmox-ve-rs (which is a filtered version of the proxmox-firewall
repository) can be found here:

staff/s.hanreich/proxmox-ve-rs.git master

Dependencies:
* proxmox-perl-rs and proxmox-firewall depend on proxmox-ve-rs
* pve-firewall depends on proxmox-perl-rs
* pve-manager depends on pve-firewall

Changes from v2:
* rename end in IpRange to last to avoid confusion - thanks @Wolfgang
* bump Rust to 1.82 - thanks @Wolfgang
* improvements to the code generating IPSets - thanks @Wolfgang
* implement AsRef for SDN name types - thanks @Wolfgang
* improve docstrings (proper capitalization and punctuation) - thanks @Wolfgang
* included a patch that removes proxmox-ve-config from proxmox-firewall

Changes from RFC:
* added documentation
* added separate SDN scope for IPSets
* rustfmt fixes

proxmox-ve-rs:

Stefan Hanreich (16):
   debian: add files for packaging
   firewall: add sdn scope for ipsets
   firewall: add ip range types
   firewall: address: use new iprange type for ip entries
   ipset: add range variant to addresses
   iprange: add methods for converting an ip range to cidrs
   ipset: address: add helper methods
   firewall: guest: derive traits according to rust api guidelines
   common: add allowlist
   sdn: add name types
   sdn: add ipam module
   sdn: ipam: add method for generating ipsets
   sdn: add config module
   sdn: config: add method for generating ipsets
   tests: add sdn config tests
   tests: add ipam tests

  .cargo/config.toml|5 +
  .gitignore 

[pve-devel] [PATCH pve-nvidia-vgpu-helper v3 1/4] create a debian package to make the installation of Nvidia vGPU drivers more convenient

2025-02-10 Thread Hannes Duerr
Signed-off-by: Hannes Duerr 
---
 debian/changelog |  5 +
 debian/control   | 15 +++
 debian/copyright | 14 ++
 debian/rules |  8 
 debian/source/format |  1 +
 5 files changed, 43 insertions(+)
 create mode 100644 debian/changelog
 create mode 100644 debian/control
 create mode 100644 debian/copyright
 create mode 100755 debian/rules
 create mode 100644 debian/source/format

diff --git a/debian/changelog b/debian/changelog
new file mode 100644
index 000..de5e10a
--- /dev/null
+++ b/debian/changelog
@@ -0,0 +1,5 @@
+pve-nvidia-vgpu-helper (8.3.3) UNRELEASED; urgency=medium
+
+  * Initial release.
+
+ -- Proxmox Support Team   Mon, 20 Jan 2025 17:02:52 +0100
diff --git a/debian/control b/debian/control
new file mode 100644
index 000..334bf25
--- /dev/null
+++ b/debian/control
@@ -0,0 +1,15 @@
+Source: pve-nvidia-vgpu-helper
+Section: admin
+Priority: optional
+Maintainer: Proxmox Support Team 
+Build-Depends: debhelper-compat (= 13), lintian,
+Standards-Version: 4.6.2
+Homepage: https://www.proxmox.com
+
+Package: pve-nvidia-vgpu-helper
+Architecture: all
+Depends: ${misc:Depends},
+Description: Proxmox Nvidia vGPU helper script and systemd service
+ This package provides a script, that helps with installing all required
+ packages for the Nvidia vGPU driver, and also a systemd template service which
+ configures the option SRI-OV per pci-id
diff --git a/debian/copyright b/debian/copyright
new file mode 100644
index 000..046356b
--- /dev/null
+++ b/debian/copyright
@@ -0,0 +1,14 @@
+Copyright (C) 2016 - 2024 Proxmox Server Solutions GmbH 
+
+   This program is free software: you can redistribute it and/or modify
+   it under the terms of the GNU Affero General Public License as
+   published by the Free Software Foundation, either version 3 of the
+   License, or (at your option) any later version.
+
+   This program is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+   GNU Affero General Public License for more details.
+
+   You should have received a copy of the GNU Affero General Public License
+   along with this program.  If not, see <https://www.gnu.org/licenses/>.
diff --git a/debian/rules b/debian/rules
new file mode 100755
index 000..218df65
--- /dev/null
+++ b/debian/rules
@@ -0,0 +1,8 @@
+#!/usr/bin/make -f
+# -*- makefile -*-
+
+# Uncomment this to turn on verbose mode.
+#export DH_VERBOSE=1
+
+%:
+   dh $@
diff --git a/debian/source/format b/debian/source/format
new file mode 100644
index 000..89ae9db
--- /dev/null
+++ b/debian/source/format
@@ -0,0 +1 @@
+3.0 (native)
-- 
2.39.5



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH pve-nvidia-vgpu-helper v3 2/4] debian/control: add dependency for helper script

2025-02-10 Thread Hannes Duerr
Signed-off-by: Hannes Duerr 
---
 debian/control | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/debian/control b/debian/control
index 334bf25..352e63a 100644
--- a/debian/control
+++ b/debian/control
@@ -8,7 +8,9 @@ Homepage: https://www.proxmox.com
 
 Package: pve-nvidia-vgpu-helper
 Architecture: all
-Depends: ${misc:Depends},
+Depends: libapt-pkg-perl,
+ libdpkg-perl,
+ ${misc:Depends},
 Description: Proxmox Nvidia vGPU helper script and systemd service
  This package provides a script, that helps with installing all required
  packages for the Nvidia vGPU driver, and also a systemd template service which
-- 
2.39.5



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH manager/nvidia-vgpu-helper v3 0/5] reduce setup steps for nvidia vgpu drivers

2025-02-10 Thread Hannes Duerr
Changes in v4:
in commits

Changes in v3:
* install headers for every installed kernel version by default
* additionally add patch to only install headers for running kernel
  version and newer ones, this requires the new dependency
  "libdpkg-perl"
* remove unnecessary intrusive "Dpkg:Options::=--force-confnew"
* rename systemd template unit to "pve-nvidia-sriov@.service"
* check if path "/usr/lib/nvidia/sriov-manage" exists in systemd
  template unit


Changes in v2:
* patches contain all changes to build new repository
* make pve-manager depend on this package instead of the other way around
* install the script to /usr/bin/
* rename the script to pve-nvidia-vgpu-helper because it is only
  relevant for PVE(the repository should therefore also be renamed
  when created)

The aim of the repository is to reduce the necessary installation
steps for the Nvidia VGPU drivers [0]. The package installs a script
which can be used to check and install necessary dependencies and a
systemd template service which can be used to configure the SR-IOV per
pci-id

Part of the changes would later be the adjustment of the wiki page

[0] https://pve.proxmox.com/wiki/NVIDIA_vGPU_on_Proxmox_VE

pve-nvidia-vgpu-helper:

Hannes Duerr (4):
  create a debian package to make the installation of Nvidia vGPU
drivers more convenient
  debian/control: add dependency for helper script
  add pve-nvidia-vgpu-helper and Makefile to make dependency
installation more convenient
  debian: add and install pve-nvidia-sriov systemd template unit file


pve-manager:

Hannes Duerr (1):
  debian/control: add pve-nvidia-vgpu-helper as dependency

 debian/control | 1 +
 1 file changed, 1 insertion(+)


Summary over all repositories:
  1 files changed, 1 insertions(+), 0 deletions(-)

-- 
Generated by git-murpp 0.8.0


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH pve-nvidia-vgpu-helper v3 4/4] debian: add and install pve-nvidia-sriov systemd template unit file

2025-02-10 Thread Hannes Duerr
SR-IOV must be enabled each time the system is restarted.
This systemd service should take over this task and enable SR-IOV per
pci-id/gpu after a system restart.
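
For illustration (not part of the patch itself, and the PCI ID below is made
up), an administrator would then enable one instance of the template per GPU,
e.g.:

    systemctl enable --now pve-nvidia-sriov@0000:01:00.0.service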

Signed-off-by: Hannes Duerr 
---

Notes:
Changes in v4:
* Change nvidia-vgpud.service nvidia-vgpu-mgr.service to `Before=`
  targets and remove the 5 seconds sleep in `ExecStartPre=` because it
  is not needed anymore

 debian/pve-nvidia-sriov@.service | 12 
 debian/rules |  3 +++
 2 files changed, 15 insertions(+)
 create mode 100644 debian/pve-nvidia-sriov@.service

diff --git a/debian/pve-nvidia-sriov@.service b/debian/pve-nvidia-sriov@.service
new file mode 100644
index 000..f2e4c83
--- /dev/null
+++ b/debian/pve-nvidia-sriov@.service
@@ -0,0 +1,12 @@
+[Unit]
+Description=Enable NVIDIA SR-IOV for PCI ID %i
+ConditionPathExists=/usr/lib/nvidia/sriov-manage
+After=network.target 
+Before=pve-guests.service nvidia-vgpud.service nvidia-vgpu-mgr.service
+
+[Service]
+Type=oneshot
+ExecStart=/usr/lib/nvidia/sriov-manage -e %i
+
+[Install]
+WantedBy=multi-user.target
diff --git a/debian/rules b/debian/rules
index 218df65..d5fe1f6 100755
--- a/debian/rules
+++ b/debian/rules
@@ -6,3 +6,6 @@
 
 %:
dh $@
+
+override_dh_installsystemd:
+   dh_installsystemd --no-start --no-enable --name pve-nvidia-sriov@ pve-nvidia-sriov@.service
-- 
2.39.5



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH pve-nvidia-vgpu-helper v3 3/4] add pve-nvidia-vgpu-helper and Makefile to make dependency installation more convenient

2025-02-10 Thread Hannes Duerr
We add the pve-nvidia-vgpu-helper script to simplify the installation of
the required Nvidia vGPU driver dependencies.
The script performs the following tasks (see the example invocation below):
- install the required dependencies
- check the currently running kernel and install the necessary kernel
  headers for the running kernel and any newer kernels installed
- blacklist the competing nouveau driver, with an opt-out flag
  --no-blacklist
We also add a Makefile to help build the Debian package and install the
script.
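
For illustration (not part of the patch text itself), a typical invocation
after installing the package might look like:

    # check and install missing dependencies, blacklisting nouveau (default)
    pve-nvidia-vgpu-helper
    # same, but keep the nouveau driver available
    pve-nvidia-vgpu-helper --no-blacklist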

Signed-off-by: Hannes Duerr 
---

Notes:
Changes in V4:
* add `--help` option displaying the usage
* as suggested by @Dominik we squash the patch, to only install headers
  for running kernel version and newer ones, into this one
* install `proxmox-headers-$major.$minor-pve` package so that the
  headers for future updates are also installed directly
* blacklist the nouveau driver by default and add opt-out flag
  `--no-blacklist`

 Makefile   | 54 +++
 pve-nvidia-vgpu-helper | 97 ++
 2 files changed, 151 insertions(+)
 create mode 100644 Makefile
 create mode 100755 pve-nvidia-vgpu-helper

diff --git a/Makefile b/Makefile
new file mode 100644
index 000..c6e461d
--- /dev/null
+++ b/Makefile
@@ -0,0 +1,54 @@
+include /usr/share/dpkg/default.mk
+
+PACKAGE=pve-nvidia-vgpu-helper
+
+BINDIR=/usr/bin/
+DESTDIR=
+
+GITVERSION:=$(shell git rev-parse HEAD)
+
+BUILDDIR ?= $(PACKAGE)-$(DEB_VERSION)
+DSC=$(PACKAGE)_$(DEB_VERSION).dsc
+
+DEB=$(PACKAGE)_$(DEB_VERSION_UPSTREAM_REVISION)_all.deb
+
+all:
+deb: $(DEB)
+
+$(BUILDDIR): debian
+   rm -rf $@ $@.tmp
+   rsync -a * $@.tmp/
+   echo "git clone 
git://git.proxmox.com/git/pve-nvidia-vgpu-helper.git\\ngit checkout 
$(GITVERSION)" > $@.tmp/debian/SOURCE
+   mv $@.tmp $@
+
+$(DEB): $(BUILDDIR)
+   cd $(BUILDDIR); dpkg-buildpackage -b -uc -us
+   lintian $(DEB)
+
+dsc: $(DSC)
+   $(MAKE) clean
+   $(MAKE) $(DSC)
+   lintian $(DSC)
+
+$(DSC): $(BUILDDIR)
+   cd $(BUILDDIR); dpkg-buildpackage -S -uc -us
+
+sbuild: $(DSC)
+   sbuild $(DSC)
+
+.PHONY: install
+install: pve-nvidia-vgpu-helper
+   install -d $(DESTDIR)$(BINDIR)
+   install -m 0755 pve-nvidia-vgpu-helper $(DESTDIR)$(BINDIR)
+
+.PHONY: upload
+upload: UPLOAD_DIST ?= $(DEB_DISTRIBUTION)
+upload: $(DEB)
+   tar cf - $(DEB)|ssh repo...@repo.proxmox.com -- upload --product pve --dist $(UPLOAD_DIST)
+
+.PHONY: distclean
+distclean: clean
+
+.PHONY: clean
+clean:
+   rm -rf *~ $(PACKAGE)-[0-9]*/ $(PACKAGE)*.tar.* *.deb *.dsc *.changes *.build *.buildinfo
diff --git a/pve-nvidia-vgpu-helper b/pve-nvidia-vgpu-helper
new file mode 100755
index 000..8921c6f
--- /dev/null
+++ b/pve-nvidia-vgpu-helper
@@ -0,0 +1,97 @@
+#!/usr/bin/perl
+
+use strict;
+use warnings;
+
+use PVE::Tools qw(run_command);
+use PVE::SysFSTools;
+
+use AptPkg::Cache;
+use Dpkg::Version;
+use Getopt::Long;
+
+my @apt_install = qw(apt-get --no-install-recommends install --);
+my @dependencies = qw(dkms libc6-dev proxmox-default-headers);
+my @missing_packages;
+
+die "Please execute the script with root privileges\n" if $>;
+
+my $apt_cache = AptPkg::Cache->new();
+die "unable to initialize AptPkg::Cache\n" if !$apt_cache; 
+
+GetOptions('no-blacklist' => \my $no_blacklist, 'help' => \my $help);
+
+if (defined($help)) {
+print("USAGE:\tpve-nvidia-vgpu-helper [OPTIONS]\n");
+print("\t --help\n");
+print("\t --no-blacklist\n");
+exit;
+}
+
+if (!defined($no_blacklist) && !-e "/etc/modprobe.d/block-nouveau.conf") {
+run_command(["mkdir", "-p", "/etc/modprobe.d/"]);
+PVE::SysFSTools::file_write( "/etc/modprobe.d/block-nouveau.conf",
+"blacklist nouveau" )
+  || die "Could not create block-nouveau.conf";
+
+run_command(["update-initramfs", "-u", "-k", "all"]);
+}
+
+sub package_is_installed {
+my ($package) = @_;
+my $p = $apt_cache->{$package};
+if (!defined($p->{CurrentState}) || $p->{CurrentState} ne "Installed") {
+   push(@missing_packages, $package);
+}
+}
+
+sub install_newer_headers {
+my (%installed_versions) = @_;
+for my $version (keys(%installed_versions)) {
+   # install header for the running kernel and newer kernel versions
+   package_is_installed("proxmox-headers-$version");
+}
+}
+
+foreach my $dependency (@dependencies) {
+package_is_installed($dependency);
+}
+
+
+my $running_kernel;
+run_command( ['/usr/bin/uname', '-r' ],
+outfunc => sub { $running_kernel = shift } );
+
+if ($running_kernel =~ m/^(\d+\.\d+\.\d+-\d+)-pve$/) {
+print "You are running the proxmox kernel 
`proxmox-kernel-$running_kernel`\n";
+$runn

[pve-devel] [PATCH pve-nvidia-vgpu-helper v4 2/4] debian/control: add dependency for helper script

2025-02-10 Thread Hannes Duerr
Signed-off-by: Hannes Duerr 
---
 debian/control | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/debian/control b/debian/control
index 334bf25..352e63a 100644
--- a/debian/control
+++ b/debian/control
@@ -8,7 +8,9 @@ Homepage: https://www.proxmox.com
 
 Package: pve-nvidia-vgpu-helper
 Architecture: all
-Depends: ${misc:Depends},
+Depends: libapt-pkg-perl,
+ libdpkg-perl,
+ ${misc:Depends},
 Description: Proxmox Nvidia vGPU helper script and systemd service
  This package provides a script, that helps with installing all required
  packages for the Nvidia vGPU driver, and also a systemd template service which
-- 
2.39.5



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH pve-nvidia-vgpu-helper v4 4/4] debian: add and install pve-nvidia-sriov systemd template unit file

2025-02-10 Thread Hannes Duerr
SR-IOV must be enabled each time the system is restarted.
This systemd service should take over this task and enable SR-IOV per
pci-id/gpu after a system restart.

Signed-off-by: Hannes Duerr 
---

Notes:
Changes in v4:
* Change nvidia-vgpud.service nvidia-vgpu-mgr.service to `Before=`
  targets and remove the 5 seconds sleep in `ExecStartPre=` because it
  is not needed anymore

 debian/pve-nvidia-sriov@.service | 12 
 debian/rules |  3 +++
 2 files changed, 15 insertions(+)
 create mode 100644 debian/pve-nvidia-sriov@.service

diff --git a/debian/pve-nvidia-sriov@.service b/debian/pve-nvidia-sriov@.service
new file mode 100644
index 000..f2e4c83
--- /dev/null
+++ b/debian/pve-nvidia-sriov@.service
@@ -0,0 +1,12 @@
+[Unit]
+Description=Enable NVIDIA SR-IOV for PCI ID %i
+ConditionPathExists=/usr/lib/nvidia/sriov-manage
+After=network.target 
+Before=pve-guests.service nvidia-vgpud.service nvidia-vgpu-mgr.service
+
+[Service]
+Type=oneshot
+ExecStart=/usr/lib/nvidia/sriov-manage -e %i
+
+[Install]
+WantedBy=multi-user.target
diff --git a/debian/rules b/debian/rules
index 218df65..d5fe1f6 100755
--- a/debian/rules
+++ b/debian/rules
@@ -6,3 +6,6 @@
 
 %:
dh $@
+
+override_dh_installsystemd:
+   dh_installsystemd --no-start --no-enable --name pve-nvidia-sriov@ pve-nvidia-sriov@.service
-- 
2.39.5



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH pve-manager v3 1/1] debian/control: add pve-nvidia-vgpu-helper as dependency

2025-02-10 Thread Hannes Duerr
The package ships a script that helps to set up Nvidia vGPU drivers.

Signed-off-by: Hannes Duerr 
---
 debian/control | 1 +
 1 file changed, 1 insertion(+)

diff --git a/debian/control b/debian/control
index 6c94df09..ab02fd76 100644
--- a/debian/control
+++ b/debian/control
@@ -89,6 +89,7 @@ Depends: apt (>= 1.5~),
  pve-firewall,
  pve-ha-manager,
  pve-i18n (>= 3.2.0~),
+ pve-nvidia-vgpu-helper,
  pve-xtermjs (>= 4.7.0-1),
  qemu-server (>= 8.2.7),
  rsync,
-- 
2.39.5



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH pve-nvidia-vgpu-helper v4 1/4] create a debian package to make the installation of Nvidia vGPU drivers more convenient

2025-02-10 Thread Hannes Duerr
Signed-off-by: Hannes Duerr 
---
 debian/changelog |  5 +
 debian/control   | 15 +++
 debian/copyright | 14 ++
 debian/rules |  8 
 debian/source/format |  1 +
 5 files changed, 43 insertions(+)
 create mode 100644 debian/changelog
 create mode 100644 debian/control
 create mode 100644 debian/copyright
 create mode 100755 debian/rules
 create mode 100644 debian/source/format

diff --git a/debian/changelog b/debian/changelog
new file mode 100644
index 000..de5e10a
--- /dev/null
+++ b/debian/changelog
@@ -0,0 +1,5 @@
+pve-nvidia-vgpu-helper (8.3.3) UNRELEASED; urgency=medium
+
+  * Initial release.
+
+ -- Proxmox Support Team   Mon, 20 Jan 2025 17:02:52 +0100
diff --git a/debian/control b/debian/control
new file mode 100644
index 000..334bf25
--- /dev/null
+++ b/debian/control
@@ -0,0 +1,15 @@
+Source: pve-nvidia-vgpu-helper
+Section: admin
+Priority: optional
+Maintainer: Proxmox Support Team 
+Build-Depends: debhelper-compat (= 13), lintian,
+Standards-Version: 4.6.2
+Homepage: https://www.proxmox.com
+
+Package: pve-nvidia-vgpu-helper
+Architecture: all
+Depends: ${misc:Depends},
+Description: Proxmox Nvidia vGPU helper script and systemd service
+ This package provides a script, that helps with installing all required
+ packages for the Nvidia vGPU driver, and also a systemd template service which
+ configures the option SRI-OV per pci-id
diff --git a/debian/copyright b/debian/copyright
new file mode 100644
index 000..046356b
--- /dev/null
+++ b/debian/copyright
@@ -0,0 +1,14 @@
+Copyright (C) 2016 - 2024 Proxmox Server Solutions GmbH 
+
+   This program is free software: you can redistribute it and/or modify
+   it under the terms of the GNU Affero General Public License as
+   published by the Free Software Foundation, either version 3 of the
+   License, or (at your option) any later version.
+
+   This program is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+   GNU Affero General Public License for more details.
+
+   You should have received a copy of the GNU Affero General Public License
+   along with this program.  If not, see <https://www.gnu.org/licenses/>.
diff --git a/debian/rules b/debian/rules
new file mode 100755
index 000..218df65
--- /dev/null
+++ b/debian/rules
@@ -0,0 +1,8 @@
+#!/usr/bin/make -f
+# -*- makefile -*-
+
+# Uncomment this to turn on verbose mode.
+#export DH_VERBOSE=1
+
+%:
+   dh $@
diff --git a/debian/source/format b/debian/source/format
new file mode 100644
index 000..89ae9db
--- /dev/null
+++ b/debian/source/format
@@ -0,0 +1 @@
+3.0 (native)
-- 
2.39.5



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH pve-nvidia-vgpu-helper v4 3/4] add pve-nvidia-vgpu-helper and Makefile to make dependency installation more convenient

2025-02-10 Thread Hannes Duerr
We add the pve-nvidia-vgpu-helper script to simplify the installation of
the required Nvidia vGPU driver dependencies.
The script performs the following tasks:
- install the required dependencies
- check the currently running kernel and install the necessary kernel
  headers for the running kernel and any newer kernels installed
- blacklist the competing nouveau driver, with an opt-out flag
  --no-blacklist
We also add a Makefile to help build the Debian package and install the
script.

Signed-off-by: Hannes Duerr 
---

Notes:
Changes in V4:
* add `--help` option displaying the usage
* as suggested by @Dominik we squash the patch, to only install headers
  for running kernel version and newer ones, into this one
* install `proxmox-headers-$major.$minor-pve` package so that the
  headers for future updates are also installed directly
* blacklist the nouveau driver by default and add opt-out flag
  `--no-blacklist`

 Makefile   | 54 +++
 pve-nvidia-vgpu-helper | 97 ++
 2 files changed, 151 insertions(+)
 create mode 100644 Makefile
 create mode 100755 pve-nvidia-vgpu-helper

diff --git a/Makefile b/Makefile
new file mode 100644
index 000..c6e461d
--- /dev/null
+++ b/Makefile
@@ -0,0 +1,54 @@
+include /usr/share/dpkg/default.mk
+
+PACKAGE=pve-nvidia-vgpu-helper
+
+BINDIR=/usr/bin/
+DESTDIR=
+
+GITVERSION:=$(shell git rev-parse HEAD)
+
+BUILDDIR ?= $(PACKAGE)-$(DEB_VERSION)
+DSC=$(PACKAGE)_$(DEB_VERSION).dsc
+
+DEB=$(PACKAGE)_$(DEB_VERSION_UPSTREAM_REVISION)_all.deb
+
+all:
+deb: $(DEB)
+
+$(BUILDDIR): debian
+   rm -rf $@ $@.tmp
+   rsync -a * $@.tmp/
+   echo "git clone 
git://git.proxmox.com/git/pve-nvidia-vgpu-helper.git\\ngit checkout 
$(GITVERSION)" > $@.tmp/debian/SOURCE
+   mv $@.tmp $@
+
+$(DEB): $(BUILDDIR)
+   cd $(BUILDDIR); dpkg-buildpackage -b -uc -us
+   lintian $(DEB)
+
+dsc: $(DSC)
+   $(MAKE) clean
+   $(MAKE) $(DSC)
+   lintian $(DSC)
+
+$(DSC): $(BUILDDIR)
+   cd $(BUILDDIR); dpkg-buildpackage -S -uc -us
+
+sbuild: $(DSC)
+   sbuild $(DSC)
+
+.PHONY: install
+install: pve-nvidia-vgpu-helper
+   install -d $(DESTDIR)$(BINDIR)
+   install -m 0755 pve-nvidia-vgpu-helper $(DESTDIR)$(BINDIR)
+
+.PHONY: upload
+upload: UPLOAD_DIST ?= $(DEB_DISTRIBUTION)
+upload: $(DEB)
+   tar cf - $(DEB)|ssh repo...@repo.proxmox.com -- upload --product pve --dist $(UPLOAD_DIST)
+
+.PHONY: distclean
+distclean: clean
+
+.PHONY: clean
+clean:
+   rm -rf *~ $(PACKAGE)-[0-9]*/ $(PACKAGE)*.tar.* *.deb *.dsc *.changes *.build *.buildinfo
diff --git a/pve-nvidia-vgpu-helper b/pve-nvidia-vgpu-helper
new file mode 100755
index 000..8921c6f
--- /dev/null
+++ b/pve-nvidia-vgpu-helper
@@ -0,0 +1,97 @@
+#!/usr/bin/perl
+
+use strict;
+use warnings;
+
+use PVE::Tools qw(run_command);
+use PVE::SysFSTools;
+
+use AptPkg::Cache;
+use Dpkg::Version;
+use Getopt::Long;
+
+my @apt_install = qw(apt-get --no-install-recommends install --);
+my @dependencies = qw(dkms libc6-dev proxmox-default-headers);
+my @missing_packages;
+
+die "Please execute the script with root privileges\n" if $>;
+
+my $apt_cache = AptPkg::Cache->new();
+die "unable to initialize AptPkg::Cache\n" if !$apt_cache; 
+
+GetOptions('no-blacklist' => \my $no_blacklist, 'help' => \my $help);
+
+if (defined($help)) {
+print("USAGE:\tpve-nvidia-vgpu-helper [OPTIONS]\n");
+print("\t --help\n");
+print("\t --no-blacklist\n");
+exit;
+}
+
+if (!defined($no_blacklist) && !-e "/etc/modprobe.d/block-nouveau.conf") {
+run_command(["mkdir", "-p", "/etc/modprobe.d/"]);
+PVE::SysFSTools::file_write( "/etc/modprobe.d/block-nouveau.conf",
+"blacklist nouveau" )
+  || die "Could not create block-nouveau.conf";
+
+run_command(["update-initramfs", "-u", "-k", "all"]);
+}
+
+sub package_is_installed {
+my ($package) = @_;
+my $p = $apt_cache->{$package};
+if (!defined($p->{CurrentState}) || $p->{CurrentState} ne "Installed") {
+   push(@missing_packages, $package);
+}
+}
+
+sub install_newer_headers {
+my (%installed_versions) = @_;
+for my $version (keys(%installed_versions)) {
+   # install header for the running kernel and newer kernel versions
+   package_is_installed("proxmox-headers-$version");
+}
+}
+
+foreach my $dependency (@dependencies) {
+package_is_installed($dependency);
+}
+
+
+my $running_kernel;
+run_command( ['/usr/bin/uname', '-r' ],
+outfunc => sub { $running_kernel = shift } );
+
+if ($running_kernel =~ m/^(\d+\.\d+\.\d+-\d+)-pve$/) {
+print "You are running the proxmox kernel 
`proxmox-kernel-$running_kernel`\n";
+$runn

[pve-devel] [PATCH manager/nvidia-vgpu-helper v4 0/5] reduce setup steps for nvidia vgpu drivers

2025-02-10 Thread Hannes Duerr
Changes in v4:
in commits

Changes in v3:
* install headers for every installed kernel version by default
* additionally add patch to only install headers for running kernel
  version and newer ones, this requires the new dependency
  "libdpkg-perl"
* remove unnecessary intrusive "Dpkg:Options::=--force-confnew"
* rename systemd template unit to "pve-nvidia-sriov@.service"
* check if path "/usr/lib/nvidia/sriov-manage" exists in systemd
  template unit


Changes in v2:
* patches contain all changes to build new repository
* make pve-manager depend on this package instead of the other way around
* install the script to /usr/bin/
* rename the script to pve-nvidia-vgpu-helper because it is only
  relevant for PVE(the repository should therefore also be renamed
  when created)

The aim of the repository is to reduce the necessary installation
steps for the Nvidia VGPU drivers [0]. The package installs a script
which can be used to check and install necessary dependencies and a
systemd template service which can be used to configure the SR-IOV per
pci-id

Part of the changes would later be the adjustment of the wiki page

[0] https://pve.proxmox.com/wiki/NVIDIA_vGPU_on_Proxmox_VE

pve-nvidia-vgpu-helper:

Hannes Duerr (4):
  create a debian package to make the installation of Nvidia vGPU
drivers more convenient
  debian/control: add dependency for helper script
  add pve-nvidia-vgpu-helper and Makefile to make dependency
installation more convenient
  debian: add and install pve-nvidia-sriov systemd template unit file


pve-manager:

Hannes Duerr (1):
  debian/control: add pve-nvidia-vgpu-helper as dependency

 debian/control | 1 +
 1 file changed, 1 insertion(+)


Summary over all repositories:
  1 files changed, 1 insertions(+), 0 deletions(-)

-- 
Generated by git-murpp 0.8.0


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH pve-manager v4 1/1] debian/control: add pve-nvidia-vgpu-helper as dependency

2025-02-10 Thread Hannes Duerr
The package ships a script that helps to set up Nvidia vGPU drivers.

Signed-off-by: Hannes Duerr 
---
 debian/control | 1 +
 1 file changed, 1 insertion(+)

diff --git a/debian/control b/debian/control
index 6c94df09..ab02fd76 100644
--- a/debian/control
+++ b/debian/control
@@ -89,6 +89,7 @@ Depends: apt (>= 1.5~),
  pve-firewall,
  pve-ha-manager,
  pve-i18n (>= 3.2.0~),
+ pve-nvidia-vgpu-helper,
  pve-xtermjs (>= 4.7.0-1),
  qemu-server (>= 8.2.7),
  rsync,
-- 
2.39.5



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH pve-docs] ha crs: remove technology preview note of static-load scheduler

2024-12-13 Thread Hannes Duerr
The static load scheduler feature was applied on 17/11/2022 [0] and can
be considered stable now.

[0] 
https://lore.proxmox.com/pve-devel/20221117140018.105004-1-f.eb...@proxmox.com/

Signed-off-by: Hannes Duerr 
---
 ha-manager.adoc | 2 --
 1 file changed, 2 deletions(-)

diff --git a/ha-manager.adoc b/ha-manager.adoc
index 3d6fc4a..666576d 100644
--- a/ha-manager.adoc
+++ b/ha-manager.adoc
@@ -1053,8 +1053,6 @@ Non-HA-managed services are currently not counted.
 Static-Load Scheduler
 ~
 
-IMPORTANT: The static mode is still a technology preview.
-
 Static usage information from HA services on each node is used to choose a
 recovery node. Usage of non-HA-managed services is currently not considered.
 
-- 
2.39.5



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH manager/nvidia-vgpu-helper v2 0/5] reduce setup steps for nvidia vgpu drivers

2025-01-21 Thread Hannes Duerr
Changes in v2:
* patches contain all changes to build new repository
* make pve-manager depend on this package instead of the other way around
* install the script to /usr/bin/
* rename the script to pve-nvidia-vgpu-helper because it is only
  relevant for PVE(the repository should therefore also be renamed
  when created)

The aim of the repository is to reduce the necessary installation
steps for the Nvidia VGPU drivers [0]. The package installs a script
which can be used to check and install necessary dependencies and a
systemd template service which can be used to configure the SR-IOV per
pci-id

Part of the changes would later be the adjustment of the wiki page

[0] https://pve.proxmox.com/wiki/NVIDIA_vGPU_on_Proxmox_VE

pve-nvidia-vgpu-helper:

Hannes Duerr (4):
  create a debian package to make the installation of Nvidia vGPU
drivers more convenient
  debian/control: add dependency for helper script
  add pve-nvidia-vgpu-helper and Makefile to make dependency installtion
more convenient
  debian: add and install nvidia-vgpu systemd template unit file


pve-manager:

Hannes Duerr (1):
  debian/control: add pve-nvidia-vgpu-helper as dependency

 debian/control | 1 +
 1 file changed, 1 insertion(+)


Summary over all repositories:
  1 files changed, 1 insertions(+), 0 deletions(-)

-- 
Generated by git-murpp 0.8.0


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH pve-nvidia-vgpu-helper v2 1/4] create a debian package to make the installation of Nvidia vGPU drivers more convenient

2025-01-21 Thread Hannes Duerr
Signed-off-by: Hannes Duerr 
---
 debian/changelog |  5 +
 debian/control   | 15 +++
 debian/copyright | 14 ++
 debian/rules |  8 
 debian/source/format |  1 +
 5 files changed, 43 insertions(+)
 create mode 100644 debian/changelog
 create mode 100644 debian/control
 create mode 100644 debian/copyright
 create mode 100755 debian/rules
 create mode 100644 debian/source/format

diff --git a/debian/changelog b/debian/changelog
new file mode 100644
index 000..de5e10a
--- /dev/null
+++ b/debian/changelog
@@ -0,0 +1,5 @@
+pve-nvidia-vgpu-helper (8.3.3) UNRELEASED; urgency=medium
+
+  * Initial release.
+
+ -- Proxmox Support Team   Mon, 20 Jan 2025 17:02:52 +0100
diff --git a/debian/control b/debian/control
new file mode 100644
index 000..334bf25
--- /dev/null
+++ b/debian/control
@@ -0,0 +1,15 @@
+Source: pve-nvidia-vgpu-helper
+Section: admin
+Priority: optional
+Maintainer: Proxmox Support Team 
+Build-Depends: debhelper-compat (= 13), lintian,
+Standards-Version: 4.6.2
+Homepage: https://www.proxmox.com
+
+Package: pve-nvidia-vgpu-helper
+Architecture: all
+Depends: ${misc:Depends},
+Description: Proxmox Nvidia vGPU helper script and systemd service
+ This package provides a script, that helps with installing all required
+ packages for the Nvidia vGPU driver, and also a systemd template service which
+ configures the option SRI-OV per pci-id
diff --git a/debian/copyright b/debian/copyright
new file mode 100644
index 000..046356b
--- /dev/null
+++ b/debian/copyright
@@ -0,0 +1,14 @@
+Copyright (C) 2016 - 2024 Proxmox Server Solutions GmbH 
+
+   This program is free software: you can redistribute it and/or modify
+   it under the terms of the GNU Affero General Public License as
+   published by the Free Software Foundation, either version 3 of the
+   License, or (at your option) any later version.
+
+   This program is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+   GNU Affero General Public License for more details.
+
+   You should have received a copy of the GNU Affero General Public License
+   along with this program.  If not, see <https://www.gnu.org/licenses/>.
diff --git a/debian/rules b/debian/rules
new file mode 100755
index 000..218df65
--- /dev/null
+++ b/debian/rules
@@ -0,0 +1,8 @@
+#!/usr/bin/make -f
+# -*- makefile -*-
+
+# Uncomment this to turn on verbose mode.
+#export DH_VERBOSE=1
+
+%:
+   dh $@
diff --git a/debian/source/format b/debian/source/format
new file mode 100644
index 000..89ae9db
--- /dev/null
+++ b/debian/source/format
@@ -0,0 +1 @@
+3.0 (native)
-- 
2.39.5



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH pve-nvidia-vgpu-helper v2 3/4] add pve-nvidia-vgpu-helper and Makefile to make dependency installtion more convenient

2025-01-21 Thread Hannes Duerr
We add the pve-nvidia-vgpu-helper script to make the installation of the
required Nvidia vGPU driver dependencies more convenient.
We also add a Makefile to assist in building the Debian package and
installing the script.

Signed-off-by: Hannes Duerr 
---
 Makefile   | 54 ++
 pve-nvidia-vgpu-helper | 66 ++
 2 files changed, 120 insertions(+)
 create mode 100644 Makefile
 create mode 100755 pve-nvidia-vgpu-helper

diff --git a/Makefile b/Makefile
new file mode 100644
index 000..c6e461d
--- /dev/null
+++ b/Makefile
@@ -0,0 +1,54 @@
+include /usr/share/dpkg/default.mk
+
+PACKAGE=pve-nvidia-vgpu-helper
+
+BINDIR=/usr/bin/
+DESTDIR=
+
+GITVERSION:=$(shell git rev-parse HEAD)
+
+BUILDDIR ?= $(PACKAGE)-$(DEB_VERSION)
+DSC=$(PACKAGE)_$(DEB_VERSION).dsc
+
+DEB=$(PACKAGE)_$(DEB_VERSION_UPSTREAM_REVISION)_all.deb
+
+all:
+deb: $(DEB)
+
+$(BUILDDIR): debian
+   rm -rf $@ $@.tmp
+   rsync -a * $@.tmp/
+   echo "git clone 
git://git.proxmox.com/git/pve-nvidia-vgpu-helper.git\\ngit checkout 
$(GITVERSION)" > $@.tmp/debian/SOURCE
+   mv $@.tmp $@
+
+$(DEB): $(BUILDDIR)
+   cd $(BUILDDIR); dpkg-buildpackage -b -uc -us
+   lintian $(DEB)
+
+dsc: $(DSC)
+   $(MAKE) clean
+   $(MAKE) $(DSC)
+   lintian $(DSC)
+
+$(DSC): $(BUILDDIR)
+   cd $(BUILDDIR); dpkg-buildpackage -S -uc -us
+
+sbuild: $(DSC)
+   sbuild $(DSC)
+
+.PHONY: install
+install: pve-nvidia-vgpu-helper
+   install -d $(DESTDIR)$(BINDIR)
+   install -m 0755 pve-nvidia-vgpu-helper $(DESTDIR)$(BINDIR)
+
+.PHONY: upload
+upload: UPLOAD_DIST ?= $(DEB_DISTRIBUTION)
+upload: $(DEB)
+   tar cf - $(DEB)|ssh repo...@repo.proxmox.com -- upload --product pve --dist $(UPLOAD_DIST)
+
+.PHONY: distclean
+distclean: clean
+
+.PHONY: clean
+clean:
+   rm -rf *~ $(PACKAGE)-[0-9]*/ $(PACKAGE)*.tar.* *.deb *.dsc *.changes *.build *.buildinfo
diff --git a/pve-nvidia-vgpu-helper b/pve-nvidia-vgpu-helper
new file mode 100755
index 000..fc0856e
--- /dev/null
+++ b/pve-nvidia-vgpu-helper
@@ -0,0 +1,66 @@
+#!/usr/bin/perl
+
+use strict;
+use warnings;
+
+use PVE::Tools qw(run_command);
+use AptPkg::Cache;
+
+my @apt_install = qw(apt-get --no-install-recommends -o Dpkg:Options::=--force-confnew install --);
+my @dependencies = qw(dkms libc6-dev);
+my @missing_packages;
+
+die "Please execute the script with root privileges\n" if $>;
+
+my $apt_cache = AptPkg::Cache->new();
+die "unable to initialize AptPkg::Cache\n" if !$apt_cache; 
+
+sub package_is_installed {
+my ($package) = @_;
+my $p = $apt_cache->{$package};
+if (!defined($p->{CurrentState}) || $p->{CurrentState} ne "Installed") {
+   push(@missing_packages, $package);
+}
+}
+
+foreach my $dependency (@dependencies) {
+package_is_installed($dependency);
+}
+
+
+my $running_kernel;
+run_command( ['/usr/bin/uname', '-r' ],
+outfunc => sub { $running_kernel = shift } );
+
+my $default_major_minor_version;
+run_command(['/usr/bin/dpkg-query', '-f', '${Depends}', '-W', 'proxmox-default-kernel'],
+outfunc => sub { $default_major_minor_version = shift } );
+
+my $default_full_version;
+run_command(['/usr/bin/dpkg-query', '-f', '${Version}', '-W', $default_major_minor_version],
+outfunc => sub { $default_full_version = shift } );
+
+if ($running_kernel =~ /$default_full_version-pve/) {
+print "You are running the proxmox default kernel 
`proxmox-kernel-$running_kernel`\n";
+package_is_installed("proxmox-default-headers");
+} elsif ($running_kernel =~ /pve/) {
+print "You are running the non default proxmox kernel 
`proxmox-kernel-$running_kernel`\n";
+package_is_installed("proxmox-headers-$running_kernel");
+} else {
+die "You are not using a proxmox-kernel, please make sure that the 
appropriate header package is installed.\n";
+}
+
+if (!@missing_packages){
+print "All required packages are installed, you can continue with the 
Nvidia vGPU driver installation.\n";
+exit;
+} else {
+print "The following packages are missing:\n" . join("\n", 
@missing_packages) ."\n";
+print "Would you like to install them now (y/n)?\n";
+}
+
+my $answer = <STDIN>;
+if (defined($answer) && $answer =~ m/^\s*y(?:es)?\s*$/i) {
+if (system(@apt_install, @missing_packages) != 0) {
+   die "apt failed during the installation: ($?)\n";
+}
+}
-- 
2.39.5



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH pve-nvidia-vgpu-helper v2 2/4] debian/control: add dependency for helper script

2025-01-21 Thread Hannes Duerr
Signed-off-by: Hannes Duerr 
---
 debian/control | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/debian/control b/debian/control
index 334bf25..4492b60 100644
--- a/debian/control
+++ b/debian/control
@@ -8,7 +8,8 @@ Homepage: https://www.proxmox.com
 
 Package: pve-nvidia-vgpu-helper
 Architecture: all
-Depends: ${misc:Depends},
+Depends: libapt-pkg-perl,
+ ${misc:Depends},
 Description: Proxmox Nvidia vGPU helper script and systemd service
  This package provides a script, that helps with installing all required
  packages for the Nvidia vGPU driver, and also a systemd template service which
-- 
2.39.5



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH pve-nvidia-vgpu-helper v2 4/4] debian: add and install nvidia-vgpu systemd template unit file

2025-01-21 Thread Hannes Duerr
Signed-off-by: Hannes Duerr 
---
 debian/nvidia-vgpud@.service | 12 
 debian/rules |  3 +++
 2 files changed, 15 insertions(+)
 create mode 100644 debian/nvidia-vgpud@.service

diff --git a/debian/nvidia-vgpud@.service b/debian/nvidia-vgpud@.service
new file mode 100644
index 000..b3c1220
--- /dev/null
+++ b/debian/nvidia-vgpud@.service
@@ -0,0 +1,12 @@
+[Unit]
+Description=Enable NVIDIA SR-IOV for PCI ID %i
+After=network.target nvidia-vgpud.service nvidia-vgpu-mgr.service
+Before=pve-guests.service
+
+[Service]
+Type=oneshot
+ExecStartPre=/bin/sleep 5
+ExecStart=/usr/lib/nvidia/sriov-manage -e %i
+
+[Install]
+WantedBy=multi-user.target
diff --git a/debian/rules b/debian/rules
index 218df65..fe9a05d 100755
--- a/debian/rules
+++ b/debian/rules
@@ -6,3 +6,6 @@
 
 %:
dh $@
+
+override_dh_installsystemd:
+   dh_installsystemd --no-start --no-enable --name nvidia-vgpud@ nvidia-vgpud@.service
-- 
2.39.5



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH pve-manager v2 1/1] debian/control: add pve-nvidia-vgpu-helper as dependency

2025-01-21 Thread Hannes Duerr
The package ships a script that helps to set up Nvidia vGPU drivers.

Signed-off-by: Hannes Duerr 
---
 debian/control | 1 +
 1 file changed, 1 insertion(+)

diff --git a/debian/control b/debian/control
index 6c94df09..ab02fd76 100644
--- a/debian/control
+++ b/debian/control
@@ -89,6 +89,7 @@ Depends: apt (>= 1.5~),
  pve-firewall,
  pve-ha-manager,
  pve-i18n (>= 3.2.0~),
+ pve-nvidia-vgpu-helper,
  pve-xtermjs (>= 4.7.0-1),
  qemu-server (>= 8.2.7),
  rsync,
-- 
2.39.5



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH pve-nvidia-vgpu-helper v2 5/5] script: install headers for running and newer kernel version only

2025-01-24 Thread Hannes Duerr
also add the new dependency for `libdpkg-perl`
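
For context, Dpkg::Version::version_compare($a, $b) returns -1, 0 or 1
depending on whether $a is older than, equal to or newer than $b, so the
`!= 1` check below keeps the headers for the running kernel itself plus any
newer installed kernel. A minimal standalone sketch of that selection (the
kernel versions are made up for illustration):

    use Dpkg::Version;

    my $running = '6.8.12-5';    # hypothetical running kernel ABI
    for my $installed ('6.5.13-1', '6.8.12-5', '6.11.0-2') {
        # != 1 means the running kernel is not newer than the installed one,
        # i.e. the installed kernel is the running one or a newer one
        if (Dpkg::Version::version_compare($running, $installed) != 1) {
            print "would install proxmox-headers-$installed-pve\n";
        }
    }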

Signed-off-by: Hannes Duerr 
---
 debian/control |  1 +
 pve-nvidia-vgpu-helper | 20 
 2 files changed, 17 insertions(+), 4 deletions(-)

diff --git a/debian/control b/debian/control
index 4492b60..352e63a 100644
--- a/debian/control
+++ b/debian/control
@@ -9,6 +9,7 @@ Homepage: https://www.proxmox.com
 Package: pve-nvidia-vgpu-helper
 Architecture: all
 Depends: libapt-pkg-perl,
+ libdpkg-perl,
  ${misc:Depends},
 Description: Proxmox Nvidia vGPU helper script and systemd service
  This package provides a script, that helps with installing all required
diff --git a/pve-nvidia-vgpu-helper b/pve-nvidia-vgpu-helper
index 885b879..4c57578 100755
--- a/pve-nvidia-vgpu-helper
+++ b/pve-nvidia-vgpu-helper
@@ -5,6 +5,7 @@ use warnings;
 
 use PVE::Tools qw(run_command);
 use AptPkg::Cache;
+use Dpkg::Version;
 
 my @apt_install = qw(apt-get --no-install-recommends install --);
 my @dependencies = qw(dkms libc6-dev proxmox-default-headers);
@@ -23,6 +24,16 @@ sub package_is_installed {
 }
 }
 
+sub install_newer_headers {
+my ($running_version, @installed_versions) = @_;
+for my $version (@installed_versions) {
+   # install header for the running kernel and newer kernel versions
+   if (Dpkg::Version::version_compare($running_version, $version) != 1){
+   package_is_installed("proxmox-headers-$version-pve");
+   }
+}
+}
+
 foreach my $dependency (@dependencies) {
 package_is_installed($dependency);
 }
@@ -32,17 +43,18 @@ my $running_kernel;
 run_command( ['/usr/bin/uname', '-r' ],
 outfunc => sub { $running_kernel = shift } );
 
+my @installed_versions;
 run_command(['/usr/bin/dpkg-query', '-W', 'proxmox-kernel-*-pve'],
 outfunc => sub {
my $installed_kernel = shift;
-   $installed_kernel =~ 
s/^\s*proxmox-kernel(-\d+.\d+.\d+-\d+-pve)\s*$/proxmox-headers$1/;
-   package_is_installed($installed_kernel);
+   $installed_kernel =~ m/^\s*proxmox-kernel-(\d+.\d+.\d+-\d+)-pve\s*$/;
+   push(@installed_versions, $1);
 });
 
 
-
-if ($running_kernel =~ m/^\d+\.\d+\.\d+-\d+-pve$/) {
+if ($running_kernel =~ m/^(\d+\.\d+\.\d+-\d+)-pve$/) {
 print "You are running the proxmox kernel 
`proxmox-kernel-$running_kernel`\n";
+install_newer_headers($1, @installed_versions);
 } else {
 die "You are not using a proxmox-kernel, please make sure that the 
appropriate header package is installed.\n";
 }
-- 
2.39.5



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH manager/nvidia-vgpu-helper v2 0/6] reduce setup steps for nvidia vgpu drivers

2025-01-24 Thread Hannes Duerr
Changes in v3:
* install headers for every installed kernel version by default
* additionally add patch to only install headers for running kernel
  version and newer ones, this requires the new dependency
  "libdpkg-perl"
* remove unnecessary intrusive "Dpkg:Options::=--force-confnew"
* rename systemd template unit to "pve-nvidia-sriov@.service"
* check if path "/usr/lib/nvidia/sriov-manage" exists in systemd
  template unit


Changes in v2:
* patches contain all changes to build new repository
* make pve-manager depend on this package instead of the other way around
* install the script to /usr/bin/
* rename the script to pve-nvidia-vgpu-helper because it is only
  relevant for PVE (the repository should therefore also be renamed
  when created)

The aim of the repository is to reduce the necessary installation
steps for the Nvidia VGPU drivers [0]. The package installs a script
which can be used to check and install necessary dependencies and a
systemd template service which can be used to configure the SR-IOV per
pci-id

Part of the changes would later be the adjustment of the wiki page

[0] https://pve.proxmox.com/wiki/NVIDIA_vGPU_on_Proxmox_VE

pve-nvidia-vgpu-helper:

Hannes Duerr (5):
  create a debian package to make the installation of Nvidia vGPU
drivers more convenient
  debian/control: add dependency for helper script
  add pve-nvidia-vgpu-helper and Makefile to make dependency installation
more convenient
  debian: add and install pve-nvidia-sriov systemd template unit file
  script: install headers for running and newer kernel version only


pve-manager:

Hannes Duerr (1):
  debian/control: add pve-nvidia-vgpu-helper as dependency

 debian/control | 1 +
 1 file changed, 1 insertion(+)


Summary over all repositories:
  1 files changed, 1 insertions(+), 0 deletions(-)

-- 
Generated by git-murpp 0.8.0


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH pve-nvidia-vgpu-helper v2 3/5] add pve-nvidia-vgpu-helper and Makefile to make dependency installation more convenient

2025-01-24 Thread Hannes Duerr
We add the pve-nvidia-vgpu-helper script to make the installation of the
required Nvidia vGPU driver dependencies more convenient.
We also add a Makefile to assist in building the Debian package and
installing the script.

Signed-off-by: Hannes Duerr 
---
 Makefile   | 54 
 pve-nvidia-vgpu-helper | 63 ++
 2 files changed, 117 insertions(+)
 create mode 100644 Makefile
 create mode 100755 pve-nvidia-vgpu-helper

diff --git a/Makefile b/Makefile
new file mode 100644
index 000..c6e461d
--- /dev/null
+++ b/Makefile
@@ -0,0 +1,54 @@
+include /usr/share/dpkg/default.mk
+
+PACKAGE=pve-nvidia-vgpu-helper
+
+BINDIR=/usr/bin/
+DESTDIR=
+
+GITVERSION:=$(shell git rev-parse HEAD)
+
+BUILDDIR ?= $(PACKAGE)-$(DEB_VERSION)
+DSC=$(PACKAGE)_$(DEB_VERSION).dsc
+
+DEB=$(PACKAGE)_$(DEB_VERSION_UPSTREAM_REVISION)_all.deb
+
+all:
+deb: $(DEB)
+
+$(BUILDDIR): debian
+   rm -rf $@ $@.tmp
+   rsync -a * $@.tmp/
+   echo "git clone 
git://git.proxmox.com/git/pve-nvidia-vgpu-helper.git\\ngit checkout 
$(GITVERSION)" > $@.tmp/debian/SOURCE
+   mv $@.tmp $@
+
+$(DEB): $(BUILDDIR)
+   cd $(BUILDDIR); dpkg-buildpackage -b -uc -us
+   lintian $(DEB)
+
+dsc: $(DSC)
+   $(MAKE) clean
+   $(MAKE) $(DSC)
+   lintian $(DSC)
+
+$(DSC): $(BUILDDIR)
+   cd $(BUILDDIR); dpkg-buildpackage -S -uc -us
+
+sbuild: $(DSC)
+   sbuild $(DSC)
+
+.PHONY: install
+install: pve-nvidia-vgpu-helper
+   install -d $(DESTDIR)$(BINDIR)
+   install -m 0755 pve-nvidia-vgpu-helper $(DESTDIR)$(BINDIR)
+
+.PHONY: upload
+upload: UPLOAD_DIST ?= $(DEB_DISTRIBUTION)
+upload: $(DEB)
+   tar cf - $(DEB)|ssh repo...@repo.proxmox.com -- upload --product pve 
--dist $(UPLOAD_DIST)
+
+.PHONY: distclean
+distclean: clean
+
+.PHONY: clean
+clean:
+   rm -rf *~ $(PACKAGE)-[0-9]*/ $(PACKAGE)*.tar.* *.deb *.dsc *.changes 
*.build *.buildinfo
diff --git a/pve-nvidia-vgpu-helper b/pve-nvidia-vgpu-helper
new file mode 100755
index 000..885b879
--- /dev/null
+++ b/pve-nvidia-vgpu-helper
@@ -0,0 +1,63 @@
+#!/usr/bin/perl
+
+use strict;
+use warnings;
+
+use PVE::Tools qw(run_command);
+use AptPkg::Cache;
+
+my @apt_install = qw(apt-get --no-install-recommends install --);
+my @dependencies = qw(dkms libc6-dev proxmox-default-headers);
+my @missing_packages;
+
+die "Please execute the script with root privileges\n" if $>;
+
+my $apt_cache = AptPkg::Cache->new();
+die "unable to initialize AptPkg::Cache\n" if !$apt_cache; 
+
+sub package_is_installed {
+my ($package) = @_;
+my $p = $apt_cache->{$package};
+if (!defined($p->{CurrentState}) || $p->{CurrentState} ne "Installed") {
+   push(@missing_packages, $package);
+}
+}
+
+foreach my $dependency (@dependencies) {
+package_is_installed($dependency);
+}
+
+
+my $running_kernel;
+run_command( ['/usr/bin/uname', '-r' ],
+outfunc => sub { $running_kernel = shift } );
+
+run_command(['/usr/bin/dpkg-query', '-W', 'proxmox-kernel-*-pve'],
+outfunc => sub {
+   my $installed_kernel = shift;
+   $installed_kernel =~ 
s/^\s*proxmox-kernel(-\d+.\d+.\d+-\d+-pve)\s*$/proxmox-headers$1/;
+   package_is_installed($installed_kernel);
+});
+
+
+
+if ($running_kernel =~ m/^\d+\.\d+\.\d+-\d+-pve$/) {
+print "You are running the proxmox kernel 
`proxmox-kernel-$running_kernel`\n";
+} else {
+die "You are not using a proxmox-kernel, please make sure that the 
appropriate header package is installed.\n";
+}
+
+if (!@missing_packages){
+print "All required packages are installed, you can continue with the 
Nvidia vGPU driver installation.\n";
+exit;
+} else {
+print "The following packages are missing:\n" . join("\n", 
@missing_packages) ."\n";
+print "Would you like to install them now (y/n)?\n";
+}
+
+my $answer = <STDIN>;
+if (defined($answer) && $answer =~ m/^\s*y(?:es)?\s*$/i) {
+if (system(@apt_install, @missing_packages) != 0) {
+   die "apt failed during the installation: ($?)\n";
+}
+}
-- 
2.39.5



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH pve-nvidia-vgpu-helper v2 2/5] debian/control: add dependency for helper script

2025-01-24 Thread Hannes Duerr
Signed-off-by: Hannes Duerr 
---
 debian/control | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/debian/control b/debian/control
index 334bf25..4492b60 100644
--- a/debian/control
+++ b/debian/control
@@ -8,7 +8,8 @@ Homepage: https://www.proxmox.com
 
 Package: pve-nvidia-vgpu-helper
 Architecture: all
-Depends: ${misc:Depends},
+Depends: libapt-pkg-perl,
+ ${misc:Depends},
 Description: Proxmox Nvidia vGPU helper script and systemd service
  This package provides a script, that helps with installing all required
  packages for the Nvidia vGPU driver, and also a systemd template service which
-- 
2.39.5



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH pve-nvidia-vgpu-helper v2 4/5] debian: add and install pve-nvidia-sriov systemd template unit file

2025-01-24 Thread Hannes Duerr
Signed-off-by: Hannes Duerr 
---
 debian/pve-nvidia-sriov@.service | 13 +
 debian/rules |  3 +++
 2 files changed, 16 insertions(+)
 create mode 100644 debian/pve-nvidia-sriov@.service

diff --git a/debian/pve-nvidia-sriov@.service b/debian/pve-nvidia-sriov@.service
new file mode 100644
index 000..3706d04
--- /dev/null
+++ b/debian/pve-nvidia-sriov@.service
@@ -0,0 +1,13 @@
+[Unit]
+Description=Enable NVIDIA SR-IOV for PCI ID %i
+ConditionPathExists=/usr/lib/nvidia/sriov-manage
+After=network.target nvidia-vgpud.service nvidia-vgpu-mgr.service
+Before=pve-guests.service
+
+[Service]
+Type=oneshot
+ExecStartPre=/bin/sleep 5
+ExecStart=/usr/lib/nvidia/sriov-manage -e %i
+
+[Install]
+WantedBy=multi-user.target
diff --git a/debian/rules b/debian/rules
index 218df65..d5fe1f6 100755
--- a/debian/rules
+++ b/debian/rules
@@ -6,3 +6,6 @@
 
 %:
dh $@
+
+override_dh_installsystemd:
+   dh_installsystemd --no-start --no-enable --name pve-nvidia-sriov@ 
pve-nvidia-sriov@.service
-- 
2.39.5



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH pve-manager v2 1/1] debian/control: add pve-nvidia-vgpu-helper as dependency

2025-01-24 Thread Hannes Duerr
The package ships a script that helps to set up Nvidia vGPU drivers.

Signed-off-by: Hannes Duerr 
---
 debian/control | 1 +
 1 file changed, 1 insertion(+)

diff --git a/debian/control b/debian/control
index 6c94df09..ab02fd76 100644
--- a/debian/control
+++ b/debian/control
@@ -89,6 +89,7 @@ Depends: apt (>= 1.5~),
  pve-firewall,
  pve-ha-manager,
  pve-i18n (>= 3.2.0~),
+ pve-nvidia-vgpu-helper,
  pve-xtermjs (>= 4.7.0-1),
  qemu-server (>= 8.2.7),
  rsync,
-- 
2.39.5



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH pve-nvidia-vgpu-helper v3 5/5] script: install headers for running and newer kernel version only

2025-01-24 Thread Hannes Duerr
also add the new dependency for `libdpkg-perl`

Signed-off-by: Hannes Duerr 
---
 debian/control |  1 +
 pve-nvidia-vgpu-helper | 20 
 2 files changed, 17 insertions(+), 4 deletions(-)

diff --git a/debian/control b/debian/control
index 4492b60..352e63a 100644
--- a/debian/control
+++ b/debian/control
@@ -9,6 +9,7 @@ Homepage: https://www.proxmox.com
 Package: pve-nvidia-vgpu-helper
 Architecture: all
 Depends: libapt-pkg-perl,
+ libdpkg-perl,
  ${misc:Depends},
 Description: Proxmox Nvidia vGPU helper script and systemd service
  This package provides a script, that helps with installing all required
diff --git a/pve-nvidia-vgpu-helper b/pve-nvidia-vgpu-helper
index 885b879..4c57578 100755
--- a/pve-nvidia-vgpu-helper
+++ b/pve-nvidia-vgpu-helper
@@ -5,6 +5,7 @@ use warnings;
 
 use PVE::Tools qw(run_command);
 use AptPkg::Cache;
+use Dpkg::Version;
 
 my @apt_install = qw(apt-get --no-install-recommends install --);
 my @dependencies = qw(dkms libc6-dev proxmox-default-headers);
@@ -23,6 +24,16 @@ sub package_is_installed {
 }
 }
 
+sub install_newer_headers {
+my ($running_version, @installed_versions) = @_;
+for my $version (@installed_versions) {
+   # install header for the running kernel and newer kernel versions
+   if (Dpkg::Version::version_compare($running_version, $version) != 1){
+   package_is_installed("proxmox-headers-$version-pve");
+   }
+}
+}
+
 foreach my $dependency (@dependencies) {
 package_is_installed($dependency);
 }
@@ -32,17 +43,18 @@ my $running_kernel;
 run_command( ['/usr/bin/uname', '-r' ],
 outfunc => sub { $running_kernel = shift } );
 
+my @installed_versions;
 run_command(['/usr/bin/dpkg-query', '-W', 'proxmox-kernel-*-pve'],
 outfunc => sub {
my $installed_kernel = shift;
-   $installed_kernel =~ 
s/^\s*proxmox-kernel(-\d+.\d+.\d+-\d+-pve)\s*$/proxmox-headers$1/;
-   package_is_installed($installed_kernel);
+   $installed_kernel =~ m/^\s*proxmox-kernel-(\d+.\d+.\d+-\d+)-pve\s*$/;
+   push(@installed_versions, $1);
 });
 
 
-
-if ($running_kernel =~ m/^\d+\.\d+\.\d+-\d+-pve$/) {
+if ($running_kernel =~ m/^(\d+\.\d+\.\d+-\d+)-pve$/) {
 print "You are running the proxmox kernel 
`proxmox-kernel-$running_kernel`\n";
+install_newer_headers($1, @installed_versions);
 } else {
 die "You are not using a proxmox-kernel, please make sure that the 
appropriate header package is installed.\n";
 }
-- 
2.39.5



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH manager/nvidia-vgpu-helper v3 0/6] reduce setup steps for nvidia vgpu drivers

2025-01-24 Thread Hannes Duerr
Changes in v3:
* install headers for every installed kernel version by default
* additionally add patch to only install headers for running kernel
  version and newer ones, this requires the new dependency
  "libdpkg-perl"
* remove unnecessary intrusive "Dpkg:Options::=--force-confnew"
* rename systemd template unit to "pve-nvidia-sriov@.service"
* check if path "/usr/lib/nvidia/sriov-manage" exists in systemd
  template unit


Changes in v2:
* patches contain all changes to build new repository
* make pve-manager depend on this package instead of the other way around
* install the script to /usr/bin/
* rename the script to pve-nvidia-vgpu-helper because it is only
  relevant for PVE (the repository should therefore also be renamed
  when created)

The aim of the repository is to reduce the necessary installation
steps for the Nvidia VGPU drivers [0]. The package installs a script
which can be used to check and install necessary dependencies and a
systemd template service which can be used to configure the SR-IOV per
pci-id

Part of the changes would later be the adjustment of the wiki page

[0] https://pve.proxmox.com/wiki/NVIDIA_vGPU_on_Proxmox_VE

pve-nvidia-vgpu-helper:

Hannes Duerr (5):
  create a debian package to make the installation of Nvidia vGPU
drivers more convenient
  debian/control: add dependency for helper script
  add pve-nvidia-vgpu-helper and Makefile to make dependency installation
more convenient
  debian: add and install pve-nvidia-sriov systemd template unit file
  script: install headers for running and newer kernel version only


pve-manager:

Hannes Duerr (1):
  debian/control: add pve-nvidia-vgpu-helper as dependency

 debian/control | 1 +
 1 file changed, 1 insertion(+)


Summary over all repositories:
  1 files changed, 1 insertions(+), 0 deletions(-)

-- 
Generated by git-murpp 0.8.0


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH pve-nvidia-vgpu-helper v3 1/5] create a debian package to make the installation of Nvidia vGPU drivers more convenient

2025-01-24 Thread Hannes Duerr
Signed-off-by: Hannes Duerr 
---
 debian/changelog |  5 +
 debian/control   | 15 +++
 debian/copyright | 14 ++
 debian/rules |  8 
 debian/source/format |  1 +
 5 files changed, 43 insertions(+)
 create mode 100644 debian/changelog
 create mode 100644 debian/control
 create mode 100644 debian/copyright
 create mode 100755 debian/rules
 create mode 100644 debian/source/format

diff --git a/debian/changelog b/debian/changelog
new file mode 100644
index 000..de5e10a
--- /dev/null
+++ b/debian/changelog
@@ -0,0 +1,5 @@
+pve-nvidia-vgpu-helper (8.3.3) UNRELEASED; urgency=medium
+
+  * Initial release.
+
+ -- Proxmox Support Team   Mon, 20 Jan 2025 17:02:52 +0100
diff --git a/debian/control b/debian/control
new file mode 100644
index 000..334bf25
--- /dev/null
+++ b/debian/control
@@ -0,0 +1,15 @@
+Source: pve-nvidia-vgpu-helper
+Section: admin
+Priority: optional
+Maintainer: Proxmox Support Team 
+Build-Depends: debhelper-compat (= 13), lintian,
+Standards-Version: 4.6.2
+Homepage: https://www.proxmox.com
+
+Package: pve-nvidia-vgpu-helper
+Architecture: all
+Depends: ${misc:Depends},
+Description: Proxmox Nvidia vGPU helper script and systemd service
+ This package provides a script, that helps with installing all required
+ packages for the Nvidia vGPU driver, and also a systemd template service which
+ configures SR-IOV per PCI ID
diff --git a/debian/copyright b/debian/copyright
new file mode 100644
index 000..046356b
--- /dev/null
+++ b/debian/copyright
@@ -0,0 +1,14 @@
+Copyright (C) 2016 - 2024 Proxmox Server Solutions GmbH 
+
+   This program is free software: you can redistribute it and/or modify
+   it under the terms of the GNU Affero General Public License as
+   published by the Free Software Foundation, either version 3 of the
+   License, or (at your option) any later version.
+
+   This program is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+   GNU Affero General Public License for more details.
+
+   You should have received a copy of the GNU Affero General Public License
+   along with this program.  If not, see <https://www.gnu.org/licenses/>.
diff --git a/debian/rules b/debian/rules
new file mode 100755
index 000..218df65
--- /dev/null
+++ b/debian/rules
@@ -0,0 +1,8 @@
+#!/usr/bin/make -f
+# -*- makefile -*-
+
+# Uncomment this to turn on verbose mode.
+#export DH_VERBOSE=1
+
+%:
+   dh $@
diff --git a/debian/source/format b/debian/source/format
new file mode 100644
index 000..89ae9db
--- /dev/null
+++ b/debian/source/format
@@ -0,0 +1 @@
+3.0 (native)
-- 
2.39.5



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH pve-nvidia-vgpu-helper v3 3/5] add pve-nvidia-vgpu-helper and Makefile to make dependency installation more convenient

2025-01-24 Thread Hannes Duerr
We add the pve-nvidia-vgpu-helper script to make the installation of the
required Nvidia vGPU driver dependencies more convenient.
We also add a Makefile to assist in building the Debian package and
installing the script.

Signed-off-by: Hannes Duerr 
---
 Makefile   | 54 
 pve-nvidia-vgpu-helper | 63 ++
 2 files changed, 117 insertions(+)
 create mode 100644 Makefile
 create mode 100755 pve-nvidia-vgpu-helper

diff --git a/Makefile b/Makefile
new file mode 100644
index 000..c6e461d
--- /dev/null
+++ b/Makefile
@@ -0,0 +1,54 @@
+include /usr/share/dpkg/default.mk
+
+PACKAGE=pve-nvidia-vgpu-helper
+
+BINDIR=/usr/bin/
+DESTDIR=
+
+GITVERSION:=$(shell git rev-parse HEAD)
+
+BUILDDIR ?= $(PACKAGE)-$(DEB_VERSION)
+DSC=$(PACKAGE)_$(DEB_VERSION).dsc
+
+DEB=$(PACKAGE)_$(DEB_VERSION_UPSTREAM_REVISION)_all.deb
+
+all:
+deb: $(DEB)
+
+$(BUILDDIR): debian
+   rm -rf $@ $@.tmp
+   rsync -a * $@.tmp/
+   echo "git clone 
git://git.proxmox.com/git/pve-nvidia-vgpu-helper.git\\ngit checkout 
$(GITVERSION)" > $@.tmp/debian/SOURCE
+   mv $@.tmp $@
+
+$(DEB): $(BUILDDIR)
+   cd $(BUILDDIR); dpkg-buildpackage -b -uc -us
+   lintian $(DEB)
+
+dsc: $(DSC)
+   $(MAKE) clean
+   $(MAKE) $(DSC)
+   lintian $(DSC)
+
+$(DSC): $(BUILDDIR)
+   cd $(BUILDDIR); dpkg-buildpackage -S -uc -us
+
+sbuild: $(DSC)
+   sbuild $(DSC)
+
+.PHONY: install
+install: pve-nvidia-vgpu-helper
+   install -d $(DESTDIR)$(BINDIR)
+   install -m 0755 pve-nvidia-vgpu-helper $(DESTDIR)$(BINDIR)
+
+.PHONY: upload
+upload: UPLOAD_DIST ?= $(DEB_DISTRIBUTION)
+upload: $(DEB)
+   tar cf - $(DEB)|ssh repo...@repo.proxmox.com -- upload --product pve 
--dist $(UPLOAD_DIST)
+
+.PHONY: distclean
+distclean: clean
+
+.PHONY: clean
+clean:
+   rm -rf *~ $(PACKAGE)-[0-9]*/ $(PACKAGE)*.tar.* *.deb *.dsc *.changes 
*.build *.buildinfo
diff --git a/pve-nvidia-vgpu-helper b/pve-nvidia-vgpu-helper
new file mode 100755
index 000..885b879
--- /dev/null
+++ b/pve-nvidia-vgpu-helper
@@ -0,0 +1,63 @@
+#!/usr/bin/perl
+
+use strict;
+use warnings;
+
+use PVE::Tools qw(run_command);
+use AptPkg::Cache;
+
+my @apt_install = qw(apt-get --no-install-recommends install --);
+my @dependencies = qw(dkms libc6-dev proxmox-default-headers);
+my @missing_packages;
+
+die "Please execute the script with root privileges\n" if $>;
+
+my $apt_cache = AptPkg::Cache->new();
+die "unable to initialize AptPkg::Cache\n" if !$apt_cache; 
+
+sub package_is_installed {
+my ($package) = @_;
+my $p = $apt_cache->{$package};
+if (!defined($p->{CurrentState}) || $p->{CurrentState} ne "Installed") {
+   push(@missing_packages, $package);
+}
+}
+
+foreach my $dependency (@dependencies) {
+package_is_installed($dependency);
+}
+
+
+my $running_kernel;
+run_command( ['/usr/bin/uname', '-r' ],
+outfunc => sub { $running_kernel = shift } );
+
+run_command(['/usr/bin/dpkg-query', '-W', 'proxmox-kernel-*-pve'],
+outfunc => sub {
+   my $installed_kernel = shift;
+   $installed_kernel =~ 
s/^\s*proxmox-kernel(-\d+.\d+.\d+-\d+-pve)\s*$/proxmox-headers$1/;
+   package_is_installed($installed_kernel);
+});
+
+
+
+if ($running_kernel =~ m/^\d+\.\d+\.\d+-\d+-pve$/) {
+print "You are running the proxmox kernel 
`proxmox-kernel-$running_kernel`\n";
+} else {
+die "You are not using a proxmox-kernel, please make sure that the 
appropriate header package is installed.\n";
+}
+
+if (!@missing_packages){
+print "All required packages are installed, you can continue with the 
Nvidia vGPU driver installation.\n";
+exit;
+} else {
+print "The following packages are missing:\n" . join("\n", 
@missing_packages) ."\n";
+print "Would you like to install them now (y/n)?\n";
+}
+
+my $answer = <STDIN>;
+if (defined($answer) && $answer =~ m/^\s*y(?:es)?\s*$/i) {
+if (system(@apt_install, @missing_packages) != 0) {
+   die "apt failed during the installation: ($?)\n";
+}
+}
-- 
2.39.5



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH pve-nvidia-vgpu-helper v3 2/5] debian/control: add dependency for helper script

2025-01-24 Thread Hannes Duerr
Signed-off-by: Hannes Duerr 
---
 debian/control | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/debian/control b/debian/control
index 334bf25..4492b60 100644
--- a/debian/control
+++ b/debian/control
@@ -8,7 +8,8 @@ Homepage: https://www.proxmox.com
 
 Package: pve-nvidia-vgpu-helper
 Architecture: all
-Depends: ${misc:Depends},
+Depends: libapt-pkg-perl,
+ ${misc:Depends},
 Description: Proxmox Nvidia vGPU helper script and systemd service
  This package provides a script, that helps with installing all required
  packages for the Nvidia vGPU driver, and also a systemd template service which
-- 
2.39.5



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH pve-manager v3 1/1] debian/control: add pve-nvidia-vgpu-helper as dependency

2025-01-24 Thread Hannes Duerr
The package ships a script that helps to set up Nvidia vGPU drivers.

Signed-off-by: Hannes Duerr 
---
 debian/control | 1 +
 1 file changed, 1 insertion(+)

diff --git a/debian/control b/debian/control
index 6c94df09..ab02fd76 100644
--- a/debian/control
+++ b/debian/control
@@ -89,6 +89,7 @@ Depends: apt (>= 1.5~),
  pve-firewall,
  pve-ha-manager,
  pve-i18n (>= 3.2.0~),
+ pve-nvidia-vgpu-helper,
  pve-xtermjs (>= 4.7.0-1),
  qemu-server (>= 8.2.7),
  rsync,
-- 
2.39.5



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH pve-nvidia-vgpu-helper v3 4/5] debian: add and install pve-nvidia-sriov systemd template unit file

2025-01-24 Thread Hannes Duerr
Signed-off-by: Hannes Duerr 
---
 debian/pve-nvidia-sriov@.service | 13 +
 debian/rules |  3 +++
 2 files changed, 16 insertions(+)
 create mode 100644 debian/pve-nvidia-sriov@.service

diff --git a/debian/pve-nvidia-sriov@.service b/debian/pve-nvidia-sriov@.service
new file mode 100644
index 000..3706d04
--- /dev/null
+++ b/debian/pve-nvidia-sriov@.service
@@ -0,0 +1,13 @@
+[Unit]
+Description=Enable NVIDIA SR-IOV for PCI ID %i
+ConditionPathExists=/usr/lib/nvidia/sriov-manage
+After=network.target nvidia-vgpud.service nvidia-vgpu-mgr.service
+Before=pve-guests.service
+
+[Service]
+Type=oneshot
+ExecStartPre=/bin/sleep 5
+ExecStart=/usr/lib/nvidia/sriov-manage -e %i
+
+[Install]
+WantedBy=multi-user.target
diff --git a/debian/rules b/debian/rules
index 218df65..d5fe1f6 100755
--- a/debian/rules
+++ b/debian/rules
@@ -6,3 +6,6 @@
 
 %:
dh $@
+
+override_dh_installsystemd:
+   dh_installsystemd --no-start --no-enable --name pve-nvidia-sriov@ 
pve-nvidia-sriov@.service
-- 
2.39.5



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH pve-nvidia-vgpu-helper v2 1/5] create a debian package to make the installation of Nvidia vGPU drivers more convenient

2025-01-24 Thread Hannes Duerr
Signed-off-by: Hannes Duerr 
---
 debian/changelog |  5 +
 debian/control   | 15 +++
 debian/copyright | 14 ++
 debian/rules |  8 
 debian/source/format |  1 +
 5 files changed, 43 insertions(+)
 create mode 100644 debian/changelog
 create mode 100644 debian/control
 create mode 100644 debian/copyright
 create mode 100755 debian/rules
 create mode 100644 debian/source/format

diff --git a/debian/changelog b/debian/changelog
new file mode 100644
index 000..de5e10a
--- /dev/null
+++ b/debian/changelog
@@ -0,0 +1,5 @@
+pve-nvidia-vgpu-helper (8.3.3) UNRELEASED; urgency=medium
+
+  * Initial release.
+
+ -- Proxmox Support Team   Mon, 20 Jan 2025 17:02:52 +0100
diff --git a/debian/control b/debian/control
new file mode 100644
index 000..334bf25
--- /dev/null
+++ b/debian/control
@@ -0,0 +1,15 @@
+Source: pve-nvidia-vgpu-helper
+Section: admin
+Priority: optional
+Maintainer: Proxmox Support Team 
+Build-Depends: debhelper-compat (= 13), lintian,
+Standards-Version: 4.6.2
+Homepage: https://www.proxmox.com
+
+Package: pve-nvidia-vgpu-helper
+Architecture: all
+Depends: ${misc:Depends},
+Description: Proxmox Nvidia vGPU helper script and systemd service
+ This package provides a script, that helps with installing all required
+ packages for the Nvidia vGPU driver, and also a systemd template service which
+ configures SR-IOV per PCI ID
diff --git a/debian/copyright b/debian/copyright
new file mode 100644
index 000..046356b
--- /dev/null
+++ b/debian/copyright
@@ -0,0 +1,14 @@
+Copyright (C) 2016 - 2024 Proxmox Server Solutions GmbH 
+
+   This program is free software: you can redistribute it and/or modify
+   it under the terms of the GNU Affero General Public License as
+   published by the Free Software Foundation, either version 3 of the
+   License, or (at your option) any later version.
+
+   This program is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+   GNU Affero General Public License for more details.
+
+   You should have received a copy of the GNU Affero General Public License
+   along with this program.  If not, see <https://www.gnu.org/licenses/>.
diff --git a/debian/rules b/debian/rules
new file mode 100755
index 000..218df65
--- /dev/null
+++ b/debian/rules
@@ -0,0 +1,8 @@
+#!/usr/bin/make -f
+# -*- makefile -*-
+
+# Uncomment this to turn on verbose mode.
+#export DH_VERBOSE=1
+
+%:
+   dh $@
diff --git a/debian/source/format b/debian/source/format
new file mode 100644
index 000..89ae9db
--- /dev/null
+++ b/debian/source/format
@@ -0,0 +1 @@
+3.0 (native)
-- 
2.39.5



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH qemu-server] cfg2cmd: use tpm-tis and tpm-tis-device depending on the arch

2025-01-15 Thread Hannes Duerr
When using the aarch64 architecture for a VM in combination with a new
enough Windows OS type, the VM fails to start:

> qemu-system-aarch64: -device tpm-tis,tpmdev=tpmdev: 'tpm-tis' is not a valid 
> device model name

QEMU uses the `tpm-tis-device` device model for ARM[0] and RISCV[1]
instead of the `tpm-tis` device model which is used for x86_64.

This patch is a follow-up to [2].

[0] https://www.qemu.org/docs/master/specs/tpm.html#the-qemu-tpm-emulator-device
[1] https://www.qemu.org/docs/master/system/riscv/virt.html#enabling-tpm
[2] 
https://lore.proxmox.com/pve-devel/20250113135638.88099-1-f.eb...@proxmox.com/
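
The same selection could also be written as a small lookup table keyed on the
architecture; a rough sketch of that alternative (the patch itself uses a
plain if/else, as shown in the diff below):

    # device model names as per the QEMU documentation linked above
    my %tpm_model_for_arch = (
        x86_64  => 'tpm-tis',
        aarch64 => 'tpm-tis-device',
    );
    my $model = $tpm_model_for_arch{$arch} // 'tpm-tis-device';
    push @$devices, '-device', "$model,tpmdev=tpmdev";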

Signed-off-by: Hannes Duerr 
---
 PVE/QemuServer.pm | 10 +++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 43008f3f..f7cb5fcb 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -3203,7 +3203,7 @@ sub get_tpm_paths {
 }
 
 sub add_tpm_device {
-my ($vmid, $devices, $conf) = @_;
+my ($vmid, $devices, $conf, $arch) = @_;
 
 return if !$conf->{tpmstate0};
 
@@ -3211,7 +3211,11 @@ sub add_tpm_device {
 
 push @$devices, "-chardev", "socket,id=tpmchar,path=$paths->{socket}";
 push @$devices, "-tpmdev", "emulator,id=tpmdev,chardev=tpmchar";
-push @$devices, "-device", "tpm-tis,tpmdev=tpmdev";
+if ($arch eq 'x86_64') {
+   push @$devices, "-device", "tpm-tis,tpmdev=tpmdev";
+} else {
+   push @$devices, "-device", "tpm-tis-device,tpmdev=tpmdev";
+}
 }
 
 sub start_swtpm {
@@ -3838,7 +3842,7 @@ sub config_to_command {
 
 # Add a TPM only if the VM is not a template,
 # to support backing up template VMs even if the TPM disk is 
write-protected.
-add_tpm_device($vmid, $devices, $conf) if 
(!PVE::QemuConfig->is_template($conf));
+add_tpm_device($vmid, $devices, $conf, $arch) if 
(!PVE::QemuConfig->is_template($conf));
 
 my $sockets = 1;
 $sockets = $conf->{smp} if $conf->{smp}; # old style - no longer iused
-- 
2.39.5



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH pve-nvidia-vgpu-helper v5 2/4] debian/control: add dependency for helper script

2025-02-13 Thread Hannes Duerr
Signed-off-by: Hannes Duerr 
---
 debian/control | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/debian/control b/debian/control
index 334bf25..352e63a 100644
--- a/debian/control
+++ b/debian/control
@@ -8,7 +8,9 @@ Homepage: https://www.proxmox.com
 
 Package: pve-nvidia-vgpu-helper
 Architecture: all
-Depends: ${misc:Depends},
+Depends: libapt-pkg-perl,
+ libdpkg-perl,
+ ${misc:Depends},
 Description: Proxmox Nvidia vGPU helper script and systemd service
  This package provides a script, that helps with installing all required
  packages for the Nvidia vGPU driver, and also a systemd template service which
-- 
2.39.5



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH pve-nvidia-vgpu-helper v5 3/4] add pve-nvidia-vgpu-helper and Makefile to make dependency installation more convenient

2025-02-13 Thread Hannes Duerr
We add the pve-nvidia-vgpu-helper script to simplify the installation of
the required Nvidia vGPU driver dependencies.
The script performs the following tasks:
- install the common dependencies (dkms, libc6-dev, proxmox-default-headers)
- check the currently running kernel and install the necessary kernel
  headers for the running kernel and any newer kernels installed
- blacklist the competing nouveau driver, with the opt-out flag
  --no-blacklist to skip that step
We also add a Makefile to help build the Debian package and install the
script.

Signed-off-by: Hannes Duerr 
---

Notes:
Changes in V5:
* Add syslog Message that mentions nouveau blacklist

Changes in V4:
* add `--help` option displaying the usage
* as suggested by @Dominik we squash the patch, to only install headers
  for running kernel version and newer ones, into this one
* install `proxmox-headers-$major.$minor-pve` package so that the
  headers for future updates are also installed directly
* blacklist the nouveau driver by default and add opt-out flag
  `--no-blacklist`

 Makefile   | 54 +++
 pve-nvidia-vgpu-helper | 99 ++
 2 files changed, 153 insertions(+)
 create mode 100644 Makefile
 create mode 100755 pve-nvidia-vgpu-helper

diff --git a/Makefile b/Makefile
new file mode 100644
index 000..c6e461d
--- /dev/null
+++ b/Makefile
@@ -0,0 +1,54 @@
+include /usr/share/dpkg/default.mk
+
+PACKAGE=pve-nvidia-vgpu-helper
+
+BINDIR=/usr/bin/
+DESTDIR=
+
+GITVERSION:=$(shell git rev-parse HEAD)
+
+BUILDDIR ?= $(PACKAGE)-$(DEB_VERSION)
+DSC=$(PACKAGE)_$(DEB_VERSION).dsc
+
+DEB=$(PACKAGE)_$(DEB_VERSION_UPSTREAM_REVISION)_all.deb
+
+all:
+deb: $(DEB)
+
+$(BUILDDIR): debian
+   rm -rf $@ $@.tmp
+   rsync -a * $@.tmp/
+   echo "git clone 
git://git.proxmox.com/git/pve-nvidia-vgpu-helper.git\\ngit checkout 
$(GITVERSION)" > $@.tmp/debian/SOURCE
+   mv $@.tmp $@
+
+$(DEB): $(BUILDDIR)
+   cd $(BUILDDIR); dpkg-buildpackage -b -uc -us
+   lintian $(DEB)
+
+dsc: $(DSC)
+   $(MAKE) clean
+   $(MAKE) $(DSC)
+   lintian $(DSC)
+
+$(DSC): $(BUILDDIR)
+   cd $(BUILDDIR); dpkg-buildpackage -S -uc -us
+
+sbuild: $(DSC)
+   sbuild $(DSC)
+
+.PHONY: install
+install: pve-nvidia-vgpu-helper
+   install -d $(DESTDIR)$(BINDIR)
+   install -m 0755 pve-nvidia-vgpu-helper $(DESTDIR)$(BINDIR)
+
+.PHONY: upload
+upload: UPLOAD_DIST ?= $(DEB_DISTRIBUTION)
+upload: $(DEB)
+   tar cf - $(DEB)|ssh repo...@repo.proxmox.com -- upload --product pve 
--dist $(UPLOAD_DIST)
+
+.PHONY: distclean
+distclean: clean
+
+.PHONY: clean
+clean:
+   rm -rf *~ $(PACKAGE)-[0-9]*/ $(PACKAGE)*.tar.* *.deb *.dsc *.changes 
*.build *.buildinfo
diff --git a/pve-nvidia-vgpu-helper b/pve-nvidia-vgpu-helper
new file mode 100755
index 000..c162de9
--- /dev/null
+++ b/pve-nvidia-vgpu-helper
@@ -0,0 +1,99 @@
+#!/usr/bin/perl
+
+use strict;
+use warnings;
+
+use PVE::Tools qw(run_command);
+use PVE::SysFSTools;
+use PVE::SafeSyslog;
+
+use AptPkg::Cache;
+use Dpkg::Version;
+use Getopt::Long;
+
+my @apt_install = qw(apt-get --no-install-recommends install --);
+my @dependencies = qw(dkms libc6-dev proxmox-default-headers);
+my @missing_packages;
+
+die "Please execute the script with root privileges\n" if $>;
+
+my $apt_cache = AptPkg::Cache->new();
+die "unable to initialize AptPkg::Cache\n" if !$apt_cache; 
+
+GetOptions('no-blacklist' => \my $no_blacklist, 'help' => \my $help);
+
+if (defined($help)) {
+print("USAGE:\tpve-nvidia-vgpu-helper [OPTIONS]\n");
+print("\t --help\n");
+print("\t --no-blacklist\n");
+exit;
+}
+
+if (!defined($no_blacklist) && !-e "/etc/modprobe.d/block-nouveau.conf") {
+run_command(["mkdir", "-p", "/etc/modprobe.d/"]);
+PVE::SysFSTools::file_write( "/etc/modprobe.d/block-nouveau.conf",
+"blacklist nouveau" )
+  || die "Could not create block-nouveau.conf";
+syslog('info', "Blacklist nouveau driver");
+
+run_command(["update-initramfs", "-u", "-k", "all"]);
+}
+
+sub package_is_installed {
+my ($package) = @_;
+my $p = $apt_cache->{$package};
+if (!defined($p->{CurrentState}) || $p->{CurrentState} ne "Installed") {
+   push(@missing_packages, $package);
+}
+}
+
+sub install_newer_headers {
+my (%installed_versions) = @_;
+for my $version (keys(%installed_versions)) {
+   # install header for the running kernel and newer kernel versions
+   package_is_installed("proxmox-headers-$version");
+}
+}
+
+foreach my $dependency (@dependencies) {
+package_is_installed($dependency);
+}
+
+
+my $running_kernel;
+run_command( ['/usr/bin/uname', '-r' ],
+outfunc => sub { $running_kernel = shift } 

[pve-devel] [PATCH manager/nvidia-vgpu-helper v5 0/5] reduce setup steps for nvidia vgpu drivers

2025-02-13 Thread Hannes Duerr
Changes in v5:
in commits

Changes in v4:
in commits

Changes in v3:
* install headers for every installed kernel version by default
* additionally add patch to only install headers for running kernel
  version and newer ones, this requires the new dependency
  "libdpkg-perl"
* remove unnecessary intrusive "Dpkg:Options::=--force-confnew"
* rename systemd template unit to "pve-nvidia-sriov@.service"
* check if path "/usr/lib/nvidia/sriov-manage" exists in systemd
  template unit


Changes in v2:
* patches contain all changes to build new repository
* make pve-manager depend on this package instead of the other way around
* install the script to /usr/bin/
* rename the script to pve-nvidia-vgpu-helper because it is only
  relevant for PVE (the repository should therefore also be renamed
  when created)

The aim of the repository is to reduce the necessary installation
steps for the Nvidia VGPU drivers [0]. The package installs a script
which can be used to check and install necessary dependencies and a
systemd template service which can be used to configure the SR-IOV per
pci-id
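
In practice the expected flow would then be to run `pve-nvidia-vgpu-helper`
once to pull in dkms and the matching proxmox-headers packages, and to enable
`pve-nvidia-sriov@<pci-id>.service` for each GPU that should expose virtual
functions (see the usage sketch with patch 4/4 below).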

Part of the changes would later be the adjustment of the wiki page

[0] https://pve.proxmox.com/wiki/NVIDIA_vGPU_on_Proxmox_VE

pve-nvidia-vgpu-helper:

Hannes Duerr (4):
  create a debian package to make the installation of Nvidia vGPU
drivers more convenient
  debian/control: add dependency for helper script
  add pve-nvidia-vgpu-helper and Makefile to make dependency
installation more convenient
  debian: add and install pve-nvidia-sriov systemd template unit file


pve-manager:

Hannes Duerr (1):
  debian/control: add pve-nvidia-vgpu-helper as dependency

 debian/control | 1 +
 1 file changed, 1 insertion(+)


Summary over all repositories:
  1 files changed, 1 insertions(+), 0 deletions(-)

-- 
Generated by git-murpp 0.8.0


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH pve-manager v5 1/1] debian/control: add pve-nvidia-vgpu-helper as dependency

2025-02-13 Thread Hannes Duerr
The package ships a script that helps to set up Nvidia vGPU drivers.

Signed-off-by: Hannes Duerr 
---
 debian/control | 1 +
 1 file changed, 1 insertion(+)

diff --git a/debian/control b/debian/control
index 6c94df09..ab02fd76 100644
--- a/debian/control
+++ b/debian/control
@@ -89,6 +89,7 @@ Depends: apt (>= 1.5~),
  pve-firewall,
  pve-ha-manager,
  pve-i18n (>= 3.2.0~),
+ pve-nvidia-vgpu-helper,
  pve-xtermjs (>= 4.7.0-1),
  qemu-server (>= 8.2.7),
  rsync,
-- 
2.39.5



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH pve-nvidia-vgpu-helper v5 1/4] create a debian package to make the installation of Nvidia vGPU drivers more convenient

2025-02-13 Thread Hannes Duerr
Signed-off-by: Hannes Duerr 
---
 debian/changelog |  5 +
 debian/control   | 15 +++
 debian/copyright | 14 ++
 debian/rules |  8 
 debian/source/format |  1 +
 5 files changed, 43 insertions(+)
 create mode 100644 debian/changelog
 create mode 100644 debian/control
 create mode 100644 debian/copyright
 create mode 100755 debian/rules
 create mode 100644 debian/source/format

diff --git a/debian/changelog b/debian/changelog
new file mode 100644
index 000..de5e10a
--- /dev/null
+++ b/debian/changelog
@@ -0,0 +1,5 @@
+pve-nvidia-vgpu-helper (8.3.3) UNRELEASED; urgency=medium
+
+  * Initial release.
+
+ -- Proxmox Support Team   Mon, 20 Jan 2025 17:02:52 +0100
diff --git a/debian/control b/debian/control
new file mode 100644
index 000..334bf25
--- /dev/null
+++ b/debian/control
@@ -0,0 +1,15 @@
+Source: pve-nvidia-vgpu-helper
+Section: admin
+Priority: optional
+Maintainer: Proxmox Support Team 
+Build-Depends: debhelper-compat (= 13), lintian,
+Standards-Version: 4.6.2
+Homepage: https://www.proxmox.com
+
+Package: pve-nvidia-vgpu-helper
+Architecture: all
+Depends: ${misc:Depends},
+Description: Proxmox Nvidia vGPU helper script and systemd service
+ This package provides a script, that helps with installing all required
+ packages for the Nvidia vGPU driver, and also a systemd template service which
+ configures SR-IOV per PCI ID
diff --git a/debian/copyright b/debian/copyright
new file mode 100644
index 000..046356b
--- /dev/null
+++ b/debian/copyright
@@ -0,0 +1,14 @@
+Copyright (C) 2016 - 2024 Proxmox Server Solutions GmbH 
+
+   This program is free software: you can redistribute it and/or modify
+   it under the terms of the GNU Affero General Public License as
+   published by the Free Software Foundation, either version 3 of the
+   License, or (at your option) any later version.
+
+   This program is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+   GNU Affero General Public License for more details.
+
+   You should have received a copy of the GNU Affero General Public License
+   along with this program.  If not, see <https://www.gnu.org/licenses/>.
diff --git a/debian/rules b/debian/rules
new file mode 100755
index 000..218df65
--- /dev/null
+++ b/debian/rules
@@ -0,0 +1,8 @@
+#!/usr/bin/make -f
+# -*- makefile -*-
+
+# Uncomment this to turn on verbose mode.
+#export DH_VERBOSE=1
+
+%:
+   dh $@
diff --git a/debian/source/format b/debian/source/format
new file mode 100644
index 000..89ae9db
--- /dev/null
+++ b/debian/source/format
@@ -0,0 +1 @@
+3.0 (native)
-- 
2.39.5



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH pve-nvidia-vgpu-helper v5 4/4] debian: add and install pve-nvidia-sriov systemd template unit file

2025-02-13 Thread Hannes Duerr
SR-IOV must be re-enabled each time the system is restarted.
This systemd template service takes over that task and enables SR-IOV per
PCI ID/GPU after a reboot.
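
As a usage sketch (the PCI ID below is made up; the real one comes from lspci
on the host), enabling SR-IOV for a single GPU then comes down to
instantiating the template once, e.g. via systemctl or from Perl:

    use PVE::Tools qw(run_command);

    my $pci_id = '0000:01:00.0';    # hypothetical GPU address
    run_command(['systemctl', 'enable', '--now', "pve-nvidia-sriov\@$pci_id.service"]);

The `--no-start`/`--no-enable` flags passed to dh_installsystemd below keep
packaging from enabling the unit globally, so this remains an explicit
per-host step.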

Signed-off-by: Hannes Duerr 
---

Notes:
Changes in v4:
* Change nvidia-vgpud.service nvidia-vgpu-mgr.service to `Before=`
  targets and remove the 5 seconds sleep in `ExecStartPre=` because it
  is not needed anymore

 debian/pve-nvidia-sriov@.service | 12 
 debian/rules |  3 +++
 2 files changed, 15 insertions(+)
 create mode 100644 debian/pve-nvidia-sriov@.service

diff --git a/debian/pve-nvidia-sriov@.service b/debian/pve-nvidia-sriov@.service
new file mode 100644
index 000..f2e4c83
--- /dev/null
+++ b/debian/pve-nvidia-sriov@.service
@@ -0,0 +1,12 @@
+[Unit]
+Description=Enable NVIDIA SR-IOV for PCI ID %i
+ConditionPathExists=/usr/lib/nvidia/sriov-manage
+After=network.target 
+Before=pve-guests.service nvidia-vgpud.service nvidia-vgpu-mgr.service
+
+[Service]
+Type=oneshot
+ExecStart=/usr/lib/nvidia/sriov-manage -e %i
+
+[Install]
+WantedBy=multi-user.target
diff --git a/debian/rules b/debian/rules
index 218df65..d5fe1f6 100755
--- a/debian/rules
+++ b/debian/rules
@@ -6,3 +6,6 @@
 
 %:
dh $@
+
+override_dh_installsystemd:
+   dh_installsystemd --no-start --no-enable --name pve-nvidia-sriov@ 
pve-nvidia-sriov@.service
-- 
2.39.5



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



Re: [pve-devel] [PATCH manager v3] ui: vm console: autodetect novnc or xtermjs

2025-03-25 Thread Hannes Duerr
I just noticed that there is an open bug tracker entry for this
issue/feature request [0], so you can assign yourself and add the bug
number to the commit message.



[0] https://bugzilla.proxmox.com/show_bug.cgi?id=1926

On 2/25/25 16:47, Aaron Lauterer wrote:
[...]


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel