[pve-devel] [PATCH v2 qemu-server] fix #4957: add vendor and product information passthrough for SCSI-Disks

2023-11-08 Thread Hannes Duerr
Adds vendor and product information for SCSI devices to the JSON schema and
checks in the VM create/update API call whether these can be passed to QEMU
as device options.

Signed-off-by: Hannes Duerr 
---

changes in v2:
- when calling the API to create/update a VM, check whether the devices
are 'scsi-hd' or 'scsi-cd' devices, where there is the option to add
vendor and product information; if not, error out
- change the format in product_fmt and vendor_fmt to a pattern that only
allows 40 characters consisting of upper- and lowercase letters, numbers,
'-' and '_'.
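
The restriction described above can be sketched as a small standalone check. The regex and the sample values here are illustrative assumptions based on the changelog wording, not the exact schema code from the patch:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Assumed v2 format: vendor/product strings limited to at most 40
# characters of letters, digits, '-' and '_'.
my $pattern = qr/^[A-Za-z0-9_-]{1,40}$/;

for my $value ('SAMSUNG', 'my_disk-01', 'x' x 41, 'bad value') {
    printf "%-45s %s\n", $value, ($value =~ $pattern ? 'valid' : 'invalid');
}
```

The last two values fail: one exceeds 40 characters, the other contains a space.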

 PVE/API2/Qemu.pm|  9 +
 PVE/QemuServer.pm   | 83 +
 PVE/QemuServer/Drive.pm | 24 
 3 files changed, 92 insertions(+), 24 deletions(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 38bdaab..6898ec9 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -1030,6 +1030,11 @@ __PACKAGE__->register_method({
);
$conf->{$_} = $created_opts->{$_} for keys 
$created_opts->%*;
 
+	foreach my $opt (keys $created_opts->%*) {
+	    if ($opt =~ m/scsi/) {
+		PVE::QemuServer::check_scsi_feature_compatibility($opt, $created_opts, $conf, $storecfg, $param);
+	    }
+	}
if (!$conf->{boot}) {
my $devs = 
PVE::QemuServer::get_default_bootdevices($conf);
$conf->{boot} = PVE::QemuServer::print_bootorder($devs);
@@ -1840,6 +1845,10 @@ my $update_vm_api  = sub {
);
$conf->{pending}->{$_} = $created_opts->{$_} for keys 
$created_opts->%*;
 
+	    if ($opt =~ m/scsi/) {
+		PVE::QemuServer::check_scsi_feature_compatibility($opt, $created_opts, $conf, $storecfg, $param);
+	    }
+
# default legacy boot order implies all cdroms anyway
if (@bootorder) {
# append new CD drives to bootorder to mark them 
bootable
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index dbcd568..919728b 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -26,6 +26,7 @@ use Storable qw(dclone);
 use Time::HiRes qw(gettimeofday usleep);
 use URI::Escape;
 use UUID;
+use Data::Dumper;
 
 use PVE::Cluster qw(cfs_register_file cfs_read_file cfs_write_file);
 use PVE::CGroup;
@@ -1428,6 +1429,53 @@ my sub get_drive_id {
 return "$drive->{interface}$drive->{index}";
 }
 
+sub get_scsi_devicetype {
+my ($drive, $storecfg, $machine_type) = @_;
+
+my $devicetype = 'hd';
+my $path = '';
+if (drive_is_cdrom($drive)) {
+   $devicetype = 'cd';
+} else {
+   if ($drive->{file} =~ m|^/|) {
+   $path = $drive->{file};
+   if (my $info = path_is_scsi($path)) {
+   if ($info->{type} == 0 && $drive->{scsiblock}) {
+   $devicetype = 'block';
+   } elsif ($info->{type} == 1) { # tape
+   $devicetype = 'generic';
+   }
+   }
+   } else {
+   $path = PVE::Storage::path($storecfg, $drive->{file});
+   }
+
+   # for compatibility only, we prefer scsi-hd (#2408, #2355, #2380)
+   my $version = kvm_user_version();
+   $version = extract_version($machine_type, $version);
+   if ($path =~ m/^iscsi\:\/\// &&
+  !min_version($version, 4, 1)) {
+   $devicetype = 'generic';
+   }
+}
+
+return $devicetype;
+}
+
+sub check_scsi_feature_compatibility {
+my($opt, $created_opts, $conf, $storecfg, $param) = @_;
+
+my $drive = parse_drive($opt, $created_opts->{$opt});
+my $machine_type = get_vm_machine($conf, undef, $conf->{arch});
+my $drivetype = get_scsi_devicetype($drive, $storecfg, $machine_type);
+
+if ($drivetype ne 'hd' && $drivetype ne 'cd') {
+   if ($param->{$opt} =~ m/vendor/ || $param->{$opt} =~ m/product/) {
+	    die "only 'scsi-hd' and 'scsi-cd' devices support passing vendor and product information\n";
+   }
+}
+}
+
 sub print_drivedevice_full {
 my ($storecfg, $conf, $vmid, $drive, $bridges, $arch, $machine_type) = @_;
 
@@ -1443,31 +1491,8 @@ sub print_drivedevice_full {
 
my ($maxdev, $controller, $controller_prefix) = scsihw_infos($conf, 
$drive);
my $unit = $drive->{index} % $maxdev;
-   my $devicetype = 'hd';
-   my $path = '';
-   if (drive_is_cdrom($drive)) {
-   $devicetype = 'cd';
-   } else {
-   if ($drive->{file} =~ m|^/|) {
-   $path = $drive->{file};
-   if (my $info = path_is_scsi($path)) {
-   if ($info->{type} == 0 && $drive->{scsiblock}) {
-   $devicetype = 'block';
-   } elsif ($info->{type} == 1) { # tape
-   $devicetype = 'generic';
-   }
-   }
-   } el

[pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v8 0/7] virtiofs

2023-11-08 Thread Markus Frank
build-order:
1. cluster
2. guest-common
3. docs
4. qemu-server
5. manager

I did not get virtiofsd to run with run_command without creating zombie
processes after shutdown.
So I replaced run_command with exec for now.
Maybe someone can find out why this happens.
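
The usual way to avoid such zombies is a double fork, as the qemu-server patch in this series does. A minimal standalone sketch (not the actual patch code; the virtiofsd path and socket name below are hypothetical, following the naming used later in the series):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use POSIX qw(setsid _exit);

# Double-fork sketch: the intermediate child exits immediately, so the
# grandchild running the daemon is re-parented to init and never becomes
# a zombie of the calling process.
sub spawn_detached {
    my (@cmd) = @_;

    my $pid = fork() // die "fork failed: $!\n";
    if ($pid == 0) {
	# intermediate child: detach from the controlling terminal and fork again
	setsid() or _exit(1);
	my $pid2 = fork() // _exit(1);
	if ($pid2 == 0) {
	    exec(@cmd) or _exit(1); # grandchild: replace ourselves with the daemon
	}
	_exit(0); # intermediate child exits right away
    }
    waitpid($pid, 0); # reap the intermediate child immediately
}

# hypothetical invocation
spawn_detached('/usr/libexec/virtiofsd', '--socket-path=/var/run/virtiofsd/vm100-fs0');
```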


cluster:

Markus Frank (1):
  add mapping/dir.cfg for resource mapping

 src/PVE/Cluster.pm  | 1 +
 src/pmxcfs/status.c | 1 +
 2 files changed, 2 insertions(+)


guest-common:

v7:
* renamed DIR to Dir
* made xattr & acl settings per directory-id and not per node

Markus Frank (1):
  add Dir mapping config

 src/Makefile   |   1 +
 src/PVE/Mapping/Dir.pm | 177 +
 2 files changed, 178 insertions(+)
 create mode 100644 src/PVE/Mapping/Dir.pm


docs:

v8:
 * added "Known Limitations"
 * removed old mount tag

Markus Frank (1):
  added shared filesystem doc for virtio-fs

 qm.adoc | 84 +++--
 1 file changed, 82 insertions(+), 2 deletions(-)


qemu-server:

v8:
 * changed permission checks to cover cloning and restoring and
 made the helper functions similar to the PCI, USB permission check functions.
 * warn if acl is activated on a Windows VM, since the virtiofs device cannot
 be mounted on Windows if acl is on; also moved the dir config validation into
 its own function. This function is called in config_to_command so that
 no virtiofsd command is left running if qmstart died because a second
 virtiofs device was incorrectly configured.

v7:
 * enabled use of hugepages
 * renamed variables
 * added acl & xattr parameters that overwrite the default directory
 mapping settings

v6:
 * added virtiofsd dependency
 * 2 new patches:
* Permission check for virtiofs directory access
* check_local_resources: virtiofs

v5:
 * allow numa settings with virtio-fs
 * added direct-io & cache settings
 * changed to rust implementation of virtiofsd
 * made a double fork and closed all file descriptors so that the lockfile
 gets released.

v3:
 * created our own socket and get the file descriptor for virtiofsd,
 so there is no race between starting virtiofsd & QEMU
 * added TODO to replace virtiofsd with rust implementation in bookworm
 (I packaged the rust implementation for bookworm & the C implementation
 in qemu will be removed in qemu 8.0)

v2:
 * replaced sharedfiles_fmt path in qemu-server with dirid:
 * user can use the dirid to specify the directory without requiring root access

Markus Frank (3):
  feature #1027: virtio-fs support
  Permission check for virtiofs directory access
  check_local_resources: virtiofs

 PVE/API2/Qemu.pm |  39 ++-
 PVE/QemuServer.pm| 200 ++-
 PVE/QemuServer/Memory.pm |  25 +++--
 debian/control   |   1 +
 test/MigrationTest/Shared.pm |   7 ++
 5 files changed, 263 insertions(+), 9 deletions(-)


manager:

v8: removed ui patches for now

Markus Frank (1):
  api: add resource map api endpoints for directories

 PVE/API2/Cluster/Mapping.pm   |   7 +
 PVE/API2/Cluster/Mapping/Dir.pm   | 309 ++
 PVE/API2/Cluster/Mapping/Makefile |   3 +-
 3 files changed, 318 insertions(+), 1 deletion(-)
 create mode 100644 PVE/API2/Cluster/Mapping/Dir.pm

-- 
2.39.2




___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH docs v8 3/7] added shared filesystem doc for virtio-fs

2023-11-08 Thread Markus Frank
Signed-off-by: Markus Frank 
---
 qm.adoc | 84 +++--
 1 file changed, 82 insertions(+), 2 deletions(-)

diff --git a/qm.adoc b/qm.adoc
index c4f1024..571c42e 100644
--- a/qm.adoc
+++ b/qm.adoc
@@ -996,6 +996,85 @@ recommended to always use a limiter to avoid guests using 
too many host
 resources. If desired, a value of '0' for `max_bytes` can be used to disable
 all limits.
 
+[[qm_virtiofs]]
+Virtio-fs
+~
+
+Virtio-fs is a shared file system that enables sharing a directory between
+host and guest VM, taking advantage of the locality of virtual machines
+and the hypervisor to achieve higher throughput than 9p.
+
+To use virtio-fs, the https://gitlab.com/virtio-fs/virtiofsd[virtiofsd] daemon
+needs to run in the background.
+In {pve} this process starts immediately before the start of QEMU.
+
+Linux VMs with kernel >=5.4 support this feature by default.
+
+There is a guide available on how to utilize virtio-fs in Windows VMs.
+https://github.com/virtio-win/kvm-guest-drivers-windows/wiki/Virtiofs:-Shared-file-system
+
+Known limitations
+^
+
+* Virtiofsd crashing means no recovery until VM is fully stopped
+and restarted.
+* Virtiofsd not responding may result in NFS-like hanging access in the VM.
+* Memory hotplug does not work in combination with virtio-fs.
+* Windows cannot understand ACLs. Therefore, disable the `acl` option for
+Windows VMs, otherwise the virtio-fs device will not be visible within the VM.
+
+Add mapping for Shared Directories
+^^
+
+To add a mapping, either use the API directly with pvesh as described in the
+xref:resource_mapping[Resource Mapping] section,
+or add the mapping to the configuration file `/etc/pve/mapping/dir.cfg`:
+
+
+some-dir-id
+map node=node1,path=/mnt/share/,submounts=1
+map node=node2,path=/mnt/share/,
+xattr 1
+acl 1
+
+
+Set `submounts` to `1` when multiple file systems are mounted in a
+shared directory.
+
+Add virtio-fs to VM
+^^^
+
+To share a directory using virtio-fs, you need to specify the directory ID
+(dirid) that has been configured in the Resource Mapping.
+Additionally, you can set the `cache` option to either `always`, `never`,
+or `auto`, depending on your requirements.
+If you want virtio-fs to honor the `O_DIRECT` flag, you can set the
+`direct-io` parameter to `1`.
+Additionally, it is possible to override the default mapping settings
+for xattr & acl by setting them to either `1` or `0`.
+
+The `acl` parameter automatically implies `xattr`, that is, it makes no
+difference whether you set xattr to `0` if acl is set to `1`.
+
+
+qm set  -virtiofs0 dirid=,cache=always,direct-io=1
+qm set  -virtiofs1 ,cache=never,xattr=1
+qm set  -virtiofs2 ,acl=1
+
+
+To mount virtio-fs in a guest VM with the Linux kernel virtio-fs driver, run
+the following command:
+
+The dirid associated with the path on the current node is also used as the
+mount tag (name used to mount the device on the guest).
+
+
+mount -t virtiofs  
+
+
+For more information on available virtiofsd parameters, see the
+https://gitlab.com/virtio-fs/virtiofsd[GitLab virtiofsd project page].
+
 [[qm_bootorder]]
 Device Boot Order
 ~
@@ -1603,8 +1682,9 @@ in the relevant tab in the `Resource Mappings` category, 
or on the cli with
 
 [thumbnail="screenshot/gui-datacenter-mapping-pci-edit.png"]
 
-Where `` is the hardware type (currently either `pci` or `usb`) and
-`` are the device mappings and other configuration parameters.
+Where `` is the hardware type (currently either `pci`, `usb` or
+xref:qm_virtiofs[dir]) and `` are the device mappings and other
+configuration parameters.
 
 Note that the options must include a map property with all identifying
 properties of that hardware, so that it's possible to verify the hardware did
-- 
2.39.2






[pve-devel] [PATCH cluster v8 1/7] add mapping/dir.cfg for resource mapping

2023-11-08 Thread Markus Frank
Add it to both the Perl side (PVE/Cluster.pm) and the pmxcfs side
(status.c).

Signed-off-by: Markus Frank 
---
 src/PVE/Cluster.pm  | 1 +
 src/pmxcfs/status.c | 1 +
 2 files changed, 2 insertions(+)

diff --git a/src/PVE/Cluster.pm b/src/PVE/Cluster.pm
index cfa2583..39bdfa1 100644
--- a/src/PVE/Cluster.pm
+++ b/src/PVE/Cluster.pm
@@ -82,6 +82,7 @@ my $observed = {
 'virtual-guest/cpu-models.conf' => 1,
 'mapping/pci.cfg' => 1,
 'mapping/usb.cfg' => 1,
+'mapping/dir.cfg' => 1,
 };
 
 sub prepare_observed_file_basedirs {
diff --git a/src/pmxcfs/status.c b/src/pmxcfs/status.c
index c8094ac..48ebc62 100644
--- a/src/pmxcfs/status.c
+++ b/src/pmxcfs/status.c
@@ -112,6 +112,7 @@ static memdb_change_t memdb_change_array[] = {
{ .path = "firewall/cluster.fw" },
{ .path = "mapping/pci.cfg" },
{ .path = "mapping/usb.cfg" },
+   { .path = "mapping/dir.cfg" },
 };
 
 static GMutex mutex;
-- 
2.39.2






[pve-devel] [PATCH qemu-server v8 4/7] feature #1027: virtio-fs support

2023-11-08 Thread Markus Frank
add support for sharing directories with a guest vm

virtio-fs needs virtiofsd to be started.
In order to start virtiofsd as a process (despite being a daemon, it does not
run in the background), a double fork is used.

virtiofsd should close itself together with qemu.

There are the parameters dirid
and the optional parameters direct-io & cache.
Additionally the xattr & acl parameter overwrite the
directory mapping settings for xattr & acl.

The dirid gets mapped to the path on the current node
and is also used as a mount-tag (name used to mount the
device on the guest).

example config:
```
virtiofs0: foo,direct-io=1,cache=always,acl=1
virtiofs1: dirid=bar,cache=never,xattr=1
```

For information on the optional parameters, see:
https://gitlab.com/virtio-fs/virtiofsd/-/blob/main/README.md

Signed-off-by: Markus Frank 
---
 PVE/QemuServer.pm| 185 +++
 PVE/QemuServer/Memory.pm |  25 --
 debian/control   |   1 +
 3 files changed, 205 insertions(+), 6 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 2895675..92580df 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -43,6 +43,7 @@ use PVE::PBSClient;
 use PVE::RESTEnvironment qw(log_warn);
 use PVE::RPCEnvironment;
 use PVE::Storage;
+use PVE::Mapping::Dir;
 use PVE::SysFSTools;
 use PVE::Systemd;
 use PVE::Tools qw(run_command file_read_firstline file_get_contents 
dir_glob_foreach get_host_arch $IPV6RE);
@@ -277,6 +278,42 @@ my $rng_fmt = {
 },
 };
 
+my $virtiofs_fmt = {
+'dirid' => {
+   type => 'string',
+   default_key => 1,
+   description => "Mapping identifier of the directory mapping to be"
+   ." shared with the guest. Also used as a mount tag inside the VM.",
+   format_description => 'mapping-id',
+   format => 'pve-configid',
+},
+'cache' => {
+   type => 'string',
+   description => "The caching policy the file system should use"
+   ." (auto, always, never).",
+   format_description => "virtiofs-cache",
+   enum => [qw(auto always never)],
+   optional => 1,
+},
+'direct-io' => {
+   type => 'boolean',
+	description => "Honor the O_DIRECT flag passed down by guest applications",
+   format_description => "virtiofs-directio",
+   optional => 1,
+},
+xattr => {
+   type => 'boolean',
+   description => "Enable support for extended attributes.",
+   optional => 1,
+},
+acl => {
+   type => 'boolean',
+   description => "Enable support for posix ACLs (implies --xattr).",
+   optional => 1,
+},
+};
+PVE::JSONSchema::register_format('pve-qm-virtiofs', $virtiofs_fmt);
+
 my $meta_info_fmt = {
 'ctime' => {
type => 'integer',
@@ -839,6 +876,7 @@ while (my ($k, $v) = each %$confdesc) {
 }
 
 my $MAX_NETS = 32;
+my $MAX_VIRTIOFS = 10;
 my $MAX_SERIAL_PORTS = 4;
 my $MAX_PARALLEL_PORTS = 3;
 
@@ -948,6 +986,21 @@ my $netdesc = {
 
 PVE::JSONSchema::register_standard_option("pve-qm-net", $netdesc);
 
+my $virtiofsdesc = {
+optional => 1,
+type => 'string', format => $virtiofs_fmt,
+description => "share files between host and guest",
+};
+PVE::JSONSchema::register_standard_option("pve-qm-virtiofs", $virtiofsdesc);
+
+sub max_virtiofs {
+return $MAX_VIRTIOFS;
+}
+
+for (my $i = 0; $i < $MAX_VIRTIOFS; $i++)  {
+$confdesc->{"virtiofs$i"} = $virtiofsdesc;
+}
+
 my $ipconfig_fmt = {
 ip => {
type => 'string',
@@ -4055,6 +4108,23 @@ sub config_to_command {
push @$devices, '-device', $netdevicefull;
 }
 
+my $virtiofs_enabled = 0;
+for (my $i = 0; $i < $MAX_VIRTIOFS; $i++) {
+   my $opt = "virtiofs$i";
+
+   next if !$conf->{$opt};
+   my $virtiofs = parse_property_string('pve-qm-virtiofs', $conf->{$opt});
+   next if !$virtiofs;
+
+   check_virtiofs_config ($conf, $virtiofs);
+
+	push @$devices, '-chardev', "socket,id=virtfs$i,path=/var/run/virtiofsd/vm$vmid-fs$i";
+   push @$devices, '-device', 'vhost-user-fs-pci,queue-size=1024'
+   .",chardev=virtfs$i,tag=$virtiofs->{dirid}";
+
+   $virtiofs_enabled = 1;
+}
+
 if ($conf->{ivshmem}) {
my $ivshmem = parse_property_string($ivshmem_fmt, $conf->{ivshmem});
 
@@ -4114,6 +4184,14 @@ sub config_to_command {
 }
 push @$machineFlags, "type=${machine_type_min}";
 
+if ($virtiofs_enabled && !$conf->{numa}) {
+   # kvm: '-machine memory-backend' and '-numa memdev' properties are
+   # mutually exclusive
+   push @$devices, '-object', 'memory-backend-memfd,id=virtiofs-mem'
+   .",size=$conf->{memory}M,share=on";
+   push @$machineFlags, 'memory-backend=virtiofs-mem';
+}
+
 push @$cmd, @$devices;
 push @$cmd, '-rtc', join(',', @$rtcFlags) if scalar(@$rtcFlags);
 push @$cmd, '-machine', join(',', @$machineFlags) if 
scalar(@$machineFlags);
@@ -4140,6 +4218,96 @@ sub config_to_command {
 return wantarray ? ($cmd, $vo

[pve-devel] [PATCH qemu-server v8 5/7] Permission check for virtiofs directory access

2023-11-08 Thread Markus Frank
Signed-off-by: Markus Frank 
---
 PVE/API2/Qemu.pm  | 39 ++-
 PVE/QemuServer.pm |  5 -
 2 files changed, 42 insertions(+), 2 deletions(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index c8a87f3..1c5eb4c 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -650,6 +650,32 @@ my sub check_vm_create_hostpci_perm {
 return 1;
 };
 
+my sub check_dir_perm {
+my ($rpcenv, $authuser, $vmid, $pool, $opt, $value) = @_;
+
+return 1 if $authuser eq 'root@pam';
+
+$rpcenv->check_vm_perm($authuser, $vmid, $pool, ['VM.Config.Disk']);
+
+my $virtiofs = PVE::JSONSchema::parse_property_string('pve-qm-virtiofs', $value);
+$rpcenv->check_full($authuser, "/mapping/dir/$virtiofs->{dirid}", ['Mapping.Use']);
+
+return 1;
+};
+
+my sub check_vm_create_dir_perm {
+my ($rpcenv, $authuser, $vmid, $pool, $param) = @_;
+
+return 1 if $authuser eq 'root@pam';
+
+foreach my $opt (keys %{$param}) {
+   next if $opt !~ m/^virtiofs\d+$/;
+   check_dir_perm($rpcenv, $authuser, $vmid, $pool, $opt, $param->{$opt});
+}
+
+return 1;
+};
+
 my $check_vm_modify_config_perm = sub {
 my ($rpcenv, $authuser, $vmid, $pool, $key_list) = @_;
 
@@ -660,7 +686,7 @@ my $check_vm_modify_config_perm = sub {
# else, as there the permission can be value dependend
next if PVE::QemuServer::is_valid_drivename($opt);
next if $opt eq 'cdrom';
-   next if $opt =~ m/^(?:unused|serial|usb|hostpci)\d+$/;
+   next if $opt =~ m/^(?:unused|serial|usb|hostpci|virtiofs)\d+$/;
next if $opt eq 'tags';
 
 
@@ -929,6 +955,7 @@ __PACKAGE__->register_method({
&$check_vm_create_serial_perm($rpcenv, $authuser, $vmid, $pool, 
$param);
check_vm_create_usb_perm($rpcenv, $authuser, $vmid, $pool, $param);
check_vm_create_hostpci_perm($rpcenv, $authuser, $vmid, $pool, 
$param);
+   check_vm_create_dir_perm($rpcenv, $authuser, $vmid, $pool, $param);
 
PVE::QemuServer::check_bridge_access($rpcenv, $authuser, $param);
&$check_cpu_model_access($rpcenv, $authuser, $param);
@@ -1790,6 +1817,10 @@ my $update_vm_api  = sub {
check_hostpci_perm($rpcenv, $authuser, $vmid, undef, $opt, 
$val);
PVE::QemuConfig->add_to_pending_delete($conf, $opt, $force);
PVE::QemuConfig->write_config($vmid, $conf);
+   } elsif ($opt =~ m/^virtiofs\d$/) {
+	    check_dir_perm($rpcenv, $authuser, $vmid, undef, $opt, $val);
+   PVE::QemuConfig->add_to_pending_delete($conf, $opt, $force);
+   PVE::QemuConfig->write_config($vmid, $conf);
} elsif ($opt eq 'tags') {
assert_tag_permissions($vmid, $val, '', $rpcenv, $authuser);
delete $conf->{$opt};
@@ -1869,6 +1900,12 @@ my $update_vm_api  = sub {
}
check_hostpci_perm($rpcenv, $authuser, $vmid, undef, $opt, 
$param->{$opt});
$conf->{pending}->{$opt} = $param->{$opt};
+	    } elsif ($opt =~ m/^virtiofs\d$/) {
+		if (my $oldvalue = $conf->{$opt}) {
+		    check_dir_perm($rpcenv, $authuser, $vmid, undef, $opt, $oldvalue);
+		}
+		check_dir_perm($rpcenv, $authuser, $vmid, undef, $opt, $param->{$opt});
+		$conf->{pending}->{$opt} = $param->{$opt};
} elsif ($opt eq 'tags') {
assert_tag_permissions($vmid, $conf->{$opt}, 
$param->{$opt}, $rpcenv, $authuser);
$conf->{pending}->{$opt} = 
PVE::GuestHelpers::get_unique_tags($param->{$opt});
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 92580df..f66f26e 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -6643,7 +6643,10 @@ sub check_mapping_access {
} else {
die "either 'host' or 'mapping' must be set.\n";
}
-   }
+	} elsif ($opt =~ m/^virtiofs\d$/) {
+	    my $virtiofs = PVE::JSONSchema::parse_property_string('pve-qm-virtiofs', $conf->{$opt});
+	    $rpcenv->check_full($user, "/mapping/dir/$virtiofs->{dirid}", ['Mapping.Use']);
+	}
}
 };
 
-- 
2.39.2






[pve-devel] [PATCH qemu-server v8 6/7] check_local_resources: virtiofs

2023-11-08 Thread Markus Frank
add dir mapping checks to check_local_resources

Signed-off-by: Markus Frank 
---
 PVE/QemuServer.pm| 10 +-
 test/MigrationTest/Shared.pm |  7 +++
 2 files changed, 16 insertions(+), 1 deletion(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index f66f26e..b5c2c14 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -2664,6 +2664,7 @@ sub check_local_resources {
 my $nodelist = PVE::Cluster::get_nodelist();
 my $pci_map = PVE::Mapping::PCI::config();
 my $usb_map = PVE::Mapping::USB::config();
+my $dir_map = PVE::Mapping::Dir::config();
 
 my $missing_mappings_by_node = { map { $_ => [] } @$nodelist };
 
@@ -2675,6 +2676,8 @@ sub check_local_resources {
$entry = PVE::Mapping::PCI::get_node_mapping($pci_map, $id, 
$node);
} elsif ($type eq 'usb') {
$entry = PVE::Mapping::USB::get_node_mapping($usb_map, $id, 
$node);
+   } elsif ($type eq 'dir') {
+   $entry = PVE::Mapping::Dir::get_node_mapping($dir_map, $id, 
$node);
}
if (!scalar($entry->@*)) {
push @{$missing_mappings_by_node->{$node}}, $key;
@@ -2703,9 +2706,14 @@ sub check_local_resources {
push @$mapped_res, $k;
}
}
+   if ($k =~ m/^virtiofs/) {
+   my $entry = parse_property_string('pve-qm-virtiofs', $conf->{$k});
+   $add_missing_mapping->('dir', $k, $entry->{dirid});
+   push @$mapped_res, $k;
+   }
# sockets are safe: they will recreated be on the target side 
post-migrate
next if $k =~ m/^serial/ && ($conf->{$k} eq 'socket');
-   push @loc_res, $k if $k =~ m/^(usb|hostpci|serial|parallel)\d+$/;
+	push @loc_res, $k if $k =~ m/^(usb|hostpci|serial|parallel|virtiofs)\d+$/;
 }
 
 die "VM uses local resources\n" if scalar @loc_res && !$noerr;
diff --git a/test/MigrationTest/Shared.pm b/test/MigrationTest/Shared.pm
index aa7203d..c5d0722 100644
--- a/test/MigrationTest/Shared.pm
+++ b/test/MigrationTest/Shared.pm
@@ -90,6 +90,13 @@ $mapping_pci_module->mock(
 },
 );
 
+our $mapping_dir_module = Test::MockModule->new("PVE::Mapping::Dir");
+$mapping_dir_module->mock(
+config => sub {
+   return {};
+},
+);
+
 our $ha_config_module = Test::MockModule->new("PVE::HA::Config");
 $ha_config_module->mock(
 vm_is_ha_managed => sub {
-- 
2.39.2






[pve-devel] [PATCH guest-common v8 2/7] add Dir mapping config

2023-11-08 Thread Markus Frank
Adds a config file for directories by using a 'map' array property string
for each node mapping.
Next to node & path, there is an optional submounts parameter in the map
array. Additionally, there are default settings for xattr & acl.

example config:
```
some-dir-id
map node=node1,path=/mnt/share/,submounts=1
map node=node2,path=/mnt/share/,
xattr 1
acl 1
```
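
A single `map` line from the example above is a comma-separated key=value property string. The real code goes through PVE::JSONSchema::parse_property_string; this standalone sketch only illustrates the shape of the data:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Parse one 'map' property string into a hash (illustrative only; trailing
# commas, as in the node2 line above, are tolerated by skipping empty items).
sub parse_map_line {
    my ($line) = @_;
    my %entry;
    for my $kv (grep { length } split /,/, $line) {
	my ($k, $v) = split /=/, $kv, 2;
	$entry{$k} = $v;
    }
    return \%entry;
}

my $map = parse_map_line('node=node1,path=/mnt/share/,submounts=1');
print "$map->{node} shares $map->{path}\n"; # node1 shares /mnt/share/
```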

Signed-off-by: Markus Frank 
---
 src/Makefile   |   1 +
 src/PVE/Mapping/Dir.pm | 177 +
 2 files changed, 178 insertions(+)
 create mode 100644 src/PVE/Mapping/Dir.pm

diff --git a/src/Makefile b/src/Makefile
index cbc40c1..2ebe08b 100644
--- a/src/Makefile
+++ b/src/Makefile
@@ -17,6 +17,7 @@ install: PVE
install -d ${PERL5DIR}/PVE/Mapping
install -m 0644 PVE/Mapping/PCI.pm ${PERL5DIR}/PVE/Mapping/
install -m 0644 PVE/Mapping/USB.pm ${PERL5DIR}/PVE/Mapping/
+   install -m 0644 PVE/Mapping/Dir.pm ${PERL5DIR}/PVE/Mapping/
install -d ${PERL5DIR}/PVE/VZDump
install -m 0644 PVE/VZDump/Plugin.pm ${PERL5DIR}/PVE/VZDump/
install -m 0644 PVE/VZDump/Common.pm ${PERL5DIR}/PVE/VZDump/
diff --git a/src/PVE/Mapping/Dir.pm b/src/PVE/Mapping/Dir.pm
new file mode 100644
index 000..6c87073
--- /dev/null
+++ b/src/PVE/Mapping/Dir.pm
@@ -0,0 +1,177 @@
+package PVE::Mapping::Dir;
+
+use strict;
+use warnings;
+
+use PVE::Cluster qw(cfs_register_file cfs_read_file cfs_lock_file cfs_write_file);
+use PVE::JSONSchema qw(get_standard_option parse_property_string);
+use PVE::SectionConfig;
+use PVE::INotify;
+
+use base qw(PVE::SectionConfig);
+
+my $FILENAME = 'mapping/dir.cfg';
+
+cfs_register_file($FILENAME,
+  sub { __PACKAGE__->parse_config(@_); },
+  sub { __PACKAGE__->write_config(@_); });
+
+
+# so we don't have to repeat the type every time
+sub parse_section_header {
+my ($class, $line) = @_;
+
+if ($line =~ m/^(\S+)\s*$/) {
+   my $id = $1;
+   my $errmsg = undef; # set if you want to skip whole section
+   eval { PVE::JSONSchema::pve_verify_configid($id) };
+   $errmsg = $@ if $@;
+   my $config = {}; # to return additional attributes
+   return ('dir', $id, $errmsg, $config);
+}
+return undef;
+}
+
+sub format_section_header {
+my ($class, $type, $sectionId, $scfg, $done_hash) = @_;
+
+return "$sectionId\n";
+}
+
+sub type {
+return 'dir';
+}
+
+my $map_fmt = {
+node => get_standard_option('pve-node'),
+path => {
+   description => "Directory-path that should be shared with the guest.",
+   type => 'string',
+   format => 'pve-storage-path',
+},
+submounts => {
+   type => 'boolean',
+	description => "Option to tell the guest which directories are mount points.",
+   optional => 1,
+},
+description => {
+   description => "Description of the node specific directory.",
+   type => 'string',
+   optional => 1,
+   maxLength => 4096,
+},
+};
+
+my $defaultData = {
+propertyList => {
+id => {
+type => 'string',
+description => "The ID of the directory",
+format => 'pve-configid',
+},
+description => {
+description => "Description of the directory",
+type => 'string',
+optional => 1,
+maxLength => 4096,
+},
+map => {
+type => 'array',
+description => 'A list of maps for the cluster nodes.',
+   optional => 1,
+items => {
+type => 'string',
+format => $map_fmt,
+},
+},
+   xattr => {
+   type => 'boolean',
+   description => "Enable support for extended attributes.",
+   optional => 1,
+   },
+   acl => {
+   type => 'boolean',
+   description => "Enable support for posix ACLs (implies --xattr).",
+   optional => 1,
+   },
+},
+};
+
+sub private {
+return $defaultData;
+}
+
+sub options {
+return {
+description => { optional => 1 },
+map => {},
+xattr => { optional => 1 },
+acl => { optional => 1 },
+};
+}
+
+sub assert_valid {
+my ($dir_cfg) = @_;
+
+my $path = $dir_cfg->{path};
+
+if (! -e $path) {
+die "Path $path does not exist\n";
+}
+if ((-e $path) && (! -d $path)) {
+die "Path $path exists but is not a directory\n"
+}
+
+return 1;
+};
+
+sub config {
+return cfs_read_file($FILENAME);
+}
+
+sub lock_dir_config {
+my ($code, $errmsg) = @_;
+
+cfs_lock_file($FILENAME, undef, $code);
+my $err = $@;
+if ($err) {
+$errmsg ? die "$errmsg: $err" : die $err;
+}
+}
+
+sub write_dir_config {
+my ($cfg) = @_;
+
+cfs_write_file($FILENAME, $cfg);
+}
+
+sub find_on_current_node {
+my ($id) = @_;
+
+my $cfg = config();
+my $node = PVE::INotify::nodename();
+
+ 

[pve-devel] [PATCH manager v8 7/7] api: add resource map api endpoints for directories

2023-11-08 Thread Markus Frank
Signed-off-by: Markus Frank 
---
 PVE/API2/Cluster/Mapping.pm   |   7 +
 PVE/API2/Cluster/Mapping/Dir.pm   | 309 ++
 PVE/API2/Cluster/Mapping/Makefile |   3 +-
 3 files changed, 318 insertions(+), 1 deletion(-)
 create mode 100644 PVE/API2/Cluster/Mapping/Dir.pm

diff --git a/PVE/API2/Cluster/Mapping.pm b/PVE/API2/Cluster/Mapping.pm
index 40386579..c5993208 100644
--- a/PVE/API2/Cluster/Mapping.pm
+++ b/PVE/API2/Cluster/Mapping.pm
@@ -5,6 +5,7 @@ use warnings;
 
 use PVE::API2::Cluster::Mapping::PCI;
 use PVE::API2::Cluster::Mapping::USB;
+use PVE::API2::Cluster::Mapping::Dir;
 
 use base qw(PVE::RESTHandler);
 
@@ -18,6 +19,11 @@ __PACKAGE__->register_method ({
 path => 'usb',
 });
 
+__PACKAGE__->register_method ({
+subclass => "PVE::API2::Cluster::Mapping::Dir",
+path => 'dir',
+});
+
 __PACKAGE__->register_method ({
 name => 'index',
 path => '',
@@ -43,6 +49,7 @@ __PACKAGE__->register_method ({
my $result = [
{ name => 'pci' },
{ name => 'usb' },
+   { name => 'dir' },
];
 
return $result;
diff --git a/PVE/API2/Cluster/Mapping/Dir.pm b/PVE/API2/Cluster/Mapping/Dir.pm
new file mode 100644
index ..f6e8f26f
--- /dev/null
+++ b/PVE/API2/Cluster/Mapping/Dir.pm
@@ -0,0 +1,309 @@
+package PVE::API2::Cluster::Mapping::Dir;
+
+use strict;
+use warnings;
+
+use Storable qw(dclone);
+
+use PVE::Mapping::Dir ();
+use PVE::JSONSchema qw(get_standard_option);
+use PVE::Tools qw(extract_param);
+
+use base qw(PVE::RESTHandler);
+
+__PACKAGE__->register_method ({
+name => 'index',
+path => '',
+method => 'GET',
+# only proxy if we give the 'check-node' parameter
+proxyto_callback => sub {
+   my ($rpcenv, $proxyto, $param) = @_;
+   return $param->{'check-node'} // 'localhost';
+},
+description => "List Dir Hardware Mapping",
+permissions => {
+	description => "Only lists entries where you have 'Mapping.Modify', 'Mapping.Use' or".
+	    " 'Mapping.Audit' permissions on '/mapping/dir/'.",
+   user => 'all',
+},
+parameters => {
+   additionalProperties => 0,
+   properties => {
+   'check-node' => get_standard_option('pve-node', {
+	    description => "If given, checks the configurations on the given node for ".
+		"correctness, and adds relevant diagnostics for the devices to the response.",
+   optional => 1,
+   }),
+   },
+},
+returns => {
+   type => 'array',
+   items => {
+   type => "object",
+   properties => {
+   id => {
+   type => 'string',
+   description => "The logical ID of the mapping."
+   },
+   map => {
+   type => 'array',
+   description => "The entries of the mapping.",
+   items => {
+   type => 'string',
+   description => "A mapping for a node.",
+   },
+   },
+   description => {
+   type => 'string',
+   description => "A description of the logical mapping.",
+   },
+   xattr => {
+   type => 'boolean',
+   description => "Enable support for extended attributes.",
+   optional => 1,
+   },
+   acl => {
+   type => 'boolean',
+   description => "Enable support for posix ACLs (implies 
--xattr).",
+   optional => 1,
+   },
+   checks => {
+   type => "array",
+   optional => 1,
+		    description => "A list of checks, only present if 'check_node' is set.",
+   items => {
+   type => 'object',
+   properties => {
+   severity => {
+   type => "string",
+   enum => ['warning', 'error'],
+   description => "The severity of the error",
+   },
+   message => {
+   type => "string",
+   description => "The message of the error",
+   },
+   },
+   }
+   },
+   },
+   },
+   links => [ { rel => 'child', href => "{id}" } ],
+},
+code => sub {
+   my ($param) = @_;
+
+   my $rpcenv = PVE::RPCEnvironment::get();
+   my $authuser = $rpcenv->get_user();
+
+   my $check_node = $param->{'check-node'};
+   my $local_node = PVE::INotify::nodename();
+
+   die "wrong node to check - $check_node != $local_node\n"
+   if defined($check_node) && $check_node ne 'localhost' && $check_node ne $local_node;
+
+   

Re: [pve-devel] applied: [PATCH installer v4 0/6] fix #4829: wire up `arc_max` ZFS option to GUI/TUI for PVE

2023-11-08 Thread Christoph Heiss


On Tue, Nov 07, 2023 at 04:50:31PM +0100, Thomas Lamprecht wrote:
>
> Am 07/11/2023 um 13:20 schrieb Christoph Heiss:
> >   run env: remove debug print
> >   install: use correct variable names in zfs_setup_module_conf()
> >   proxinstall: expose `arc_max` ZFS option for PVE installations
> >   test: add tests for zfs_arc_max calculations
> >   common: add ZFS `arc_max` installer setup option
> >   tui: bootdisk: expose `arc_max` ZFS option for PVE installations
> >
>
> applied series, thanks!

Thanks!

>
> Two thoughts though:
>
> - we could add the ARC input field for PBS and PMG too, but not use the
>   automatic-sizing heuristic there (i.e., still 50% memory by default)

Makes sense, esp. for PBS due to its nature. I'll look into it sometime!

>
> - maybe GB with float would fit in better, but surely not perfect either,
>   so just for the record.

Good point, but I agree, hard to judge what's better. As we are
defaulting to max. 16 GiB (for PVE), it's a rather small range, so MiB
makes sense IMO.
Although aligning it with all the other inputs, which are in GiB, makes
sense too. Also, inputting it as a GiB float should be precise enough
for any case.
I will definitely keep it in mind for the future.


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH docs] notifications: update docs to for matcher-based notifications

2023-11-08 Thread Lukas Wagner
Target groups and filters have been replaced by notification matchers.
The matcher can match on certain notification properties and route
the notification to a target in case of a match.

This patch updates the docs to reflect these changes.

Signed-off-by: Lukas Wagner 
---
 notifications.adoc | 254 +++--
 1 file changed, 174 insertions(+), 80 deletions(-)

diff --git a/notifications.adoc b/notifications.adoc
index c4d2931..764ec72 100644
--- a/notifications.adoc
+++ b/notifications.adoc
@@ -5,45 +5,40 @@ ifndef::manvolnum[]
 :pve-toplevel:
 endif::manvolnum[]
 
-[[notification_events]]
-Notification Events

-
-{pve} will attempt to notify system administrators in case of certain events,
-such as:
-
-[width="100%",options="header"]
-|===
-| Event name| Description | Severity
-| `package-updates` | System updates are available| `info`
-| `fencing` | The {pve} HA manager has fenced a node  | `error`
-| `replication` | A storage replication job has failed| `error`
-| `vzdump`  | vzdump backup finished  | `info` (`error` on failure)
-|===
-
-In the 'Notification' panel of the datacenter view, the system's behavior can be
-configured for all events except backup jobs. For backup jobs,
-the settings can be found in the respective backup job configuration.
-For every notification event there is an option to configure the notification
-behavior (*when* to send a notification) and the notification target (*where* to
-send the notification).
-
-
-See also:
-
-* xref:datacenter_configuration_file[Datacenter Configuration]
-* xref:datacenter_configuration_file[vzdump]
+Overview
+
+
+{pve} will send notifications in case of noteworthy events in the system.
+
+There are a number of different xref:notification_events[notification events],
+each with their own set of metadata fields that can be used in
+notification matchers.
+
+A xref:notification_matchers[notification matcher] determines
+_which_ notifications shall be sent _where_.
+A matcher has _match rules_ that can be used to
+match on certain notification properties (e.g. timestamp, severity,
+metadata fields).
+If a matcher matches a notification, the notification will be routed
+to a configurable set of notification targets.
+
+A xref:notification_targets[notification target] is an abstraction for a
+destination where a notification should be sent to - for instance,
+a Gotify server instance, or a set of email addresses.
+There are multiple types of notification targets, including
+`sendmail`, which uses the system's sendmail command to send emails,
+or `gotify`, which sends a notification to a Gotify instance.
+
+The notification system can be configured in the GUI under
+Datacenter -> Notifications. The configuration is stored in
+`/etc/pve/notifications.cfg` and `/etc/pve/priv/notifications.cfg` -
+the latter contains sensitive configuration options such as
+passwords or authentication tokens for notification targets.
 
 [[notification_targets]]
 Notification Targets
 
 
-Notification targets can be configured in the 'Notification Targets' panel.
-
-NOTE: The `mail-to-root` target is always available and cannot be modified or
-removed. It sends a mail the `root@pam` user by using the `sendmail` command and
-serves as a fallback target if no other target is configured for an event.
-
 Sendmail
 
 The sendmail binary is a program commonly found on Unix-like operating systems
@@ -73,7 +68,15 @@ set, the plugin will fall back to the `email_from` setting from
 `datacenter.cfg`. If that is also not set, the plugin will default to
 `root@$hostname`, where `$hostname` is the hostname of the node.
 
-* `filter`: The name of the filter to use for this target.
+Example configuration (`/etc/pve/notifications.cfg`):
+
+sendmail: example
+mailto-user root@pam
+mailto-user admin@pve
+mailto m...@example.com
+from-address p...@example.com
+comment Send to multiple users/addresses
+
 
 Gotify
 ~~
@@ -88,72 +91,163 @@ The configuration for Gotify target plugins has the following options:
 * `server`: The base URL of the Gotify server, e.g. `http://:`
 * `token`: The authentication token. Tokens can be generated within the Gotify
 web interface.
-* `filter`: The name of the filter to use for this target.
 
 NOTE: The Gotify target plugin will respect the HTTP proxy settings from the
  xref:datacenter_configuration_file[datacenter configuration]
 
-Group
-~
+Example configuration (`/etc/pve/notifications.cfg`):
+
+gotify: example
+server http://gotify.example.com:
+comment Send to multiple users/addresses
+
 
-One can only select a single target for notification 

Re: [pve-devel] [PATCH v2 qemu-server] fix #4957: add vendor and product information passthrough for SCSI-Disks

2023-11-08 Thread Fiona Ebner
Am 08.11.23 um 09:51 schrieb Hannes Duerr:
> adds vendor and product information for SCSI devices to the json schema and
> checks in the VM create/update API call if it is possible to add these to 
> QEMU as a device option
> 
> Signed-off-by: Hannes Duerr 
> ---
> 
> changes in v2:
> - when calling the API to create/update a VM, check whether the devices
> are "scsi-hd" or "scsi-cd" devices,where there is the option to add
> vendor and product information, if not error out
> - change the format in product_fmt and vendor_fmt to a pattern that only
> allows 40 characters consisting of upper and lower case letters, numbers and 
> '-' and '_'.
> 
>  PVE/API2/Qemu.pm|  9 +
>  PVE/QemuServer.pm   | 83 +
>  PVE/QemuServer/Drive.pm | 24 
>  3 files changed, 92 insertions(+), 24 deletions(-)
> 
> diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
> index 38bdaab..6898ec9 100644
> --- a/PVE/API2/Qemu.pm
> +++ b/PVE/API2/Qemu.pm
> @@ -1030,6 +1030,11 @@ __PACKAGE__->register_method({
>   );
>   $conf->{$_} = $created_opts->{$_} for keys 
> $created_opts->%*;
>  
> + foreach my $opt (keys $created_opts->%*) {

Style nit: new code should use `for` instead of `foreach`

Maybe sort the keys to have a deterministic order.

> + if ($opt =~ m/scsi/) {
> + 
> PVE::QemuServer::check_scsi_feature_compatibility($opt, $created_opts, $conf, 
> $storecfg, $param);


Style nit: line too long (100 characters is our limit)

Note that $created_opts was already merged into $conf two lines above.
I'd rather not introduce new usage of that variable.

Can we do the check before creating the drive instead? We know if it's a
CD or pass-through and the path or if it's iscsi ahead of time and that
should be enough for the check, or what am I missing?

> + }
> + }
>   if (!$conf->{boot}) {
>   my $devs = 
> PVE::QemuServer::get_default_bootdevices($conf);
>   $conf->{boot} = PVE::QemuServer::print_bootorder($devs);
> @@ -1840,6 +1845,10 @@ my $update_vm_api  = sub {
>   );
>   $conf->{pending}->{$_} = $created_opts->{$_} for keys 
> $created_opts->%*;
>  
> + if ($opt =~ m/scsi/) {
> + PVE::QemuServer::check_scsi_feature_compatibility($opt, 
> $created_opts, $conf, $storecfg, $param);
> + }
> +
>   # default legacy boot order implies all cdroms anyway
>   if (@bootorder) {
>   # append new CD drives to bootorder to mark them 
> bootable
> diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
> index dbcd568..919728b 100644
> --- a/PVE/QemuServer.pm
> +++ b/PVE/QemuServer.pm
> @@ -26,6 +26,7 @@ use Storable qw(dclone);
>  use Time::HiRes qw(gettimeofday usleep);
>  use URI::Escape;
>  use UUID;
> +use Data::Dumper;
>  

Left-over from debugging ;)? You can also use
use JSON;
print to_json($whatever, { canonical => 1, pretty => 1 });
which is often nicer IMHO

>  use PVE::Cluster qw(cfs_register_file cfs_read_file cfs_write_file);
>  use PVE::CGroup;
> @@ -1428,6 +1429,53 @@ my sub get_drive_id {
>  return "$drive->{interface}$drive->{index}";
>  }
>  

Can you please split the patch in two? A preparatory one for creating
this helper and another one for the feature.

I also think the helper would be nicer in the QemuServer/Drive.pm
module, but would need more preparation first to avoid a cyclic
dependency. That is:
* change the function to take $machine_version instead of $machine_type
* move path_is_scsi, and the scsi_inquiry function it calls, to the
drive module as well.

> +sub get_scsi_devicetype {
> +my ($drive, $storecfg, $machine_type) = @_;

It's unfortunate that this needs both $storecfg and $machine_type just
for that compatibility check for the rather old machine version. But I
guess there's not much we can do at the moment.

> +
> +my $devicetype = 'hd';
> +my $path = '';
> +if (drive_is_cdrom($drive)) {
> + $devicetype = 'cd';
> +} else {
> + if ($drive->{file} =~ m|^/|) {
> + $path = $drive->{file};
> + if (my $info = path_is_scsi($path)) {
> + if ($info->{type} == 0 && $drive->{scsiblock}) {
> + $devicetype = 'block';
> + } elsif ($info->{type} == 1) { # tape
> + $devicetype = 'generic';
> + }
> + }
> + } else {
> + $path = PVE::Storage::path($storecfg, $drive->{file});
> + }
> +
> + # for compatibility only, we prefer scsi-hd (#2408, #2355, #2380)
> + my $version = kvm_user_version();
> + $version = extract_version($machine_type, $version);

Why was this changed during the move? It's one line more now. Also, this
can be changed

> + if ($path =~ m/^iscsi\:\/\// &&
> +!min_version($version,

[pve-devel] [PATCH access-control] perms: fix wrong /pools entry in default set of ACL paths

2023-11-08 Thread Fabian Grünbichler
/pools is not an allowed ACL path, so this would add a bogus entry into the
effective permissions in case something got propagated from /.

Signed-off-by: Fabian Grünbichler 
---
stumbled upon this while working on unrelated pool stuff..

 src/PVE/RPCEnvironment.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/PVE/RPCEnvironment.pm b/src/PVE/RPCEnvironment.pm
index 5eb339a..646a9b9 100644
--- a/src/PVE/RPCEnvironment.pm
+++ b/src/PVE/RPCEnvironment.pm
@@ -242,7 +242,7 @@ sub get_effective_permissions {
'/access' => 1,
'/access/groups' => 1,
'/nodes' => 1,
-   '/pools' => 1,
+   '/pool' => 1,
'/sdn' => 1,
'/storage' => 1,
'/vms' => 1,
-- 
2.39.2





Re: [pve-devel] [PATCH v3 container 1/1] Add device passthrough

2023-11-08 Thread Dominik Csapak

did not properly review this, but what caught my attention was that you
don't define any permissions for this new property?

by default new options in pve-container only need 'VM.Config.Options' but
IMHO this should be root only for now? (unless we can use mappings
where we can use those permissions)







[pve-devel] [PATCH firewall/network 0/2] SDN: Create firewall aliases for SDN subnets

2023-11-08 Thread Stefan Lendl


Creates a cluster-wide firewall alias when creating an SDN subnet.

Including Firewall from pve-network introduces a dependency cycle, which the
patch to Firewall.pm eliminates.



pve-firewall:

Stefan Lendl (1):
  Manually construct guest config path

 src/PVE/Firewall.pm | 34 +++---
 1 file changed, 7 insertions(+), 27 deletions(-)


pve-network:

Stefan Lendl (1):
  Create a cluster-wide firewall for SDN subnets

 src/PVE/Network/SDN/Subnets.pm | 18 ++
 1 file changed, 18 insertions(+)


Summary over all repositories:
  2 files changed, 25 insertions(+), 27 deletions(-)

-- 
murpp v0.4.0






[pve-devel] [PATCH pve-network 2/2] Create a cluster-wide firewall for SDN subnets

2023-11-08 Thread Stefan Lendl
Upon creation of a subnet, we create a cluster-wide firewall alias.

Signed-off-by: Stefan Lendl 
---

Notes:
Creates the alias directly when the Subnet is created.

Other SDN objects are created upon 'Apply': commit_config().
IPAM, however, creates the subnet right away as well.
This should not be an issue but is inconsistent.

 src/PVE/Network/SDN/Subnets.pm | 18 ++
 1 file changed, 18 insertions(+)

diff --git a/src/PVE/Network/SDN/Subnets.pm b/src/PVE/Network/SDN/Subnets.pm
index 6bb42e5..fe67abd 100644
--- a/src/PVE/Network/SDN/Subnets.pm
+++ b/src/PVE/Network/SDN/Subnets.pm
@@ -6,6 +6,7 @@ use warnings;
 use Net::Subnet qw(subnet_matcher);
 use Net::IP;
 use NetAddr::IP qw(:lower);
+use PVE::API2::Firewall::Aliases;
 
 use PVE::Cluster qw(cfs_read_file cfs_write_file cfs_lock_file);
 use PVE::Network::SDN::Dns;
@@ -161,6 +162,13 @@ sub del_dns_ptr_record {
 $plugin->del_ptr_record($plugin_config, $reversezone, $ip);
 }
 
+sub get_fw_alias_name {
+my ($subnet) = @_;
+my $cidr = $subnet->{cidr};
+$cidr =~ tr/.\//-/;
+return "$subnet->{zone}_$subnet->{vnet}_$cidr";
+}
+
 sub add_subnet {
 my ($zone, $subnetid, $subnet) = @_;
 
@@ -170,6 +178,13 @@ sub add_subnet {
 my $plugin_config = $ipam_cfg->{ids}->{$ipam};
my $plugin = PVE::Network::SDN::Ipams::Plugin->lookup($plugin_config->{type});
 $plugin->add_subnet($plugin_config, $subnetid, $subnet);
+
+my $param = {
+   name => get_fw_alias_name($subnet),
+   cidr => $subnet->{cidr},
+   comment => "Automatically created Alias from SDN => Zone: $subnet->{zone}, VNet: $subnet->{vnet}, Subnet: $subnet->{cidr}"
+};
+PVE::API2::Firewall::ClusterAliases->create_alias($param);
 }
 
 sub del_subnet {
@@ -181,6 +196,9 @@ sub del_subnet {
 my $plugin_config = $ipam_cfg->{ids}->{$ipam};
my $plugin = PVE::Network::SDN::Ipams::Plugin->lookup($plugin_config->{type});
 $plugin->del_subnet($plugin_config, $subnetid, $subnet);
+
+my $param = { name => get_fw_alias_name($subnet) };
+PVE::API2::Firewall::ClusterAliases->remove_alias($param);
 }
 
 sub next_free_ip {
-- 
2.41.0
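The `get_fw_alias_name` helper above flattens the subnet's CIDR into an identifier-safe string with `tr/.\//-/` before joining it with the zone and vnet names. A quick shell approximation of the resulting alias name (the zone and vnet names here are made up for illustration):

```shell
# Approximate the alias name produced by get_fw_alias_name() for a
# hypothetical subnet; "zone0" and "vnet0" are invented example names.
cidr="10.0.0.0/24"
# tr './' '--' maps both '.' and '/' to '-', mirroring Perl's tr/.\//-/
sanitized=$(printf '%s' "$cidr" | tr './' '--')
printf '%s\n' "zone0_vnet0_${sanitized}"
# → zone0_vnet0_10-0-0-0-24
```

This also shows why the sanitization is needed: '.' and '/' are not valid in firewall alias names, while '-' and '_' are.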






[pve-devel] [PATCH pve-firewall 1/2] Manually construct guest config path

2023-11-08 Thread Stefan Lendl
Remove require QemuConfig from Firewall.pm
We only use it to construct the guest config paths.
Fixes circular include when accessing Firewall::Aliases from
pve-network.

Signed-off-by: Stefan Lendl 
---
 src/PVE/Firewall.pm | 34 +++---
 1 file changed, 7 insertions(+), 27 deletions(-)

diff --git a/src/PVE/Firewall.pm b/src/PVE/Firewall.pm
index 77cbaf4..4db4b0c 100644
--- a/src/PVE/Firewall.pm
+++ b/src/PVE/Firewall.pm
@@ -28,22 +28,6 @@ use PVE::Firewall::Helpers;
 
 my $pvefw_conf_dir = "/etc/pve/firewall";
 my $clusterfw_conf_filename = "$pvefw_conf_dir/cluster.fw";
-
-# dynamically include PVE::QemuServer and PVE::LXC
-# to avoid dependency problems
-my $have_qemu_server;
-eval {
-require PVE::QemuServer;
-require PVE::QemuConfig;
-$have_qemu_server = 1;
-};
-
-my $have_lxc;
-eval {
-require PVE::LXC;
-$have_lxc = 1;
-};
-
 my $pve_fw_status_dir = "/var/lib/pve-firewall";
 
 mkdir $pve_fw_status_dir; # make sure this exists
@@ -3257,18 +3241,14 @@ sub read_local_vm_config {
next if !$d->{node} || $d->{node} ne $nodename;
next if !$d->{type};
if ($d->{type} eq 'qemu') {
-   if ($have_qemu_server) {
-   my $cfspath = PVE::QemuConfig->cfs_config_path($vmid);
-   if (my $conf = PVE::Cluster::cfs_read_file($cfspath)) {
-   $qemu->{$vmid} = $conf;
-   }
+   my $cfspath = "nodes/$nodename/qemu-server/$vmid.conf";
+   if (my $conf = PVE::Cluster::cfs_read_file($cfspath)) {
+   $qemu->{$vmid} = $conf;
}
-} elsif ($d->{type} eq 'lxc') {
-   if ($have_lxc) {
-   my $cfspath = PVE::LXC::Config->cfs_config_path($vmid);
-   if (my $conf = PVE::Cluster::cfs_read_file($cfspath)) {
-   $lxc->{$vmid} = $conf;
-   }
+   } elsif ($d->{type} eq 'lxc') {
+   my $cfspath = "nodes/$nodename/lxc/$vmid.conf";
+   if (my $conf = PVE::Cluster::cfs_read_file($cfspath)) {
+   $lxc->{$vmid} = $conf;
}
}
 }
-- 
2.41.0






[pve-devel] applied: [PATCH access-control] perms: fix wrong /pools entry in default set of ACL paths

2023-11-08 Thread Thomas Lamprecht
Am 08/11/2023 um 11:29 schrieb Fabian Grünbichler:
> /pools is not an allowed ACL path, so this would add a bogus entry into the
> effective permissions in case something got propagated from /.
> 
> Signed-off-by: Fabian Grünbichler 
> ---
> stumbled upon this while working on unrelated pool stuff..
> 
>  src/PVE/RPCEnvironment.pm | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
>

applied, thanks!




[pve-devel] [PATCH manager] api: osd: destroy: remove mclock max iops settings

2023-11-08 Thread Aaron Lauterer
Ceph does a quick benchmark when creating a new OSD and stores the
osd_mclock_max_capacity_iops_{ssd,hdd} settings in the config DB.

When destroying the OSD, Ceph does not automatically remove these
settings. Keeping them can be problematic if a new OSD with potentially
more performance is added and ends up getting the same OSD ID.

Therefore, we remove these settings ourselves when destroying an OSD.
Removing both variants, hdd and ssd, should be fine, as the MON does not
complain if the setting does not exist.

Signed-off-by: Aaron Lauterer 
---
 PVE/API2/Ceph/OSD.pm | 4 
 1 file changed, 4 insertions(+)

diff --git a/PVE/API2/Ceph/OSD.pm b/PVE/API2/Ceph/OSD.pm
index 0c07e7ce..2893456a 100644
--- a/PVE/API2/Ceph/OSD.pm
+++ b/PVE/API2/Ceph/OSD.pm
@@ -985,6 +985,10 @@ __PACKAGE__->register_method ({
print "Remove OSD $osdsection\n";
	$rados->mon_command({ prefix => "osd rm", ids => [ $osdsection ], format => 'plain' });
 
+	print "Remove $osdsection mclock max capacity iops settings from config\n";
+	$rados->mon_command({ prefix => "config rm", who => $osdsection, name => 'osd_mclock_max_capacity_iops_ssd' });
+	$rados->mon_command({ prefix => "config rm", who => $osdsection, name => 'osd_mclock_max_capacity_iops_hdd' });
+
# try to unmount from standard mount point
my $mountpoint = "/var/lib/ceph/osd/ceph-$osdid";
 
-- 
2.39.2
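The two `mon_command` calls in the patch correspond to the `ceph config rm <who> <option>` CLI. As a sketch, the loop below only *prints* the equivalent commands for a hypothetical OSD section `osd.7` (a live Ceph cluster is assumed for the real commands, so nothing is executed against Ceph here):

```shell
# Print the cleanup commands the patch issues via mon_command, for a
# hypothetical OSD section name "osd.7" (no cluster is contacted).
osdsection="osd.7"
for kind in ssd hdd; do
    printf 'ceph config rm %s osd_mclock_max_capacity_iops_%s\n' \
        "$osdsection" "$kind"
done
```

Running the printed commands by hand is also how an admin could clean up stale mclock settings for OSDs destroyed before this patch.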






[pve-devel] applied: [PATCH access-control] acl: add missing SDN ACL paths to allowed list

2023-11-08 Thread Thomas Lamprecht
Am 08/11/2023 um 07:55 schrieb Fabian Grünbichler:
> else it's not actually possible to define ACLs on them, which means they are
> effectively root only instead of allowing their intended permission scheme.
> 
> Signed-off-by: Fabian Grünbichler 
> ---
>  src/PVE/AccessControl.pm | 6 ++
>  1 file changed, 6 insertions(+)
> 
>

applied, thanks!




Re: [pve-devel] [PATCH manager 1/2] ui: BulkActions: rework filters and include tags

2023-11-08 Thread Dominik Csapak

On 11/6/23 17:01, Thomas Lamprecht wrote:

for the commit subject please: s/BulkActions/bulk actions/

Am 30/10/2023 um 13:58 schrieb Dominik Csapak:

This moves the filters out of the grid header for the BulkActions and
puts them into their own fieldset above the grid. With that, we can
easily include a tags filter (one include and one exclude list).

The filter fieldset is collapsible and shows the active filters in
parentheses. Aside from that, the filters should be the same as before.



basic tests seem to work, and I did not really check the code closely,
so only a higher level review, and some stuff is even pre-existing (but
sticks a bit more out now):

- the CT/VM ID filter is a bit odd, especially if tuned to match all,
   not only parts of the VMID (which would not be *that* much better
   either IMO), when I want to migrate/start/stop a single VM I can just
   do so, no need for opening the bulk actions.


counter argument:

if i want to migrate/start/stop a specific list of vmid it may be faster
if i go to the bulk window, search vmid -> select, search vmid -> select, etc.
than go to vm -> click stop -> go to vm -> click stop

but no hard feelings, if you want i'll remove it



- The migration one should move target and jobs into two columns, as of
   now there's just too much vertical space used.

- Maybe we can also merge the "allow local disk" check box and the
   warning for "ct will use restart mode" into columns (the former could
   lose the box label, that note is not really that useful anyway)



yeah i'll try those two


- In the spirit of the last two points, the shutdown action might also
   benefit from having force-stop and timeout on the same row in two
   columns



make sense


- We have a mix of title case and sentence case for the fields, they
   should all use title case (I find https://titlecaseconverter.com/
   nice). E.g., should be "HA Status" in the filters, and Parallel Jobs
   for migration, and so on.


sry for that, i'll fix it



- tag space is very limited, maybe default to using circles or even
   dense (I'm still wishing a bit for seeing the tag value on hover, like
   a tool tip), or re-use the tree-style shape.
   One alternative might be to add vertical scrolling here, but that is
   probably rather odd (and not sure if that would even work easily
   here).


scrolling in two dimensions for different containers is always a bit weird
imho, but yeah making the tags 'dense' 'circle' or reusing the treestyle makes 
sense

i agree with the tooltip, but last time i looked at it that was not so easy
because we already have a tooltip in the tree (and we reuse the rendering
for that) but i could see if i could add a tooltip to the whole cell
here that prints all tags in a nice way, what do you think?



- disallowing multi-select for Type (maybe better labeled "Guest Type"?)
   might improve UX, as if there are only two choices anyway a "All",
   "VM" "CT" selection might be simpler – but no hard feelings here.


ah yeah, i'm not really sure why i did it this way, dropdown
with 3 distinct values makes much more sense here...




Re: [pve-devel] [PATCH manager 1/2] ui: BulkActions: rework filters and include tags

2023-11-08 Thread Thomas Lamprecht
Am 08/11/2023 um 13:14 schrieb Dominik Csapak:
> On 11/6/23 17:01, Thomas Lamprecht wrote:
>> - the CT/VM ID filter is a bit odd, especially if tuned to match all,
>>not only parts of the VMID (which would not be *that* much better
>>either IMO), when I want to migrate/start/stop a single VM I can just
>>do so, no need for opening the bulk actions.
> 
> counter argument:
> 
> if i want to migrate/start/stop a specific list of vmid it may be faster
> if i go to the bulk window, search vmid -> select, search vmid -> select, etc.
> than go to vm -> click stop -> go to vm -> click stop

It's faster to use the context menu of the tree or the one from the
global search though.
But faster is IMO the wrong thing to look at here anyway, as using
bulk actions for a single action feels always wrong conceptually,
so devfinitively won't be a common path for users.
If, there would need ot be a situation when someone wants to do a bulk
action, but circumstances change midway, after they opened the dialog,
so that they only need to stop/start/... a single guest now.
And even then they can still do that without the extra field (deselect
all, select single guest).

So IMO adding a whole field for this niche use case seems rather
overkill to me.

>> - tag space is very limited, maybe default to using circles or even
>>dense (I'm still wishing a bit for seeing the tag value on hover, like
>>a tool tip), or re-use the tree-style shape.
>>One alternative might be to add vertical scrolling here, but that is
>>probably rather odd (and not sure if that would even work easily
>>here).
> 
> scrolling in two dimensions for different containers is always a bit weird
> imho, but yeah making the tags 'dense' 'circle' or reusing the treestyle 
> makes sense

Yeah, additional scrolling is indeed not the nicest thing to add here.

> 
> i agree with the tooltip, but last time i looked at it that was not so easy
> because we already have a tooltip in the tree (and we reuse the rendering
> for that) but i could see if i could add a tooltip to the whole cell
> here that prints all tags in a nice way, what do you think?

I mean, for here there would not be a tool tip yet and we could have
slightly different behavior here than in the tree anyway.

But for the tree: How about reducing the scope of the current status
tooltip from the whole row to the icon + text part only, so tags can
have a separate, per tag one.





Re: [pve-devel] [PATCH manager 1/2] ui: BulkActions: rework filters and include tags

2023-11-08 Thread Dominik Csapak

On 11/8/23 13:38, Thomas Lamprecht wrote:

Am 08/11/2023 um 13:14 schrieb Dominik Csapak:

On 11/6/23 17:01, Thomas Lamprecht wrote:

- the CT/VM ID filter is a bit odd, especially if tuned to match all,
not only parts of the VMID (which would not be *that* much better
either IMO), when I want to migrate/start/stop a single VM I can just
do so, no need for opening the bulk actions.


counter argument:

if i want to migrate/start/stop a specific list of vmid it may be faster
if i go to the bulk window, search vmid -> select, search vmid -> select, etc.
than go to vm -> click stop -> go to vm -> click stop


It's faster to use the context menu of the tree or the one from the
global search though.
But faster is IMO the wrong thing to look at here anyway, as using
bulk actions for a single action always feels wrong conceptually,
so it definitively won't be a common path for users.
If anything, there would need to be a situation where someone wants to do a bulk
action, but circumstances change midway, after they opened the dialog,
so that they only need to stop/start/... a single guest now.
And even then they can still do that without the extra field (deselect
all, select single guest).

So IMO adding a whole field for this niche use case seems rather
overkill to me.


ok, i'll remove it




- tag space is very limited, maybe default to using circles or even
dense (I'm still wishing a bit for seeing the tag value on hover, like
a tool tip), or re-use the tree-style shape.
One alternative might be to add vertical scrolling here, but that is
probably rather odd (and not sure if that would even work easily
here).


scrolling in two dimensions for different containers is always a bit weird
imho, but yeah making the tags 'dense' 'circle' or reusing the treestyle makes 
sense


Yeah, additional scrolling is indeed not the nicest thing to add here.



i agree with the tooltip, but last time i looked at it that was not so easy
because we already have a tooltip in the tree (and we reuse the rendering
for that) but i could see if i could add a tooltip to the whole cell
here that prints all tags in a nice way, what do you think?


I mean, for here there would not be a tool tip yet and we could have
slightly different behavior here than in the tree any way.

But for the tree: How about reducing the scope of the current status
tooltip from the whole row to the icon + text part only, so tags can
have a separate, per tag one.


sadly including the icon is only possible if we override the whole template of
the treecolumn here, since the icon div is separated from the text div, without
a possibility to set additional data there

(you can check out the template here:
https://docs.sencha.com/extjs/7.0.0/classic/src/Column.js-2.html
maybe you see an easier way to do this?)






[pve-devel] [PATCH i18n] german translation for the bulk action log messages

2023-11-08 Thread Folke Gleumes
---
 de.po | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/de.po b/de.po
index 3e2ff91..43f5d0c 100644
--- a/de.po
+++ b/de.po
@@ -1138,17 +1138,17 @@ msgstr "Massenstart"
 #: pve-manager/www/manager6/Utils.js:1970
 #, fuzzy
 msgid "Bulk migrate VMs and Containers"
-msgstr "Migriere alle VMs und Container"
+msgstr "Massenmigration von VMs und Container"
 
 #: pve-manager/www/manager6/Utils.js:1998
 #, fuzzy
 msgid "Bulk shutdown VMs and Containers"
-msgstr "Stoppe alle VMs und Container"
+msgstr "Massenabschaltung von VMs und Container"
 
 #: pve-manager/www/manager6/Utils.js:1997
 #, fuzzy
 msgid "Bulk start VMs and Containers"
-msgstr "Starte alle VMs und Container"
+msgstr "Massenstart von VMs und Container"
 
 #: proxmox-backup/www/config/TrafficControlView.js:159
 #: proxmox-backup/www/window/TrafficControlEdit.js:263
-- 
2.39.2






[pve-devel] applied: [PATCH i18n] german translation for the bulk action log messages

2023-11-08 Thread Thomas Lamprecht
Am 08/11/2023 um 14:51 schrieb Folke Gleumes:
> ---
>  de.po | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
> 
>

applied, but you obviously did not test this, as leaving the `# fuzzy`
comments in will make gettext ignore those translations completely.
I really expect some basic testing to be done before sending patches.
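For context: gettext treats entries carrying a `#, fuzzy` marker as unreviewed, and `msgfmt` omits them from the compiled catalog by default, so the marker must be dropped once a translation has been verified (properly via `msgattrib --clear-fuzzy` or a PO editor). A minimal, dependency-free sketch on a made-up snippet:

```shell
# Build a tiny PO fragment with a fuzzy marker, then strip the marker.
# msgattrib --clear-fuzzy is the proper tool; plain grep is used here only
# to keep the sketch free of a gettext dependency.
cat > snippet.po <<'EOF'
#, fuzzy
msgid "Bulk start VMs and Containers"
msgstr "Massenstart von VMs und Container"
EOF
grep -v '^#, fuzzy$' snippet.po > snippet.cleaned.po
cat snippet.cleaned.po
```

After this, only the `msgid`/`msgstr` pair remains, and `msgfmt` would include the translation in the binary catalog.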






Re: [pve-devel] [PATCH v2 qemu-server] fix #4957: add vendor and product information passthrough for SCSI-Disks

2023-11-08 Thread Hannes Dürr


On 11/8/23 11:04, Fiona Ebner wrote:

Am 08.11.23 um 09:51 schrieb Hannes Duerr:

adds vendor and product information for SCSI devices to the json schema and
checks in the VM create/update API call if it is possible to add these to QEMU 
as a device option

Signed-off-by: Hannes Duerr 
---

changes in v2:
- when calling the API to create/update a VM, check whether the devices
are "scsi-hd" or "scsi-cd" devices,where there is the option to add
vendor and product information, if not error out
- change the format in product_fmt and vendor_fmt to a pattern that only
allows 40 characters consisting of upper and lower case letters, numbers and 
'-' and '_'.

  PVE/API2/Qemu.pm|  9 +
  PVE/QemuServer.pm   | 83 +
  PVE/QemuServer/Drive.pm | 24 
  3 files changed, 92 insertions(+), 24 deletions(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 38bdaab..6898ec9 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -1030,6 +1030,11 @@ __PACKAGE__->register_method({
);
$conf->{$_} = $created_opts->{$_} for keys 
$created_opts->%*;
  
+		foreach my $opt (keys $created_opts->%*) {

Style nit: new code should use for instead of foreach

Maybe sort the keys to have a deterministic order.


+   if ($opt =~ m/scsi/) {
+   
PVE::QemuServer::check_scsi_feature_compatibility($opt, $created_opts, $conf, 
$storecfg, $param);


Style nit: line too long (100 characters is our limit)

Note that $created_opts was already merged into $conf two lines above.
I'd rather not introduce new usage of that variable.

Can we do the check before creating the drive instead? We know if it's a
CD or pass-through and the path or if it's iscsi ahead of time and that
should be enough for the check, or what am I missing?
I don't think it's possible to check in advance, as the config can still
contain a not yet fully formed path like 'local-lvm:5', which is only
resolved to the real path when the disk is created, or am I mistaken?

+   }
+   }
if (!$conf->{boot}) {
my $devs = 
PVE::QemuServer::get_default_bootdevices($conf);
$conf->{boot} = PVE::QemuServer::print_bootorder($devs);
@@ -1840,6 +1845,10 @@ my $update_vm_api  = sub {
);
$conf->{pending}->{$_} = $created_opts->{$_} for keys 
$created_opts->%*;
  
+		if ($opt =~ m/scsi/) {

+   PVE::QemuServer::check_scsi_feature_compatibility($opt, 
$created_opts, $conf, $storecfg, $param);
+   }
+
# default legacy boot order implies all cdroms anyway
if (@bootorder) {
# append new CD drives to bootorder to mark them 
bootable
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index dbcd568..919728b 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -26,6 +26,7 @@ use Storable qw(dclone);
  use Time::HiRes qw(gettimeofday usleep);
  use URI::Escape;
  use UUID;
+use Data::Dumper;
  

Left-over from debugging ;)? You can also use
use JSON;
print to_json($whatever, { canonical => 1, pretty => 1 });
which is often nicer IMHO

thanks for the hint, I'll have a look 🙂

  use PVE::Cluster qw(cfs_register_file cfs_read_file cfs_write_file);
  use PVE::CGroup;
@@ -1428,6 +1429,53 @@ my sub get_drive_id {
  return "$drive->{interface}$drive->{index}";
  }
  

Can you please split the patch in two? A preparatory ones for creating
this helper and another one for the feature.

I also think the helper would be nicer in the QemuServer/Drive.pm
module, but would need more preparation first to avoid a cyclic
dependency. That is:
* change the function to take $machine_version instead of $machine_type
* move the path_is_scsi and the scsi_inquiry function called by that
also to the drive module.


+sub get_scsi_devicetype {
+my ($drive, $storecfg, $machine_type) = @_;

It's unfortunate that this needs both $storecfg and $machine_type just
for that compatibility check for the rather old machine version. But I
guess there's not much we can do at the moment.


+
+my $devicetype = 'hd';
+my $path = '';
+if (drive_is_cdrom($drive)) {
+   $devicetype = 'cd';
+} else {
+   if ($drive->{file} =~ m|^/|) {
+   $path = $drive->{file};
+   if (my $info = path_is_scsi($path)) {
+   if ($info->{type} == 0 && $drive->{scsiblock}) {
+   $devicetype = 'block';
+   } elsif ($info->{type} == 1) { # tape
+   $devicetype = 'generic';
+   }
+   }
+   } else {
+   $path = PVE::Storage::path($storecfg, $drive->{file});
+   }
+
+   # for compatibility only, we prefer scsi-hd (#2408, #2355, #2380)
+   my $version = kvm_user_version();
+   $version = extract_version($machine_type, $version);
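The branching in get_scsi_devicetype above can be sketched as follows (a Rust rendition for illustration only; the probe closure is a stand-in for the real path_is_scsi/scsi_inquiry helpers, and the machine-version compatibility fallback is omitted):

```rust
// cdrom -> "cd"; absolute path probing as SCSI disk with scsiblock -> "block";
// SCSI tape -> "generic"; everything else defaults to "hd".
#[derive(Debug, PartialEq)]
enum DeviceType {
    Hd,
    Cd,
    Block,
    Generic,
}

struct Drive {
    is_cdrom: bool,
    file: String,
    scsiblock: bool,
}

// `scsi_probe` returns the SCSI peripheral device type for a local path,
// or None if the path is not a SCSI device.
fn devicetype(drive: &Drive, scsi_probe: impl Fn(&str) -> Option<u8>) -> DeviceType {
    if drive.is_cdrom {
        return DeviceType::Cd;
    }
    // only absolute paths (pass-through) are probed; storage volume IDs
    // like "local-lvm:5" would be resolved via the storage layer instead
    if drive.file.starts_with('/') {
        match scsi_probe(&drive.file) {
            Some(0) if drive.scsiblock => return DeviceType::Block, // SCSI disk
            Some(1) => return DeviceType::Generic,                  // SCSI tape
            _ => {}
        }
    }
    DeviceType::Hd
}
```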

Re: [pve-devel] [PATCH pve-firewall 1/2] Manually construct guest config path

2023-11-08 Thread Thomas Lamprecht
On 08/11/2023 at 12:35, Stefan Lendl wrote:
> Remove require QemuConfig from Firewall.pm
> We only use it to construct the guest config paths.
> Fixes circular include when accessing Firewall::Aliases from
> pve-network.
> 

This won't work: cfs_read_file now only works by luck, if at all, since it
needs the cfs_register_file call that happens in PVE::QemuServer so that
the parser is actually available...

I'd much rather see Firewall be split-up than doing broken hacks and
switching from one of our saner interfaces to manual assembly.






Re: [pve-devel] [RFC pve-network] do not remove DHCP mapping on stop

2023-11-08 Thread DERUMIER, Alexandre
Hi,
I'm back from holiday, and I finally have time to work on DHCP.


I wonder if we couldn't add a property on the subnet or DHCP config,
where the user could choose between an ephemeral IP (created at VM start,
deleted at VM stop) or a reserved IP (reserved at VM/NIC create, deleted
at VM/NIC delete).


This would match the behaviour of various cloud providers (GCP, AWS, ...)
and other hypervisors.



What do you think?



-------- Original message --------
From: Stefan Lendl 
Reply-To: Proxmox VE development discussion 
To: pve-devel@lists.proxmox.com
Subject: [pve-devel] [RFC pve-network] do not remove DHCP mapping on stop
Date: 27/10/2023 13:20:20

Signed-off-by: Stefan Lendl 
---
 src/PVE/LXC.pm    | 8 
 src/lxc-pve-poststop-hook | 1 -
 2 files changed, 9 deletions(-)

diff --git a/src/PVE/LXC.pm b/src/PVE/LXC.pm
index bd8eb63..a7de9b8 100644
--- a/src/PVE/LXC.pm
+++ b/src/PVE/LXC.pm
@@ -916,14 +916,6 @@ sub vm_stop_cleanup {
 eval {
    my $vollist = PVE::LXC::Config->get_vm_volumes($conf);
    PVE::Storage::deactivate_volumes($storage_cfg, $vollist);
-
-   for my $k (keys %$conf) {
-       next if $k !~ /^net(\d+)/;
-       my $net = PVE::LXC::Config->parse_lxc_network($conf->{$k});
-       next if $net->{type} ne 'veth';
-
-       PVE::Network::SDN::Dhcp::remove_mapping($net->{bridge}, $net->{hwaddr});
-   }
 };
 warn $@ if $@; # avoid errors - just warn
 }
diff --git a/src/lxc-pve-poststop-hook b/src/lxc-pve-poststop-hook
index e7d46c7..2fe97ec 100755
--- a/src/lxc-pve-poststop-hook
+++ b/src/lxc-pve-poststop-hook
@@ -12,7 +12,6 @@ use PVE::LXC::Config;
 use PVE::LXC::Tools;
 use PVE::LXC;
 use PVE::Network;
-use PVE::Network::SDN::Dhcp;
 use PVE::RESTEnvironment;
 use PVE::Storage;
 use PVE::Tools;
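The ephemeral vs. reserved choice proposed above could be modelled as a per-subnet property. A minimal sketch, assuming a hypothetical option name `ip-allocation` (invented for illustration, not an existing pve-network setting):

```rust
// Hypothetical per-subnet DHCP allocation mode.
#[derive(Debug, PartialEq)]
enum IpAllocation {
    Ephemeral, // created at VM start, removed at VM stop
    Reserved,  // reserved at VM/NIC create, removed at VM/NIC delete
}

fn parse_ip_allocation(value: &str) -> Result<IpAllocation, String> {
    match value {
        "ephemeral" => Ok(IpAllocation::Ephemeral),
        "reserved" => Ok(IpAllocation::Reserved),
        other => Err(format!("unknown ip-allocation mode '{other}'")),
    }
}
```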



Re: [pve-devel] [PATCH pve-network 2/2] Create a cluster-wide firewall for SDN subnets

2023-11-08 Thread Thomas Lamprecht
On 08/11/2023 at 12:35, Stefan Lendl wrote:
> Upon creation of a subnet, we create a cluster-wide firewall alias.
> 
> Signed-off-by: Stefan Lendl 
> ---
> 
> Notes:
> Creates the alias directly when the Subnet is created.
> 
> Other SDN objects are created upon 'Apply': commit_config().
> Although, IPAM creates the subnet right away as well.
> This should not be an issue but is inconsistent.
> 
>  src/PVE/Network/SDN/Subnets.pm | 18 ++
>  1 file changed, 18 insertions(+)
> 
> diff --git a/src/PVE/Network/SDN/Subnets.pm b/src/PVE/Network/SDN/Subnets.pm
> index 6bb42e5..fe67abd 100644
> --- a/src/PVE/Network/SDN/Subnets.pm
> +++ b/src/PVE/Network/SDN/Subnets.pm
> @@ -6,6 +6,7 @@ use warnings;
>  use Net::Subnet qw(subnet_matcher);
>  use Net::IP;
>  use NetAddr::IP qw(:lower);
> +use PVE::API2::Firewall::Aliases;

This would need pve-firewall to get added to the dependency list in the
debian/control file, otherwise it will only work by luck but break, e.g.,
bootstrapping.

>  
>  use PVE::Cluster qw(cfs_read_file cfs_write_file cfs_lock_file);
>  use PVE::Network::SDN::Dns;
> @@ -161,6 +162,13 @@ sub del_dns_ptr_record {
>  $plugin->del_ptr_record($plugin_config, $reversezone, $ip);
>  }
>  
> +sub get_fw_alias_name {
> +my ($subnet) = @_;
> +my $cidr = $subnet->{cidr};
> +$cidr =~ tr/.\//-/;
> +return "$subnet->{zone}_$subnet->{vnet}_$cidr";
> +}

this can easily clash with existing aliases, that then are deleted or addition
fails below.

wouldn't it be nicer if firewall gets the SDN subnets and manages those aliases
in a separate namespaces, i.e., such that they cannot clash with the explicit
aliases from the config?
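The clash risk follows directly from the mangling: the generated name lives in the same flat namespace as user-defined aliases. A small illustrative sketch (Rust, mirroring the Perl `tr/.\//-/`):

```rust
// Build the alias name the same way get_fw_alias_name does:
// both '.' and '/' in the CIDR are replaced by '-'.
fn fw_alias_name(zone: &str, vnet: &str, cidr: &str) -> String {
    let mangled: String = cidr
        .chars()
        .map(|c| if c == '.' || c == '/' { '-' } else { c })
        .collect();
    format!("{zone}_{vnet}_{mangled}")
}
```

A user who already defined an alias named, say, `zone0_vnet0_10-0-1-0-24` would collide with the auto-generated one, since nothing namespaces the SDN-managed aliases apart from user-managed ones.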





Re: [pve-devel] [RFC pve-network] do not remove DHCP mapping on stop

2023-11-08 Thread Stefan Hanreich
On 11/8/23 15:32, DERUMIER, Alexandre wrote:
> Hi,
> I'm back from holiday, and I finally have time to work on DHCP.
>

Welcome back! It's also my first day after holiday today.

> I wonder if we couldn't add a property on the subnet or DHCP config,
> where the user could choose between an ephemeral IP (created at VM start,
> deleted at VM stop) or a reserved IP (reserved at VM/NIC create, deleted
> at VM/NIC delete).
> 

That sounds like something we could implement. I've talked to Stefan
Lendl today and we thought that ephemeral is something we can leave out
for now and implement later (although we definitely wanted to add
support for it!)

Since we want to have at least basic DHCP support in the next release
(which is not far away) we decided to focus on a minimal featureset
first and then build upon it.





Re: [pve-devel] [PATCH v2 qemu-server] fix #4957: add vendor and product information passthrough for SCSI-Disks

2023-11-08 Thread Thomas Lamprecht
On 08/11/2023 at 15:28, Hannes Dürr wrote:
> On 11/8/23 11:04, Fiona Ebner wrote:
>> On 08.11.23 at 09:51, Hannes Duerr wrote:
>>> +   if ($opt =~ m/scsi/) {
>>> +   
>>> PVE::QemuServer::check_scsi_feature_compatibility($opt, $created_opts, 
>>> $conf, $storecfg, $param);
>>
>> Style nit: line too long (100 characters is our limit)
>>
>> Note that $created_opts was already merged into $conf two lines above.
>> I'd rather not introduce new usage of that variable.
>>
>> Can we do the check before creating the drive instead? We know if it's a
>> CD or pass-through and the path or if it's iscsi ahead of time and that
>> should be enough for the check, or what am I missing?
> I don't think it's possible to check in advance, as the config can still
> contain a not yet fully formed path like 'local-lvm:5', which is only
> resolved to the real path when the disk is created, or am I mistaken?

But all information is still there? I.e., the disk's bus, like scsi, and if any
vendor or product properties are set. So you can still parse that value and 
check
validity.
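A rough sketch of such an ahead-of-time check (Rust purely for illustration; the option parsing and helper names are hypothetical, and the real check would additionally have to reject pass-through and iSCSI volumes, which never become scsi-hd/scsi-cd):

```rust
use std::collections::HashMap;

// "local-lvm:5,product=Foo" -> everything after the first comma is a
// key=value drive option; the volume ID itself need not be resolved yet.
fn parse_opts(value: &str) -> HashMap<String, String> {
    let mut opts = HashMap::new();
    for part in value.split(',').skip(1) {
        if let Some((k, v)) = part.split_once('=') {
            opts.insert(k.to_string(), v.to_string());
        }
    }
    opts
}

// Reject vendor/product on non-SCSI buses before any volume is allocated.
fn check_scsi_features(key: &str, value: &str) -> Result<(), String> {
    let opts = parse_opts(value);
    let wants_inquiry = opts.contains_key("vendor") || opts.contains_key("product");
    if wants_inquiry && !key.starts_with("scsi") {
        return Err(format!("{key}: vendor/product are only supported for SCSI drives"));
    }
    Ok(())
}
```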

>>> +   }
>>> +   }
>>> if (!$conf->{boot}) {
>>> my $devs = 
>>> PVE::QemuServer::get_default_bootdevices($conf);
>>> $conf->{boot} = PVE::QemuServer::print_bootorder($devs);

>> (...)
> What does that mean?

That Fiona snipped (trimmed) some context, as it was not relevant for
the review, like in quotes, albeit brackets are more commonly used there,
like "[...] lorem ipsum"





[pve-devel] [PATCH v4 debcargo-conf 01/11] cherry-pick chumsky 0.9.2 from debian unstable

2023-11-08 Thread Lukas Wagner
Signed-off-by: Lukas Wagner 
---
 src/chumsky/debian/changelog   |  5 +++
 src/chumsky/debian/copyright   | 39 +
 src/chumsky/debian/copyright.debcargo.hint | 51 ++
 src/chumsky/debian/debcargo.toml   |  2 +
 4 files changed, 97 insertions(+)
 create mode 100644 src/chumsky/debian/changelog
 create mode 100644 src/chumsky/debian/copyright
 create mode 100644 src/chumsky/debian/copyright.debcargo.hint
 create mode 100644 src/chumsky/debian/debcargo.toml

diff --git a/src/chumsky/debian/changelog b/src/chumsky/debian/changelog
new file mode 100644
index 0..ae6b5ff8f
--- /dev/null
+++ b/src/chumsky/debian/changelog
@@ -0,0 +1,5 @@
+rust-chumsky (0.9.2-1) unstable; urgency=medium
+
+  * Package chumsky 0.9.2 from crates.io using debcargo 2.6.0
+
+ -- Jelmer Vernooij   Wed, 14 Jun 2023 23:38:48 +0100
diff --git a/src/chumsky/debian/copyright b/src/chumsky/debian/copyright
new file mode 100644
index 0..eaa3e6768
--- /dev/null
+++ b/src/chumsky/debian/copyright
@@ -0,0 +1,39 @@
+Format: https://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
+Upstream-Name: chumsky
+Upstream-Contact:
+ Joshua Barretto 
+ Elijah Hartvigsen 
+Source: https://github.com/zesterer/chumsky
+
+Files: *
+Copyright:
+ 2021-2023 Joshua Barretto 
+ 2021-2023 Elijah Hartvigsen 
+License: MIT
+
+Files: debian/*
+Copyright:
+ 2023 Debian Rust Maintainers 
+ 2023 Jelmer Vernooij 
+License: MIT
+
+License: MIT
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+ .
+ The above copyright notice and this permission notice shall be included in all
+ copies or substantial portions of the Software.
+ .
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ SOFTWARE.
diff --git a/src/chumsky/debian/copyright.debcargo.hint b/src/chumsky/debian/copyright.debcargo.hint
new file mode 100644
index 0..e02a9ab0f
--- /dev/null
+++ b/src/chumsky/debian/copyright.debcargo.hint
@@ -0,0 +1,51 @@
+Format: https://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
+Upstream-Name: chumsky
+Upstream-Contact:
+ Joshua Barretto 
+ Elijah Hartvigsen 
+Source: https://github.com/zesterer/chumsky
+
+Files: *
+Copyright:
+ FIXME (overlay) UNKNOWN-YEARS Joshua Barretto 
+ FIXME (overlay) UNKNOWN-YEARS Elijah Hartvigsen 
+License: MIT
+Comment:
+ FIXME (overlay): Since upstream copyright years are not available in
+ Cargo.toml, they were extracted from the upstream Git repository. This may not
+ be correct information so you should review and fix this before uploading to
+ the archive.
+
+Files: LICENSE
+Copyright: 2021 Joshua Barretto
+License: UNKNOWN-LICENSE; FIXME (overlay)
+Comment:
+ FIXME (overlay): These notices are extracted from files. Please review them
+ before uploading to the archive.
+
+Files: debian/*
+Copyright:
+ 2023 Debian Rust Maintainers 
+ 2023 Jelmer Vernooij 
+License: MIT
+
+License: MIT
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+ .
+ The above copyright notice and this permission notice shall be included in all
+ copies or substantial portions of the Software.
+ .
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ SOFTWARE.
diff --git a/src/chumsky/debian/debcargo.toml b/src/chumsky/debian/debcargo.toml
new file mode 100644
index 0..77e8151ed
--- /dev/null
+++ b/src/chumsky/debian/debcargo.toml
@@ -0,0 +1,2 @@
+overlay = "."
+uploaders = ["Jel

[pve-devel] [PATCH v4 many 00/11] notifications: add SMTP endpoint

2023-11-08 Thread Lukas Wagner
This patch series adds support for a new notification endpoint type,
smtp. As the name suggests, this new endpoint allows PVE to talk
to an SMTP server directly, without using the system's MTA (postfix).

On the Rust side, these patches add a new dependency to the `lettre`
crate for SMTP communication. This crate was chosen as it is:
  - by far the most popular mailing crate for Rust
  - well maintained
  - has reasonable dependencies
  - has async support, enabling us to asyncify the proxmox-notify
crate at some point, if needed

Tested against:
  - the gmail SMTP server
  - the posteo SMTP server
  - our own webmail SMTP server

These patches require a couple of other patches that have not been applied yet:

  series: "overhaul notification system, use matchers instead of filters" [3], 
  pve-manager: "api: notifications: give targets and matchers their own ACL 
namespace" [4]
  pve-docs: "notifications: update docs to for matcher-based notifications" [5]

The first two patches were cherry-picked and rebased from the  'system mail 
forwarding' patch series from [2].  I decided to pull them in so that I can
already implement the mail forwarding part for SMTP targets.

This series also required updating the 'lettre' crate since
one of lettre's deps was bumped to a new version by us.

Changes since v3:
  - Rebased on top of the matcher-based notification revamp
  - Removed 'filter' setting from target configuration
  - Pulled in required patches from 'system mail forwarding' patch series

Changes since v2:
  - Rebased proxmox-widget-toolkit onto the latest master to avoid
any conflicts.

Changes since v1:
  - Rebased on top of [1]
  - Added a mechanism for mails forwarded by `proxmox-mail-forward`
These are forwarded inline as "message/rfc822" to avoid having 
to rewrite mail headers (otherwise, some SMTP relays might reject the 
mail, because the `From` header of the forwarded mail does not match the
mail account)
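The inline forwarding approach described above can be sketched with hand-rolled MIME (illustrative only; the actual implementation uses the lettre crate, and the boundary here is a fixed placeholder rather than a randomly generated one):

```rust
// Embed the original raw mail, headers and all, as a message/rfc822 part.
// Because the original message is carried unmodified, its From header
// never has to be rewritten to match the forwarding SMTP account.
fn forward_as_rfc822(original_raw: &str, from: &str, to: &str) -> String {
    let boundary = "=_forward_boundary";
    let mut mail = String::new();
    mail.push_str(&format!("From: {from}\r\n"));
    mail.push_str(&format!("To: {to}\r\n"));
    mail.push_str("Subject: Forwarded notification\r\n");
    mail.push_str("MIME-Version: 1.0\r\n");
    mail.push_str(&format!(
        "Content-Type: multipart/mixed; boundary=\"{boundary}\"\r\n\r\n"
    ));
    mail.push_str(&format!("--{boundary}\r\n"));
    mail.push_str("Content-Type: message/rfc822\r\n\r\n");
    mail.push_str(original_raw);
    mail.push_str(&format!("\r\n--{boundary}--\r\n"));
    mail
}
```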

[1] https://lists.proxmox.com/pipermail/pve-devel/2023-August/058956.html
[2] https://lists.proxmox.com/pipermail/pve-devel/2023-October/059299.html
[3] https://lists.proxmox.com/pipermail/pve-devel/2023-November/059818.html
[4] https://lists.proxmox.com/pipermail/pve-devel/2023-November/059843.html
[5] https://lists.proxmox.com/pipermail/pve-devel/2023-November/059872.html



debcargo-conf:

Lukas Wagner (2):
  cherry-pick chumsky 0.9.2 from debian unstable
  update lettre to 0.11.1

 src/chumsky/debian/changelog  |  5 ++
 src/chumsky/debian/copyright  | 39 +++
 src/chumsky/debian/copyright.debcargo.hint| 51 ++
 src/chumsky/debian/debcargo.toml  |  2 +
 src/lettre/debian/changelog   | 10 +++
 .../debian/patches/downgrade_fastrand.patch   | 13 
 .../debian/patches/downgrade_idna.patch   | 13 
 src/lettre/debian/patches/downgrade_url.patch | 13 
 .../patches/remove_unused_features.patch  | 69 ++-
 src/lettre/debian/patches/series  |  4 +-
 .../patches/upgrade_quoted_printable.patch| 13 
 11 files changed, 185 insertions(+), 47 deletions(-)
 create mode 100644 src/chumsky/debian/changelog
 create mode 100644 src/chumsky/debian/copyright
 create mode 100644 src/chumsky/debian/copyright.debcargo.hint
 create mode 100644 src/chumsky/debian/debcargo.toml
 create mode 100644 src/lettre/debian/patches/downgrade_fastrand.patch
 create mode 100644 src/lettre/debian/patches/downgrade_idna.patch
 create mode 100644 src/lettre/debian/patches/downgrade_url.patch
 delete mode 100644 src/lettre/debian/patches/upgrade_quoted_printable.patch


proxmox:

Lukas Wagner (4):
  sys: email: add `forward`
  notify: add mechanisms for email message forwarding
  notify: add 'smtp' endpoint
  notify: add api for smtp endpoints

 Cargo.toml  |   2 +
 proxmox-notify/Cargo.toml   |   6 +-
 proxmox-notify/src/api/mod.rs   |  33 ++
 proxmox-notify/src/api/smtp.rs  | 356 
 proxmox-notify/src/config.rs|  23 ++
 proxmox-notify/src/endpoints/common/mail.rs |  24 ++
 proxmox-notify/src/endpoints/common/mod.rs  |   2 +
 proxmox-notify/src/endpoints/gotify.rs  |   3 +
 proxmox-notify/src/endpoints/mod.rs |   4 +
 proxmox-notify/src/endpoints/sendmail.rs|  27 +-
 proxmox-notify/src/endpoints/smtp.rs| 250 ++
 proxmox-notify/src/lib.rs   |  57 
 proxmox-sys/src/email.rs|  52 ++-
 13 files changed, 820 insertions(+), 19 deletions(-)
 create mode 100644 proxmox-notify/src/api/smtp.rs
 create mode 100644 proxmox-notify/src/endpoints/common/mail.rs
 create mode 100644 proxmox-notify/src/endpoints/common/mod.rs
 create mode 100644 proxmox-notify/src/endpoints/smtp.rs


proxmox-perl-rs:

Lukas Wagner (1):
  notify: add bindings for smtp API calls

 common/src/notify.rs | 106 ++

[pve-devel] [PATCH v4 pve-docs 11/11] notifications: document 'comment' option for targets/matchers

2023-11-08 Thread Lukas Wagner
Signed-off-by: Lukas Wagner 
---
 notifications.adoc | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/notifications.adoc b/notifications.adoc
index acbdfae..e8ed51b 100644
--- a/notifications.adoc
+++ b/notifications.adoc
@@ -67,6 +67,7 @@ accomodate multiple recipients.
 set, the plugin will fall back to the `email_from` setting from
 `datacenter.cfg`. If that is also not set, the plugin will default to
 `root@$hostname`, where `$hostname` is the hostname of the node.
+* `comment`: Comment for this target
 The `From` header in the email will be set to `$author <$from-address>`.
 
 Example configuration (`/etc/pve/notifications.cfg`):
@@ -138,6 +139,7 @@ The configuration for Gotify target plugins has the 
following options:
 * `server`: The base URL of the Gotify server, e.g. `http://:`
 * `token`: The authentication token. Tokens can be generated within the Gotify
 web interface.
+* `comment`: Comment for this target
 
 NOTE: The Gotify target plugin will respect the HTTP proxy settings from the
  xref:datacenter_configuration_file[datacenter configuration]
@@ -192,6 +194,7 @@ a matcher must be true. Defaults to `all`.
 * `match-calendar`: Match the notification's timestamp against a schedule
 * `match-field`: Match the notification's metadata fields
 * `match-severity`: Match the notification's severity
+* `comment`: Comment for this matcher
 
 Calendar Matching Rules
 ~~~
-- 
2.39.2






[pve-devel] [PATCH v4 pve-manager 08/11] notify: add API routes for smtp endpoints

2023-11-08 Thread Lukas Wagner
The Perl part of the API methods primarily defines the API schema,
checks for any needed privileges and then calls the actual Rust
implementation exposed via perlmod. Any errors returned by the Rust
code are translated into PVE::Exception, so that the API call fails
with the correct HTTP error code.

Signed-off-by: Lukas Wagner 
---
 PVE/API2/Cluster/Notifications.pm | 323 ++
 1 file changed, 323 insertions(+)

diff --git a/PVE/API2/Cluster/Notifications.pm b/PVE/API2/Cluster/Notifications.pm
index 6ff6d89e..81a8c5af 100644
--- a/PVE/API2/Cluster/Notifications.pm
+++ b/PVE/API2/Cluster/Notifications.pm
@@ -193,6 +193,14 @@ __PACKAGE__->register_method ({
};
}
 
+   for my $target (@{$config->get_smtp_endpoints()}) {
+   push @$result, {
+   name => $target->{name},
+   comment => $target->{comment},
+   type => 'smtp',
+   };
+   }
+
$result
};
 
@@ -757,6 +765,321 @@ __PACKAGE__->register_method ({
 }
 });
 
+my $smtp_properties= {
+name => {
+   description => 'The name of the endpoint.',
+   type => 'string',
+   format => 'pve-configid',
+},
+server => {
+   description => 'The address of the SMTP server.',
+   type => 'string',
+},
+port => {
+   description => 'The port to be used. Defaults to 465 for TLS based connections,'
+   . ' 587 for STARTTLS based connections and port 25 for insecure plain-text'
+   . ' connections.',
+   type => 'integer',
+   optional => 1,
+},
+mode => {
+   description => 'Determine which encryption method shall be used for the connection.',
+   type => 'string',
+   enum => [ qw(insecure starttls tls) ],
+   default => 'tls',
+   optional => 1,
+},
+username => {
+   description => 'Username for SMTP authentication',
+   type => 'string',
+   optional => 1,
+},
+password => {
+   description => 'Password for SMTP authentication',
+   type => 'string',
+   optional => 1,
+},
+mailto => {
+   type => 'array',
+   items => {
+   type => 'string',
+   format => 'email-or-username',
+   },
+   description => 'List of email recipients',
+   optional => 1,
+},
+'mailto-user' => {
+   type => 'array',
+   items => {
+   type => 'string',
+   format => 'pve-userid',
+   },
+   description => 'List of users',
+   optional => 1,
+},
+'from-address' => {
+   description => '`From` address for the mail',
+   type => 'string',
+},
+author => {
+   description => 'Author of the mail. Defaults to \'Proxmox VE\'.',
+   type => 'string',
+   optional => 1,
+},
+'comment' => {
+   description => 'Comment',
+   type=> 'string',
+   optional=> 1,
+},
+};
+
+__PACKAGE__->register_method ({
+name => 'get_smtp_endpoints',
+path => 'endpoints/smtp',
+method => 'GET',
+description => 'Returns a list of all smtp endpoints',
+permissions => {
+   description => "Only lists entries where you have 'Mapping.Modify', 'Mapping.Use' or"
+   . " 'Mapping.Audit' permissions on '/mapping/notification/targets/'.",
+   user => 'all',
+},
+protected => 1,
+parameters => {
+   additionalProperties => 0,
+   properties => {},
+},
+returns => {
+   type => 'array',
+   items => {
+   type => 'object',
+   properties => $smtp_properties,
+   },
+   links => [ { rel => 'child', href => '{name}' } ],
+},
+code => sub {
+   my $config = PVE::Notify::read_config();
+   my $rpcenv = PVE::RPCEnvironment::get();
+
+   my $entities = eval {
+   $config->get_smtp_endpoints();
+   };
+   raise_api_error($@) if $@;
+
+   return filter_entities_by_privs($rpcenv, "targets", $entities);
+}
+});
+
+__PACKAGE__->register_method ({
+name => 'get_smtp_endpoint',
+path => 'endpoints/smtp/{name}',
+method => 'GET',
+description => 'Return a specific smtp endpoint',
+permissions => {
+   check => ['or',
+   ['perm', '/mapping/notification/targets/{name}', ['Mapping.Modify']],
+   ['perm', '/mapping/notification/targets/{name}', ['Mapping.Audit']],
+   ],
+},
+protected => 1,
+parameters => {
+   additionalProperties => 0,
+   properties => {
+   name => {
+   type => 'string',
+   format => 'pve-configid',
+   },
+   }
+},
+returns => {
+   type => 'object',
+   properties => {
+   %{ remove_protected_properties($smtp_properties, ['password']) },
+   digest => get_standard_option('pve-config-digest'),
+   }
+
+},
+code => sub {
+   my ($param) = @_;
+   my $name = extract_param(

[pve-devel] [PATCH v4 proxmox-perl-rs 07/11] notify: add bindings for smtp API calls

2023-11-08 Thread Lukas Wagner
Signed-off-by: Lukas Wagner 
---
 common/src/notify.rs | 106 +++
 1 file changed, 106 insertions(+)

diff --git a/common/src/notify.rs b/common/src/notify.rs
index 4fbd705..8a6d76e 100644
--- a/common/src/notify.rs
+++ b/common/src/notify.rs
@@ -15,6 +15,10 @@ mod export {
 use proxmox_notify::endpoints::sendmail::{
 DeleteableSendmailProperty, SendmailConfig, SendmailConfigUpdater,
 };
+use proxmox_notify::endpoints::smtp::{
+DeleteableSmtpProperty, SmtpConfig, SmtpConfigUpdater, SmtpMode, SmtpPrivateConfig,
+SmtpPrivateConfigUpdater,
+};
 use proxmox_notify::matcher::{
CalendarMatcher, DeleteableMatcherProperty, FieldMatcher, MatchModeOperator, MatcherConfig,
 MatcherConfigUpdater, SeverityMatcher,
@@ -271,6 +275,108 @@ mod export {
 api::gotify::delete_gotify_endpoint(&mut config, name)
 }
 
+#[export(serialize_error)]
+fn get_smtp_endpoints(
+#[try_from_ref] this: &NotificationConfig,
+) -> Result<Vec<SmtpConfig>, HttpError> {
+let config = this.config.lock().unwrap();
+api::smtp::get_endpoints(&config)
+}
+
+#[export(serialize_error)]
+fn get_smtp_endpoint(
+#[try_from_ref] this: &NotificationConfig,
+id: &str,
+) -> Result<SmtpConfig, HttpError> {
+let config = this.config.lock().unwrap();
+api::smtp::get_endpoint(&config, id)
+}
+
+#[export(serialize_error)]
+#[allow(clippy::too_many_arguments)]
+fn add_smtp_endpoint(
+#[try_from_ref] this: &NotificationConfig,
+name: String,
+server: String,
+port: Option<u16>,
+mode: Option<SmtpMode>,
+username: Option<String>,
+password: Option<String>,
+mailto: Option<Vec<String>>,
+mailto_user: Option<Vec<String>>,
+from_address: String,
+author: Option<String>,
+comment: Option<String>,
+) -> Result<(), HttpError> {
+let mut config = this.config.lock().unwrap();
+api::smtp::add_endpoint(
+&mut config,
+&SmtpConfig {
+name: name.clone(),
+server,
+port,
+mode,
+username,
+mailto,
+mailto_user,
+from_address,
+author,
+comment,
+},
+&SmtpPrivateConfig { name, password },
+)
+}
+
+#[export(serialize_error)]
+#[allow(clippy::too_many_arguments)]
+fn update_smtp_endpoint(
+#[try_from_ref] this: &NotificationConfig,
+name: &str,
+server: Option<String>,
+port: Option<u16>,
+mode: Option<SmtpMode>,
+username: Option<String>,
+password: Option<String>,
+mailto: Option<Vec<String>>,
+mailto_user: Option<Vec<String>>,
+from_address: Option<String>,
+author: Option<String>,
+comment: Option<String>,
+delete: Option<Vec<DeleteableSmtpProperty>>,
+digest: Option<&str>,
+) -> Result<(), HttpError> {
+let mut config = this.config.lock().unwrap();
+let digest = decode_digest(digest)?;
+
+api::smtp::update_endpoint(
+&mut config,
+name,
+&SmtpConfigUpdater {
+server,
+port,
+mode,
+username,
+mailto,
+mailto_user,
+from_address,
+author,
+comment,
+},
+&SmtpPrivateConfigUpdater { password },
+delete.as_deref(),
+digest.as_deref(),
+)
+}
+
+#[export(serialize_error)]
+fn delete_smtp_endpoint(
+#[try_from_ref] this: &NotificationConfig,
+name: &str,
+) -> Result<(), HttpError> {
+let mut config = this.config.lock().unwrap();
+api::smtp::delete_endpoint(&mut config, name)
+}
+
 #[export(serialize_error)]
 fn get_matchers(
 #[try_from_ref] this: &NotificationConfig,
-- 
2.39.2






[pve-devel] [PATCH v4 proxmox 06/11] notify: add api for smtp endpoints

2023-11-08 Thread Lukas Wagner
Signed-off-by: Lukas Wagner 
---
 proxmox-notify/src/api/mod.rs|  33 +++
 proxmox-notify/src/api/smtp.rs   | 356 +++
 proxmox-notify/src/endpoints/smtp.rs |   8 -
 3 files changed, 389 insertions(+), 8 deletions(-)
 create mode 100644 proxmox-notify/src/api/smtp.rs

diff --git a/proxmox-notify/src/api/mod.rs b/proxmox-notify/src/api/mod.rs
index 8042157..762d448 100644
--- a/proxmox-notify/src/api/mod.rs
+++ b/proxmox-notify/src/api/mod.rs
@@ -1,3 +1,4 @@
+use serde::Serialize;
 use std::collections::HashSet;
 
 use proxmox_http_error::HttpError;
@@ -10,6 +11,8 @@ pub mod gotify;
 pub mod matcher;
 #[cfg(feature = "sendmail")]
 pub mod sendmail;
+#[cfg(feature = "smtp")]
+pub mod smtp;
 
 // We have our own, local versions of http_err and http_bail, because
 // we don't want to wrap the error in anyhow::Error. If we were to do that,
@@ -60,6 +63,10 @@ fn ensure_endpoint_exists(#[allow(unused)] config: &Config, name: &str) -> Result
 {
 exists = exists || gotify::get_endpoint(config, name).is_ok();
 }
+#[cfg(feature = "smtp")]
+{
+exists = exists || smtp::get_endpoint(config, name).is_ok();
+}
 
 if !exists {
 http_bail!(NOT_FOUND, "endpoint '{name}' does not exist")
@@ -100,6 +107,7 @@ fn get_referrers(config: &Config, entity: &str) -> Result<HashSet<String>, HttpError>
 }
 }
 }
+
 Ok(referrers)
 }
 
@@ -148,6 +156,31 @@ fn get_referenced_entities(config: &Config, entity: &str) -> HashSet<String> {
 expanded
 }
 
+#[allow(unused)]
+fn set_private_config_entry<T: Serialize>(
+config: &mut Config,
+private_config: &T,
+typename: &str,
+name: &str,
+) -> Result<(), HttpError> {
+config
+.private_config
+.set_data(name, typename, private_config)
+.map_err(|e| {
+http_err!(
+INTERNAL_SERVER_ERROR,
+"could not save private config for endpoint '{}': {e}",
+name
+)
+})
+}
+
+#[allow(unused)]
+fn remove_private_config_entry(config: &mut Config, name: &str) -> Result<(), HttpError> {
+config.private_config.sections.remove(name);
+Ok(())
+}
+
 #[cfg(test)]
 mod test_helpers {
 use crate::Config;
diff --git a/proxmox-notify/src/api/smtp.rs b/proxmox-notify/src/api/smtp.rs
new file mode 100644
index 000..bd9d7bb
--- /dev/null
+++ b/proxmox-notify/src/api/smtp.rs
@@ -0,0 +1,356 @@
+use proxmox_http_error::HttpError;
+
+use crate::api::{http_bail, http_err};
+use crate::endpoints::smtp::{
+DeleteableSmtpProperty, SmtpConfig, SmtpConfigUpdater, SmtpPrivateConfig,
+SmtpPrivateConfigUpdater, SMTP_TYPENAME,
+};
+use crate::Config;
+
+/// Get a list of all smtp endpoints.
+///
+/// The caller is responsible for any needed permission checks.
+/// Returns a list of all smtp endpoints or a `HttpError` if the config is
+/// erroneous (`500 Internal server error`).
+pub fn get_endpoints(config: &Config) -> Result<Vec<SmtpConfig>, HttpError> {
+config
+.config
+.convert_to_typed_array(SMTP_TYPENAME)
+.map_err(|e| http_err!(NOT_FOUND, "Could not fetch endpoints: {e}"))
+}
+
+/// Get smtp endpoint with given `name`.
+///
+/// The caller is responsible for any needed permission checks.
+/// Returns the endpoint or a `HttpError` if the endpoint was not found (`404 Not found`).
+pub fn get_endpoint(config: &Config, name: &str) -> Result<SmtpConfig, HttpError> {
+config
+.config
+.lookup(SMTP_TYPENAME, name)
+.map_err(|_| http_err!(NOT_FOUND, "endpoint '{name}' not found"))
+}
+
+/// Add a new smtp endpoint.
+///
+/// The caller is responsible for any needed permission checks.
+/// The caller is also responsible for locking the configuration files.
+/// Returns a `HttpError` if:
+///   - an entity with the same name already exists (`400 Bad request`)
+///   - the configuration could not be saved (`500 Internal server error`)
+///   - mailto *and* mailto_user are both set to `None`
+pub fn add_endpoint(
+config: &mut Config,
+endpoint_config: &SmtpConfig,
+private_endpoint_config: &SmtpPrivateConfig,
+) -> Result<(), HttpError> {
+if endpoint_config.name != private_endpoint_config.name {
+// Programming error by the user of the crate, thus we panic
+panic!("name for endpoint config and private config must be identical");
+}
+
+super::ensure_unique(config, &endpoint_config.name)?;
+
+if endpoint_config.mailto.is_none() && endpoint_config.mailto_user.is_none() {
+http_bail!(
+BAD_REQUEST,
+"must at least provide one recipient, either in mailto or in mailto-user"
+);
+}
+
+super::set_private_config_entry(
+config,
+private_endpoint_config,
+SMTP_TYPENAME,
+&endpoint_config.name,
+)?;
+
+config
+.config
+.set_data(&endpoint_config.name, SMTP_TYPENAME, endpoint_config)
+.map_err(|e| {
+http_err!(
+INTERNAL_SERVER_ERROR,
+

[pve-devel] [PATCH v4 proxmox 03/11] sys: email: add `forward`

2023-11-08 Thread Lukas Wagner
This new function forwards an email to new recipients.

Signed-off-by: Lukas Wagner 
---
 proxmox-sys/src/email.rs | 52 +++-
 1 file changed, 51 insertions(+), 1 deletion(-)

diff --git a/proxmox-sys/src/email.rs b/proxmox-sys/src/email.rs
index 8b3a1b6..c94f634 100644
--- a/proxmox-sys/src/email.rs
+++ b/proxmox-sys/src/email.rs
@@ -3,7 +3,7 @@
 use std::io::Write;
 use std::process::{Command, Stdio};
 
-use anyhow::{bail, Error};
+use anyhow::{bail, format_err, Error};
 
 /// Sends multi-part mail with text and/or html to a list of recipients
 ///
@@ -110,6 +110,56 @@ pub fn sendmail(
 Ok(())
 }
 
+/// Forwards an email message to a given list of recipients.
+///
+/// ``sendmail`` is used for sending the mail, thus `message` must be
+/// compatible with that (the message is piped into stdin unmodified).
+pub fn forward(
+mailto: &[&str],
+mailfrom: &str,
+message: &[u8],
+uid: Option<u32>,
+) -> Result<(), Error> {
+use std::os::unix::process::CommandExt;
+
+if mailto.is_empty() {
+bail!("At least one recipient has to be specified!")
+}
+
+let mut builder = Command::new("/usr/sbin/sendmail");
+
+builder
+.args([
+"-N", "never", // never send DSN (avoid mail loops)
+"-f", mailfrom, "--",
+])
+.args(mailto)
+.stdin(Stdio::piped())
+.stdout(Stdio::null())
+.stderr(Stdio::null());
+
+if let Some(uid) = uid {
+builder.uid(uid);
+}
+
+let mut process = builder
+.spawn()
+.map_err(|err| format_err!("could not spawn sendmail process: {err}"))?;
+
+process
+.stdin
+.take()
+.unwrap()
+.write_all(message)
+.map_err(|err| format_err!("couldn't write to sendmail stdin: {err}"))?;
+
+process
+.wait()
+.map_err(|err| format_err!("sendmail did not exit successfully: {err}"))?;
+
+Ok(())
+}
+
 #[cfg(test)]
 mod test {
 use crate::email::sendmail;
-- 
2.39.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH v4 proxmox 04/11] notify: add mechanisms for email message forwarding

2023-11-08 Thread Lukas Wagner
As preparation for the integration of `proxmox-mail-forward` into the
notification system, this commit makes a few changes that allow us to
forward raw email messages (as passed from postfix).

For mail-based notification targets, the email will be forwarded
as-is, including all headers. The only thing that changes is the
message envelope.
For other notification targets, the mail is parsed using the
`mail-parser` crate, which allows us to extract a subject and a body.
As a body we use the plain-text version of the mail. If an email is
HTML-only, the `mail-parser` crate will automatically attempt to
transform the HTML into readable plain text.
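
The dispatch described above can be condensed into a small std-only sketch (field types and handling simplified from the patch; this is illustrative, not the crate's actual code): mail-based targets forward the raw message unmodified, while other targets fall back to the parsed title/body.

```rust
// Condensed sketch of the `Content` enum and the per-endpoint dispatch
// described above (std-only; simplified from the patch).
enum Content {
    Notification { title: String, body: String },
    ForwardedMail { raw: Vec<u8>, title: String, body: String },
}

// A mail-based endpoint (sendmail) forwards the raw message unmodified.
fn sendmail_payload(content: &Content) -> Vec<u8> {
    match content {
        Content::Notification { title, body } => {
            format!("Subject: {title}\n\n{body}").into_bytes()
        }
        Content::ForwardedMail { raw, .. } => raw.clone(),
    }
}

// Other endpoints (e.g. Gotify) use the parsed fallback title/body.
fn display_payload(content: &Content) -> (String, String) {
    match content {
        Content::Notification { title, body }
        | Content::ForwardedMail { title, body, .. } => (title.clone(), body.clone()),
    }
}

fn main() {
    let fwd = Content::ForwardedMail {
        raw: b"Subject: disk full\n\nplain text part".to_vec(),
        title: "disk full".into(),
        body: "plain text part".into(),
    };
    // The raw bytes pass through untouched; the fallback title survives.
    assert_eq!(sendmail_payload(&fwd), b"Subject: disk full\n\nplain text part".to_vec());
    assert_eq!(display_payload(&fwd).0, "disk full");
    println!("ok");
}
```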

Signed-off-by: Lukas Wagner 
---
 Cargo.toml   |  1 +
 proxmox-notify/Cargo.toml|  2 ++
 proxmox-notify/src/endpoints/gotify.rs   |  3 ++
 proxmox-notify/src/endpoints/sendmail.rs |  5 +++
 proxmox-notify/src/lib.rs| 41 
 5 files changed, 52 insertions(+)

diff --git a/Cargo.toml b/Cargo.toml
index f8bc181..3d81d85 100644
--- a/Cargo.toml
+++ b/Cargo.toml
@@ -64,6 +64,7 @@ lazy_static = "1.4"
 ldap3 = { version = "0.11", default-features = false }
 libc = "0.2.107"
 log = "0.4.17"
+mail-parser = "0.8.2"
 native-tls = "0.2"
 nix = "0.26.1"
 once_cell = "1.3.1"
diff --git a/proxmox-notify/Cargo.toml b/proxmox-notify/Cargo.toml
index 4812896..f2b4db5 100644
--- a/proxmox-notify/Cargo.toml
+++ b/proxmox-notify/Cargo.toml
@@ -12,6 +12,7 @@ anyhow.workspace = true
 handlebars = { workspace = true }
 lazy_static.workspace = true
 log.workspace = true
+mail-parser = { workspace = true, optional = true }
 once_cell.workspace = true
 openssl.workspace = true
 proxmox-http = { workspace = true, features = ["client-sync"], optional = true }
@@ -28,5 +29,6 @@ serde_json.workspace = true
 
 [features]
 default = ["sendmail", "gotify"]
+mail-forwarder = ["dep:mail-parser"]
 sendmail = ["dep:proxmox-sys"]
 gotify = ["dep:proxmox-http"]
diff --git a/proxmox-notify/src/endpoints/gotify.rs b/proxmox-notify/src/endpoints/gotify.rs
index 1c307a4..5713d99 100644
--- a/proxmox-notify/src/endpoints/gotify.rs
+++ b/proxmox-notify/src/endpoints/gotify.rs
@@ -19,6 +19,7 @@ fn severity_to_priority(level: Severity) -> u32 {
 Severity::Notice => 3,
 Severity::Warning => 5,
 Severity::Error => 9,
+Severity::Unknown => 3,
 }
 }
 
@@ -94,6 +95,8 @@ impl Endpoint for GotifyEndpoint {
 
 (rendered_title, rendered_message)
 }
+#[cfg(feature = "mail-forwarder")]
+Content::ForwardedMail { title, body, .. } => (title.clone(), body.clone()),
 };
 
 // We don't have a TemplateRenderer::Markdown yet, so simply put everything
diff --git a/proxmox-notify/src/endpoints/sendmail.rs b/proxmox-notify/src/endpoints/sendmail.rs
index a601744..3ef33b6 100644
--- a/proxmox-notify/src/endpoints/sendmail.rs
+++ b/proxmox-notify/src/endpoints/sendmail.rs
@@ -134,6 +134,11 @@ impl Endpoint for SendmailEndpoint {
 )
 .map_err(|err| Error::NotifyFailed(self.config.name.clone(), err.into()))
 }
+#[cfg(feature = "mail-forwarder")]
+Content::ForwardedMail { raw, uid, .. } => {
+proxmox_sys::email::forward(&recipients_str, &mailfrom, raw, *uid)
+.map_err(|err| Error::NotifyFailed(self.config.name.clone(), err.into()))
+}
 }
 }
 
diff --git a/proxmox-notify/src/lib.rs b/proxmox-notify/src/lib.rs
index 9997cef..ada1b0a 100644
--- a/proxmox-notify/src/lib.rs
+++ b/proxmox-notify/src/lib.rs
@@ -102,6 +102,8 @@ pub enum Severity {
 Warning,
 /// Error
 Error,
+/// Unknown severity (e.g. forwarded system mails)
+Unknown,
 }
 
 impl Display for Severity {
@@ -111,6 +113,7 @@ impl Display for Severity {
 Severity::Notice => f.write_str("notice"),
 Severity::Warning => f.write_str("warning"),
 Severity::Error => f.write_str("error"),
+Severity::Unknown => f.write_str("unknown"),
 }
 }
 }
@@ -123,6 +126,7 @@ impl FromStr for Severity {
 "notice" => Ok(Self::Notice),
 "warning" => Ok(Self::Warning),
 "error" => Ok(Self::Error),
+"unknown" => Ok(Self::Unknown),
 _ => Err(Error::Generic(format!("invalid severity {s}"))),
 }
 }
@@ -148,6 +152,18 @@ pub enum Content {
 /// Data that can be used for template rendering.
 data: Value,
 },
+#[cfg(feature = "mail-forwarder")]
+ForwardedMail {
+/// Raw mail contents
+raw: Vec<u8>,
+/// Fallback title
+title: String,
+/// Fallback body
+body: String,
+/// UID to use when calling sendmail
+#[allow(dead_code)] // Unused in some feature flag permutations
+uid: Option<u32>,
+},
 }
 
 #[derive(Debug, Clone)]
@@ -190,6 +206,31 @@ impl Notification

[pve-devel] [PATCH v4 debcargo-conf 02/11] update lettre to 0.11.1

2023-11-08 Thread Lukas Wagner
Signed-off-by: Lukas Wagner 
---
 src/lettre/debian/changelog   | 10 +++
 .../debian/patches/downgrade_fastrand.patch   | 13 
 .../debian/patches/downgrade_idna.patch   | 13 
 src/lettre/debian/patches/downgrade_url.patch | 13 
 .../patches/remove_unused_features.patch  | 69 ++-
 src/lettre/debian/patches/series  |  4 +-
 .../patches/upgrade_quoted_printable.patch| 13 
 7 files changed, 88 insertions(+), 47 deletions(-)
 create mode 100644 src/lettre/debian/patches/downgrade_fastrand.patch
 create mode 100644 src/lettre/debian/patches/downgrade_idna.patch
 create mode 100644 src/lettre/debian/patches/downgrade_url.patch
 delete mode 100644 src/lettre/debian/patches/upgrade_quoted_printable.patch

diff --git a/src/lettre/debian/changelog b/src/lettre/debian/changelog
index d49cbb042..e92c5c070 100644
--- a/src/lettre/debian/changelog
+++ b/src/lettre/debian/changelog
@@ -1,3 +1,13 @@
+rust-lettre (0.11.1-1) UNRELEASED-FIXME-AUTOGENERATED-DEBCARGO; urgency=medium
+
+  * Package lettre 0.11.1 from crates.io using debcargo 2.6.0
+  * Downgrade fastrand from 2.0 to 1.8
+  * Downgrade idna from 0.4 to 0.3
+  * Downgrade url from 2.4 to 2.3
+  * Drop patch that upgrades quoted_printable
+
+ -- Lukas Wagner   Wed, 08 Nov 2023 13:32:49 +0100
+
 rust-lettre (0.10.4-1~bpo12+pve1) proxmox-rust; urgency=medium
 
   * Rebuild for Debian Bookworm / Proxmox
diff --git a/src/lettre/debian/patches/downgrade_fastrand.patch b/src/lettre/debian/patches/downgrade_fastrand.patch
new file mode 100644
index 0..975efeb1c
--- /dev/null
+++ b/src/lettre/debian/patches/downgrade_fastrand.patch
@@ -0,0 +1,13 @@
+diff --git a/Cargo.toml b/Cargo.toml
+index 072ea3a..5decb37 100644
+--- a/Cargo.toml
 b/Cargo.toml
+@@ -150,7 +150,7 @@ version = "0.2.1"
+ default-features = false
+ 
+ [dependencies.fastrand]
+-version = "2.0"
++version = "1.8"
+ optional = true
+ 
+ [dependencies.futures-io]
diff --git a/src/lettre/debian/patches/downgrade_idna.patch b/src/lettre/debian/patches/downgrade_idna.patch
new file mode 100644
index 0..1cfaaa26c
--- /dev/null
+++ b/src/lettre/debian/patches/downgrade_idna.patch
@@ -0,0 +1,13 @@
+diff --git a/Cargo.toml b/Cargo.toml
+index 5decb37..09d2b9b 100644
+--- a/Cargo.toml
 b/Cargo.toml
+@@ -176,7 +176,7 @@ version = "1"
+ optional = true
+ 
+ [dependencies.idna]
+-version = "0.4"
++version = "0.3"
+ 
+ [dependencies.mime]
+ version = "0.3.4"
diff --git a/src/lettre/debian/patches/downgrade_url.patch b/src/lettre/debian/patches/downgrade_url.patch
new file mode 100644
index 0..4da907540
--- /dev/null
+++ b/src/lettre/debian/patches/downgrade_url.patch
@@ -0,0 +1,13 @@
+diff --git a/Cargo.toml b/Cargo.toml
+index 09d2b9b..5004a3b 100644
+--- a/Cargo.toml
 b/Cargo.toml
+@@ -237,7 +237,7 @@ optional = true
+ default-features = false
+ 
+ [dependencies.url]
+-version = "2.4"
++version = "2.3"
+ optional = true
+ 
+ [dependencies.uuid]
diff --git a/src/lettre/debian/patches/remove_unused_features.patch b/src/lettre/debian/patches/remove_unused_features.patch
index 0229e41aa..7ce45be0f 100644
--- a/src/lettre/debian/patches/remove_unused_features.patch
+++ b/src/lettre/debian/patches/remove_unused_features.patch
@@ -1,8 +1,8 @@
 diff --git a/Cargo.toml b/Cargo.toml
-index 13c34b6..b4413b6 100644
+index 13e3b77..072ea3a 100644
 --- a/Cargo.toml
 +++ b/Cargo.toml
-@@ -114,32 +114,10 @@ required-features = [
+@@ -114,24 +114,6 @@ required-features = [
  "builder",
  ]
  
@@ -27,6 +27,9 @@ index 13c34b6..b4413b6 100644
  [[bench]]
  name = "transport_smtp"
  harness = false
+@@ -140,10 +122,6 @@ harness = false
+ name = "mailbox_parsing"
+ harness = false
  
 -[dependencies.async-std]
 -version = "1.8"
@@ -35,8 +38,8 @@ index 13c34b6..b4413b6 100644
  [dependencies.async-trait]
  version = "0.1"
  optional = true
-@@ -217,19 +195,6 @@ optional = true
- version = "0.8"
+@@ -224,19 +202,6 @@ optional = true
+ version = "0.9"
  optional = true
  
 -[dependencies.rustls]
@@ -55,19 +58,19 @@ index 13c34b6..b4413b6 100644
  [dependencies.serde]
  version = "1"
  features = ["derive"]
-@@ -248,11 +213,6 @@ optional = true
- version = "0.4.4"
+@@ -255,11 +220,6 @@ optional = true
+ version = "0.5.1"
  optional = true
  
 -[dependencies.tokio1_boring]
--version = "2.1.4"
+-version = "3"
 -optional = true
 -package = "tokio-boring"
 -
  [dependencies.tokio1_crate]
  version = "1"
  optional = true
-@@ -263,11 +223,6 @@ version = "0.3"
+@@ -270,11 +230,6 @@ version = "0.3"
  optional = true
  package = "tokio-native-tls"
  
@@ -79,8 +82,8 @@ index 13c34b6..b4413b6 100644
  [dependencies.tracing]
  version = "0.1.16"
  features = ["std"]
-@@ -283,10 +238,6 @@ optional = true
- version = "0.23"
+@@ -294,10 +249,6 @@ optional = true
+ version = "0.25"
  optional = true
  
 -[dev-dependencies.async-std]
@@ -88,35 +91,35 @@ index 13c34b6..b4413b6 100644
 -features = ["attributes"]
 -
  [dev-dependencies.criterion]
- ver

[pve-devel] [PATCH v4 pve-docs 10/11] notifications: document SMTP endpoints

2023-11-08 Thread Lukas Wagner
Signed-off-by: Lukas Wagner 
---
 notifications.adoc | 47 ++
 1 file changed, 47 insertions(+)

diff --git a/notifications.adoc b/notifications.adoc
index 764ec72..acbdfae 100644
--- a/notifications.adoc
+++ b/notifications.adoc
@@ -67,6 +67,7 @@ accomodate multiple recipients.
 set, the plugin will fall back to the `email_from` setting from
 `datacenter.cfg`. If that is also not set, the plugin will default to
 `root@$hostname`, where `$hostname` is the hostname of the node.
+The `From` header in the email will be set to `$author <$from-address>`.
 
 Example configuration (`/etc/pve/notifications.cfg`):
 
@@ -78,6 +79,52 @@ sendmail: example
 comment Send to multiple users/addresses
 
 
+SMTP
+
+
+SMTP notification targets can send emails directly to an SMTP mail relay.
+
+The configuration for SMTP target plugins has the following options:
+
+* `mailto`: E-Mail address to which the notification shall be sent. Can be
+set multiple times to accommodate multiple recipients.
+* `mailto-user`: Users to which emails shall be sent. The user's email
+address will be looked up in `users.cfg`. Can be set multiple times to
+accommodate multiple recipients.
+* `author`: Sets the author of the E-Mail. Defaults to `Proxmox VE`.
+* `from-address`: Sets the From-address of the email. SMTP relays might require
+that this address is owned by the user in order to avoid spoofing.
+The `From` header in the email will be set to `$author <$from-address>`.
+* `username`: Username to use during authentication. If no username is set,
+no authentication will be performed. The PLAIN and LOGIN authentication methods
+are supported.
+* `password`: Password to use when authenticating.
+* `mode`: Sets the encryption mode (`insecure`, `starttls` or `tls`). Defaults
+to `tls`.
+* `server`: Address/IP of the SMTP relay
+* `port`: The port to connect to. If not set, the used port
+defaults to 25 (`insecure`), 465 (`tls`) or 587 (`starttls`), depending on the
+value of `mode`.
+* `comment`: Comment for this target
+
+Example configuration (`/etc/pve/notifications.cfg`):
+
+smtp: example
+mailto-user root@pam
+mailto-user admin@pve
+mailto m...@example.com
+from-address p...@example.com
+username pve1
+server mail.example.com
+mode starttls
+
+The matching entry in `/etc/pve/priv/notifications.cfg`, containing the
+secret token:
+
+smtp: example
+password somepassword
+
+
 Gotify
 ~~
 
-- 
2.39.2
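The mode-dependent port fallback documented in the patch above (25 for `insecure`, 465 for `tls`, 587 for `starttls`) can be sketched as a small helper; this is an illustrative reconstruction of the documented behavior, not the crate's actual code:

```rust
// Sketch of the default-port fallback documented above. `port` wins if
// explicitly configured; otherwise the default depends on `mode`.
fn effective_port(port: Option<u16>, mode: &str) -> u16 {
    port.unwrap_or(match mode {
        "insecure" => 25,
        "starttls" => 587,
        // `tls` is the default mode
        _ => 465,
    })
}

fn main() {
    assert_eq!(effective_port(None, "starttls"), 587);
    assert_eq!(effective_port(None, "tls"), 465);
    assert_eq!(effective_port(Some(2525), "insecure"), 2525);
    println!("{}", effective_port(None, "insecure")); // prints "25"
}
```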






[pve-devel] [PATCH v4 proxmox-widget-toolkit 09/11] panel: notification: add gui for SMTP endpoints

2023-11-08 Thread Lukas Wagner
This new endpoint configuration panel is embedded in the existing
EndpointEditBase dialog window. This commit also factors out some of
the non-trivial common form elements that are shared between the new
panel and the already existing SendmailEditPanel into a separate panel
EmailRecipientPanel.

Signed-off-by: Lukas Wagner 
---
 src/Makefile |   2 +
 src/Schema.js|   5 +
 src/panel/EmailRecipientPanel.js |  88 +++
 src/panel/SendmailEditPanel.js   |  58 +-
 src/panel/SmtpEditPanel.js   | 183 +++
 5 files changed, 281 insertions(+), 55 deletions(-)
 create mode 100644 src/panel/EmailRecipientPanel.js
 create mode 100644 src/panel/SmtpEditPanel.js

diff --git a/src/Makefile b/src/Makefile
index c6d31c3..01145b1 100644
--- a/src/Makefile
+++ b/src/Makefile
@@ -71,7 +71,9 @@ JSSRC=\
panel/ACMEAccount.js\
panel/ACMEPlugin.js \
panel/ACMEDomains.js\
+   panel/EmailRecipientPanel.js\
panel/SendmailEditPanel.js  \
+   panel/SmtpEditPanel.js  \
panel/StatusView.js \
panel/TfaView.js\
panel/NotesView.js  \
diff --git a/src/Schema.js b/src/Schema.js
index 37ecd88..28e1037 100644
--- a/src/Schema.js
+++ b/src/Schema.js
@@ -43,6 +43,11 @@ Ext.define('Proxmox.Schema', { // a singleton
ipanel: 'pmxSendmailEditPanel',
iconCls: 'fa-envelope-o',
},
+   smtp: {
+   name: gettext('SMTP'),
+   ipanel: 'pmxSmtpEditPanel',
+   iconCls: 'fa-envelope-o',
+   },
gotify: {
name: gettext('Gotify'),
ipanel: 'pmxGotifyEditPanel',
diff --git a/src/panel/EmailRecipientPanel.js b/src/panel/EmailRecipientPanel.js
new file mode 100644
index 000..b2bc03c
--- /dev/null
+++ b/src/panel/EmailRecipientPanel.js
@@ -0,0 +1,88 @@
+Ext.define('Proxmox.panel.EmailRecipientPanel', {
+extend: 'Ext.panel.Panel',
+xtype: 'pmxEmailRecipientPanel',
+mixins: ['Proxmox.Mixin.CBind'],
+border: false,
+
+mailValidator: function() {
+   let mailto_user = this.down(`[name=mailto-user]`);
+   let mailto = this.down(`[name=mailto]`);
+
+   if (!mailto_user.getValue()?.length && !mailto.getValue()) {
+   return gettext('Either mailto or mailto-user must be set');
+   }
+
+   return true;
+},
+
+items: [
+   {
+   layout: 'anchor',
+   border: false,
+   items: [
+   {
+   xtype: 'pmxUserSelector',
+   name: 'mailto-user',
+   multiSelect: true,
+   allowBlank: true,
+   editable: false,
+   skipEmptyText: true,
+   fieldLabel: gettext('Recipient(s)'),
+   cbind: {
+   deleteEmpty: '{!isCreate}',
+   },
+   validator: function() {
+   return this.up('pmxEmailRecipientPanel').mailValidator();
+   },
+   autoEl: {
+   tag: 'div',
+   'data-qtip': gettext('The notification will be sent to the user\'s configured mail address'),
+   },
+   listConfig: {
+   width: 600,
+   columns: [
+   {
+   header: gettext('User'),
+   sortable: true,
+   dataIndex: 'userid',
+   renderer: Ext.String.htmlEncode,
+   flex: 1,
+   },
+   {
+   header: gettext('E-Mail'),
+   sortable: true,
+   dataIndex: 'email',
+   renderer: Ext.String.htmlEncode,
+   flex: 1,
+   },
+   {
+   header: gettext('Comment'),
+   sortable: false,
+   dataIndex: 'comment',
+   renderer: Ext.String.htmlEncode,
+   flex: 1,
+   },
+   ],
+   },
+   },
+   {
+   xtype: 'proxmoxtextfield',
+   fieldLabel: gettext('Additional Recipient(s)'),
+   name: 'mailto',
+   allowBlank: true,
+   emptyText: 'u...@example.com, ...',
+   cbind: {
+   deleteEmpty: '{!isCreate}',
+   },
+   autoEl: {
+   tag: 'div',
+   'da

[pve-devel] [PATCH v4 proxmox 05/11] notify: add 'smtp' endpoint

2023-11-08 Thread Lukas Wagner
This commit adds a new endpoint type, namely 'smtp'. This endpoint
uses the `lettre` crate to directly send emails to SMTP relays.

The `lettre` crate was chosen since it is by far the most popular SMTP
implementation for Rust that looks like it is well maintained.
Also, it includes async support (for when we want to extend
proxmox-notify to be async).

For this new endpoint type, a new section-config type was introduced
(smtp). It has the same fields as the type for `sendmail`, with the
addition of some new options (smtp server, authentication, tls mode,
etc.).

Some of the behavior that is shared between sendmail and smtp
endpoints has been moved to a new `endpoints::common::mail` module.
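
The recipient merging now shared in `endpoints::common::mail` can be sketched std-only as follows (the user-to-address lookup is passed in as a closure here instead of the crate's context trait, so this is an assumption-laden sketch rather than the actual implementation):

```rust
use std::collections::HashSet;

// Std-only sketch of the shared recipient merging: direct addresses and
// resolved user addresses are collected into one deduplicated set.
fn get_recipients(
    email_addrs: Option<&[String]>,
    users: Option<&[String]>,
    lookup: impl Fn(&str) -> Option<String>, // stands in for context lookup
) -> HashSet<String> {
    let mut recipients = HashSet::new();
    if let Some(addrs) = email_addrs {
        recipients.extend(addrs.iter().cloned());
    }
    if let Some(users) = users {
        // Users without a configured address are silently skipped.
        recipients.extend(users.iter().filter_map(|u| lookup(u.as_str())));
    }
    recipients
}

fn main() {
    let addrs = vec!["a@example.com".to_string(), "b@example.com".to_string()];
    let users = vec!["root@pam".to_string(), "nomail@pve".to_string()];
    let set = get_recipients(Some(&addrs), Some(&users), |u| {
        (u == "root@pam").then(|| "a@example.com".to_string())
    });
    // "a@example.com" is deduplicated; "nomail@pve" resolves to nothing.
    assert_eq!(set.len(), 2);
    println!("{}", set.len()); // prints "2"
}
```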

Signed-off-by: Lukas Wagner 
---
 Cargo.toml  |   1 +
 proxmox-notify/Cargo.toml   |   4 +-
 proxmox-notify/src/config.rs|  23 ++
 proxmox-notify/src/endpoints/common/mail.rs |  24 ++
 proxmox-notify/src/endpoints/common/mod.rs  |   2 +
 proxmox-notify/src/endpoints/mod.rs |   4 +
 proxmox-notify/src/endpoints/sendmail.rs|  22 +-
 proxmox-notify/src/endpoints/smtp.rs| 258 
 proxmox-notify/src/lib.rs   |  16 ++
 9 files changed, 336 insertions(+), 18 deletions(-)
 create mode 100644 proxmox-notify/src/endpoints/common/mail.rs
 create mode 100644 proxmox-notify/src/endpoints/common/mod.rs
 create mode 100644 proxmox-notify/src/endpoints/smtp.rs

diff --git a/Cargo.toml b/Cargo.toml
index 3d81d85..9f247be 100644
--- a/Cargo.toml
+++ b/Cargo.toml
@@ -62,6 +62,7 @@ http = "0.2"
 hyper = "0.14.5"
 lazy_static = "1.4"
 ldap3 = { version = "0.11", default-features = false }
+lettre = "0.11.1"
 libc = "0.2.107"
 log = "0.4.17"
 mail-parser = "0.8.2"
diff --git a/proxmox-notify/Cargo.toml b/proxmox-notify/Cargo.toml
index f2b4db5..8741b9b 100644
--- a/proxmox-notify/Cargo.toml
+++ b/proxmox-notify/Cargo.toml
@@ -11,6 +11,7 @@ exclude.workspace = true
 anyhow.workspace = true
 handlebars = { workspace = true }
 lazy_static.workspace = true
+lettre = { workspace = true, optional = true }
 log.workspace = true
 mail-parser = { workspace = true, optional = true }
 once_cell.workspace = true
@@ -28,7 +29,8 @@ serde = { workspace = true, features = ["derive"]}
 serde_json.workspace = true
 
 [features]
-default = ["sendmail", "gotify"]
+default = ["sendmail", "gotify", "smtp"]
 mail-forwarder = ["dep:mail-parser"]
 sendmail = ["dep:proxmox-sys"]
 gotify = ["dep:proxmox-http"]
+smtp = ["dep:lettre"]
diff --git a/proxmox-notify/src/config.rs b/proxmox-notify/src/config.rs
index a86995e..fe25ea7 100644
--- a/proxmox-notify/src/config.rs
+++ b/proxmox-notify/src/config.rs
@@ -28,6 +28,17 @@ fn config_init() -> SectionConfig {
 SENDMAIL_SCHEMA,
 ));
 }
+#[cfg(feature = "smtp")]
+{
+use crate::endpoints::smtp::{SmtpConfig, SMTP_TYPENAME};
+
+const SMTP_SCHEMA: &ObjectSchema = SmtpConfig::API_SCHEMA.unwrap_object_schema();
+config.register_plugin(SectionConfigPlugin::new(
+SMTP_TYPENAME.to_string(),
+Some(String::from("name")),
+SMTP_SCHEMA,
+));
+}
 #[cfg(feature = "gotify")]
 {
 use crate::endpoints::gotify::{GotifyConfig, GOTIFY_TYPENAME};
@@ -80,6 +91,18 @@ fn private_config_init() -> SectionConfig {
 ));
 }
 
+#[cfg(feature = "smtp")]
+{
+use crate::endpoints::smtp::{SmtpPrivateConfig, SMTP_TYPENAME};
+
+const SMTP_SCHEMA: &ObjectSchema = SmtpPrivateConfig::API_SCHEMA.unwrap_object_schema();
+config.register_plugin(SectionConfigPlugin::new(
+SMTP_TYPENAME.to_string(),
+Some(String::from("name")),
+SMTP_SCHEMA,
+));
+}
+
 config
 }
 
diff --git a/proxmox-notify/src/endpoints/common/mail.rs b/proxmox-notify/src/endpoints/common/mail.rs
new file mode 100644
index 000..0929d7c
--- /dev/null
+++ b/proxmox-notify/src/endpoints/common/mail.rs
@@ -0,0 +1,24 @@
+use std::collections::HashSet;
+
+use crate::context;
+
+pub(crate) fn get_recipients(
+email_addrs: Option<&[String]>,
+users: Option<&[String]>,
+) -> HashSet {
+let mut recipients = HashSet::new();
+
+if let Some(mailto_addrs) = email_addrs {
+for addr in mailto_addrs {
+recipients.insert(addr.clone());
+}
+}
+if let Some(users) = users {
+for user in users {
+if let Some(addr) = context::context().lookup_email_for_user(user) {
+recipients.insert(addr);
+}
+}
+}
+recipients
+}
diff --git a/proxmox-notify/src/endpoints/common/mod.rs b/proxmox-notify/src/endpoints/common/mod.rs
new file mode 100644
index 000..60e0761
--- /dev/null
+++ b/proxmox-notify/src/endpoints/common/mod.rs
@@ -0,0 +1,2 @@
+#[cfg(any(feature = "sendmail", feature = "smtp"))]
+pub(crate) mod mail;
diff --git a/proxmox-notify/src/endpoints/mod.rs b/proxmox-notify/src/endpoints/mod.rs

Re: [pve-devel] [RFC pve-network] do not remove DHCP mapping on stop

2023-11-08 Thread DERUMIER, Alexandre
Hi Stefan !

> I wonder if we couldn't add a property on subnet or dhcp,
> where user could choose between ephemeral ip (create a vm start /
> delete at vm stop),
> 
> or reserved ip
> 
> (reserved a vm|nic create,  deleted a vm|nic delete)
> 

>>That sounds like something we could implement. I've talked to Stefan
>>Lendl today and we thought that ephemeral is something we can leave
>>out
>>for now and implement later (although we definitely wanted to add
>>support for it!)

ok, no problem. 

>>Since we want to have at least basic DHCP support in the next release
>>(which is not far away) we decided to focus on a minimal featureset
>>first and then build upon it.

Would it be possible to send a merged series with both your current work
and Stefan Lendl's work?

I'm a bit lost in the mailing list threads.


I'll try to update external ipam code with your current patches.
I'm currently doing tests with Netbox; it seems that IP ranges are
supported now, and finding free IPs in ranges too.
I'll look at phpipam.







[pve-devel] [PATCH manager v2 3/3] ui: bulk actions: add clear filters button

2023-11-08 Thread Dominik Csapak
to be able to clear all of them at once

Signed-off-by: Dominik Csapak 
---
no changes
 www/manager6/window/BulkAction.js | 26 ++
 1 file changed, 26 insertions(+)

diff --git a/www/manager6/window/BulkAction.js b/www/manager6/window/BulkAction.js
index cc561bd6..d3650f83 100644
--- a/www/manager6/window/BulkAction.js
+++ b/www/manager6/window/BulkAction.js
@@ -174,6 +174,13 @@ Ext.define('PVE.window.BulkAction', {
let tagList = Object.keys(tagMap).map(key => ({ value: key }));
let haList = Object.keys(haMap).map(key => [key, key]);
 
+   let clearFilters = function() {
+   me.down('#namefilter').setValue('');
['name', 'status', 'pool', 'type', 'hastate', 'includetag', 'excludetag', 'vmid'].forEach((filter) => {
+   me.down(`#${filter}filter`).setValue('');
+   });
+   };
+
let filterChange = function() {
let nameValue = me.down('#namefilter').getValue();
let filterCount = 0;
@@ -210,10 +217,13 @@ Ext.define('PVE.window.BulkAction', {
}
 
let fieldSet = me.down('#filters');
+   let clearBtn = me.down('#clearBtn');
if (filterCount) {
fieldSet.setTitle(Ext.String.format(gettext('Filters ({0})'), filterCount));
+   clearBtn.setDisabled(false);
} else {
fieldSet.setTitle(gettext('Filters'));
+   clearBtn.setDisabled(true);
}
 
let filterFn = function(value) {
@@ -414,6 +424,22 @@ Ext.define('PVE.window.BulkAction', {
},
],
},
+   {
+   xtype: 'container',
+   layout: {
+   type: 'vbox',
+   align: 'end',
+   },
+   items: [
+   {
+   xtype: 'button',
+   itemId: 'clearBtn',
+   text: gettext('Clear Filters'),
+   disabled: true,
+   handler: clearFilters,
+   },
+   ],
+   },
],
},
],
-- 
2.30.2






[pve-devel] [PATCH manager v2 1/3] ui: bulk actions: reorder fields

2023-11-08 Thread Dominik Csapak
to use less vertical space

also remove the local-disk migration warning since it is not that helpful

Signed-off-by: Dominik Csapak 
---
new in v2
 www/manager6/window/BulkAction.js | 75 +--
 1 file changed, 41 insertions(+), 34 deletions(-)

diff --git a/www/manager6/window/BulkAction.js b/www/manager6/window/BulkAction.js
index 949e167e..950b454d 100644
--- a/www/manager6/window/BulkAction.js
+++ b/www/manager6/window/BulkAction.js
@@ -55,51 +55,52 @@ Ext.define('PVE.window.BulkAction', {
if (me.action === 'migrateall') {
items.push(
{
-   xtype: 'pveNodeSelector',
-   name: 'target',
-   disallowedNodes: [me.nodename],
-   fieldLabel: gettext('Target node'),
-   allowBlank: false,
-   onlineValidator: true,
-   },
-   {
-   xtype: 'proxmoxintegerfield',
-   name: 'maxworkers',
-   minValue: 1,
-   maxValue: 100,
-   value: 1,
-   fieldLabel: gettext('Parallel jobs'),
-   allowBlank: false,
+   xtype: 'fieldcontainer',
+   layout: 'hbox',
+   items: [{
+   flex: 1,
+   xtype: 'pveNodeSelector',
+   name: 'target',
+   disallowedNodes: [me.nodename],
+   fieldLabel: gettext('Target node'),
+   labelWidth: 200,
+   allowBlank: false,
+   onlineValidator: true,
+   padding: '0 10 0 0',
+   },
+   {
+   xtype: 'proxmoxintegerfield',
+   name: 'maxworkers',
+   minValue: 1,
+   maxValue: 100,
+   value: 1,
+   fieldLabel: gettext('Parallel jobs'),
+   allowBlank: false,
+   flex: 1,
+   }],
},
{
xtype: 'fieldcontainer',
-   fieldLabel: gettext('Allow local disk migration'),
layout: 'hbox',
items: [{
xtype: 'proxmoxcheckbox',
+   fieldLabel: gettext('Allow local disk migration'),
name: 'with-local-disks',
+   labelWidth: 200,
checked: true,
uncheckedValue: 0,
-   listeners: {
change: (cb, val) => me.down('#localdiskwarning').setVisible(val),
-   },
+   flex: 1,
+   padding: '0 10 0 0',
},
{
-   itemId: 'localdiskwarning',
+   itemId: 'lxcwarning',
xtype: 'displayfield',
-   flex: 1,
-   padding: '0 0 0 10',
userCls: 'pmx-hint',
value: 'Note: Migration with local disks might take long.',
value: 'Warning: Running CTs will be migrated in Restart Mode.',
+   hidden: true, // only visible if running container chosen
+   flex: 1,
}],
},
-   {
-   itemId: 'lxcwarning',
-   xtype: 'displayfield',
-   userCls: 'pmx-hint',
-   value: 'Warning: Running CTs will be migrated in Restart 
Mode.',
-   hidden: true, // only visible if running container chosen
-   },
);
} else if (me.action === 'startall') {
items.push({
@@ -108,25 +109,31 @@ Ext.define('PVE.window.BulkAction', {
value: 1,
});
} else if (me.action === 'stopall') {
-   items.push(
-   {
+   items.push({
+   xtype: 'fieldcontainer',
+   layout: 'hbox',
+   items: [{
xtype: 'proxmoxcheckbox',
name: 'force-stop',
+   labelWidth: 120,
fieldLabel: gettext('Force Stop'),
boxLabel: gettext('Force stop guest if shutdown times out.'),
checked: true,
uncheckedValue: 0,
+   flex: 1,
},
{
xtype: 'proxmoxintegerfield',
name: 'timeout',
fieldLabel: gettext('Timeout (s)'),
+   labelWidth: 120,
emptyText: '180',
minValue: 0,
maxValue: 7200,
allowBlank: true,
-  

[pve-devel] [PATCH manager v2 2/3] ui: bulk actions: rework filters and include tags

2023-11-08 Thread Dominik Csapak
This moves the filters out of the grid header for the BulkActions and
puts them into their own fieldset above the grid. With that, we can
easily include a tags filter (one include and one exclude list).

The filter fieldset is collapsible and shows the active filters in
parentheses. Aside from that, the filters should be the same as before.

To achieve the result, we regenerate the filterFn on every change of
every filter field, and set it with an 'id' so that only that filter is
overridden each time.

To make this work, we have to change three tiny details:
* change the counting in the 'getErrors' of the VMSelector, so that we
  actually get the count of selected VMs, not the one from the
  selectionModel
* override the plugins to '' in the BulkAction windows, so that e.g. in
  the backup window we still have the filters in the grid header
  (we could add a filter box there too, but that is already very crowded
  and would take up too much space for now)
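
The "regenerate the filterFn and set it with an 'id'" mechanism described above can be sketched outside of ExtJS. The names below are illustrative, not the actual store API: the point is only that re-setting a filter under the same id replaces that one predicate and leaves the others untouched.

```javascript
// Minimal sketch of id-keyed filters: each filter field owns one id and can
// regenerate its predicate on every change without clobbering other filters.
function createFilterSet() {
    const filters = new Map(); // id -> predicate

    return {
        setFilter(id, predicate) {
            filters.set(id, predicate); // same id overrides only that filter
        },
        clearFilter(id) {
            filters.delete(id);
        },
        apply(records) {
            return records.filter(rec =>
                [...filters.values()].every(pred => pred(rec)));
        },
    };
}

const guests = [
    { vmid: 100, type: 'qemu', status: 'running' },
    { vmid: 101, type: 'lxc', status: 'stopped' },
    { vmid: 102, type: 'lxc', status: 'running' },
];

const fs = createFilterSet();
fs.setFilter('type', rec => rec.type === 'lxc');
fs.setFilter('status', rec => rec.status === 'running');
console.log(fs.apply(guests).map(r => r.vmid)); // [102]

// regenerating the 'status' filter replaces only that predicate
fs.setFilter('status', rec => rec.status === 'stopped');
console.log(fs.apply(guests).map(r => r.vmid)); // [101]
```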

Signed-off-by: Dominik Csapak 
---
changes from v1:
* no changing of labelWidths, happens now in previous patch
* use circle tags instead of full
* correctly refresh the lxc warning in case the filter changed
* make the type and status filters not multiselect
* remove vmid filter

 www/manager6/form/VMSelector.js   |  10 +-
 www/manager6/window/BulkAction.js | 294 +-
 2 files changed, 297 insertions(+), 7 deletions(-)

diff --git a/www/manager6/form/VMSelector.js b/www/manager6/form/VMSelector.js
index d59847f2..43e91749 100644
--- a/www/manager6/form/VMSelector.js
+++ b/www/manager6/form/VMSelector.js
@@ -18,6 +18,8 @@ Ext.define('PVE.form.VMSelector', {
sorters: 'vmid',
 },
 
+userCls: 'proxmox-tags-circle',
+
 columnsDeclaration: [
{
header: 'ID',
@@ -80,6 +82,12 @@ Ext.define('PVE.form.VMSelector', {
},
},
},
+   {
+   header: gettext('Tags'),
+   dataIndex: 'tags',
+   renderer: tags => PVE.Utils.renderTags(tags, PVE.UIOptions.tagOverrides),
+   flex: 1,
+   },
{
header: 'HA ' + gettext('Status'),
dataIndex: 'hastate',
@@ -186,7 +194,7 @@ Ext.define('PVE.form.VMSelector', {
 getErrors: function(value) {
let me = this;
if (!me.isDisabled() && me.allowBlank === false &&
-   me.getSelectionModel().getCount() === 0) {
+   me.getValue().length === 0) {
me.addBodyCls(['x-form-trigger-wrap-default', 'x-form-trigger-wrap-invalid']);
return [gettext('No VM selected')];
}
diff --git a/www/manager6/window/BulkAction.js b/www/manager6/window/BulkAction.js
index 950b454d..cc561bd6 100644
--- a/www/manager6/window/BulkAction.js
+++ b/www/manager6/window/BulkAction.js
@@ -136,6 +136,289 @@ Ext.define('PVE.window.BulkAction', {
});
}
 
+   let refreshLxcWarning = function(vmids, records) {
+   let showWarning = records.some(
+   item => vmids.includes(item.data.vmid) && item.data.type === 'lxc' && item.data.status === 'running',
+   );
+   me.down('#lxcwarning').setVisible(showWarning);
+   };
+
+   let defaultStatus = me.action === 'migrateall' ? '' : me.action === 'startall' ? 'stopped' : 'running';
+
+   let statusMap = [];
+   let poolMap = [];
+   let haMap = [];
+   let tagMap = [];
+   PVE.data.ResourceStore.each((rec) => {
+   if (['qemu', 'lxc'].indexOf(rec.data.type) !== -1) {
+   statusMap[rec.data.status] = true;
+   }
+   if (rec.data.type === 'pool') {
+   poolMap[rec.data.pool] = true;
+   }
+   if (rec.data.hastate !== "") {
+   haMap[rec.data.hastate] = true;
+   }
+   if (rec.data.tags !== "") {
+   rec.data.tags.split(/[,; ]/).forEach((tag) => {
+   if (tag !== '') {
+   tagMap[tag] = true;
+   }
+   });
+   }
+   });
+
+   let statusList = Object.keys(statusMap).map(key => [key, key]);
+   statusList.unshift(['', gettext('All')]);
+   let poolList = Object.keys(poolMap).map(key => [key, key]);
+   let tagList = Object.keys(tagMap).map(key => ({ value: key }));
+   let haList = Object.keys(haMap).map(key => [key, key]);
+
+   let filterChange = function() {
+   let nameValue = me.down('#namefilter').getValue();
+   let filterCount = 0;
+
+   if (nameValue !== '') {
+   filterCount++;
+   }
+
+   let arrayFiltersData = [];
+   ['pool', 'hastate'].forEach((filter) => {
+   let selected = me.down(`#${filter}filter`).getValue() ?? [];
+   if (selected.length) {
+   filterCount++;
+   arrayFiltersData.push([filter, [...selected]]);
+   }
+   });
+
+   let singleFiltersData = [];
+  

[pve-devel] [PATCH manager 2/2] ui: add tooltips to non-full tags globally

2023-11-08 Thread Dominik Csapak
by using the delegate function of ExtJS' tooltips on the global
Workspace element and using the proper css selectors

This way, we can limit the tooltips to the non-full ones
(in contrast to using data-qtip on the element, which would
always be shown, even for tags with the 'full' style).

Signed-off-by: Dominik Csapak 
---
honestly not sure if this has any performance impact, since it has to
attach an event handler to the global object, which now has to check
for the css classes on every event...
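
The delegation idea itself is independent of ExtJS: one tooltip handler is bound to the workspace root, and it only fires when the hovered element matches one of the generated "container descendant" selectors, which is how tags with the 'full' style get excluded. A minimal stand-in (illustrative matcher, not Ext's selector engine):

```javascript
// Build the same selector list as the patch: one "<container> <tag>" pair
// per non-full tag style and per color variant.
function buildTagSelectors() {
    const selectors = [];
    for (const style of ['circle', 'dense']) {
        for (const variant of ['dark', 'light']) {
            selectors.push(`.proxmox-tags-${style} .proxmox-tag-${variant}`);
        }
    }
    return selectors;
}

// Toy delegate check: an element is described by its own classes plus the
// classes of its tag container; a selector matches when both parts match.
function delegateMatches(selectors, ancestorClasses, elementClasses) {
    return selectors.some(sel => {
        const [ancestor, descendant] = sel.split(' ').map(s => s.slice(1));
        return ancestorClasses.includes(ancestor)
            && elementClasses.includes(descendant);
    });
}

const sels = buildTagSelectors();
// a tag inside a 'circle'-styled container triggers the tooltip...
console.log(delegateMatches(sels, ['proxmox-tags-circle'], ['proxmox-tag-dark'])); // true
// ...a tag inside a 'full'-styled container does not
console.log(delegateMatches(sels, ['proxmox-tags-full'], ['proxmox-tag-dark'])); // false
```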

 www/manager6/Workspace.js | 20 
 1 file changed, 20 insertions(+)

diff --git a/www/manager6/Workspace.js b/www/manager6/Workspace.js
index 18d574b7..cb998e8a 100644
--- a/www/manager6/Workspace.js
+++ b/www/manager6/Workspace.js
@@ -526,6 +526,26 @@ Ext.define('PVE.StdWorkspace', {
modalWindows.forEach(win => win.alignTo(me, 'c-c'));
}
});
+
+   let tagSelectors = [];
+   ['circle', 'dense'].forEach((style) => {
+   ['dark', 'light'].forEach((variant) => {
+   tagSelectors.push(`.proxmox-tags-${style} .proxmox-tag-${variant}`);
+   });
+   });
+
+   Ext.create('Ext.tip.ToolTip', {
+   target: me.el,
+   delegate: tagSelectors.join(', '),
+   trackMouse: true,
+   renderTo: Ext.getBody(),
+   listeners: {
+   beforeshow: function(tip) {
+   let tag = Ext.htmlEncode(tip.triggerElement.innerHTML);
+   tip.update(tag);
+   },
+   },
+   });
 },
 });
 
-- 
2.30.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH manager 1/2] ui: resource tree: limit tooltip to icon and text

2023-11-08 Thread Dominik Csapak
and exclude the tags for that, since we want the tags to have their own
tooltips

Signed-off-by: Dominik Csapak 
---
not really sure if we want to do this, since creating a custom tree
column type just for that seems overkill. Also, we have to touch private
properties of that class to change it, which I don't really like.

 www/manager6/tree/ResourceTree.js | 87 ---
 1 file changed, 80 insertions(+), 7 deletions(-)

diff --git a/www/manager6/tree/ResourceTree.js b/www/manager6/tree/ResourceTree.js
index 54c6403d..f23f9f99 100644
--- a/www/manager6/tree/ResourceTree.js
+++ b/www/manager6/tree/ResourceTree.js
@@ -103,10 +103,15 @@ Ext.define('PVE.tree.ResourceTree', {
 },
 
 setIconCls: function(info) {
+   let me = this;
let cls = PVE.Utils.get_object_icon_class(info.type, info);
if (cls !== '') {
info.iconCls = cls;
}
+   let tip = me.getToolTip(info);
+   if (tip) {
+   info.iconAttrs = `data-qtip="${tip}"`;
+   }
 },
 
// add additional elements to text. Currently only the usage indicator for storages
@@ -130,15 +135,22 @@ Ext.define('PVE.tree.ResourceTree', {
info.text = `${info.name} (${String(info.vmid)})`;
}
}
+   info.text = status + info.text;
 
-   info.text += PVE.Utils.renderTags(info.tags, PVE.UIOptions.tagOverrides);
+   let tip = me.getToolTip(info);
+   if (tip) {
+   info.text = `${info.text}`;
+   }
 
-   info.text = status + info.text;
+   info.text += PVE.Utils.renderTags(info.tags, PVE.UIOptions.tagOverrides);
 },
 
-setToolTip: function(info) {
+getToolTip: function(info) {
+   if (info.tip) {
+   return info.tip;
+   }
if (info.type === 'pool' || info.groupbyid !== undefined) {
-   return;
+   return undefined;
}
 
let qtips = [gettext('Status') + ': ' + (info.qmpstatus || info.status)];
@@ -149,7 +161,9 @@ Ext.define('PVE.tree.ResourceTree', {
qtips.push(gettext('HA State') + ": " + info.hastate);
}
 
-   info.qtip = qtips.join(', ');
+   let tip = qtips.join(', ');
+   info.tip = tip;
+   return tip;
 },
 
 // private
@@ -158,7 +172,6 @@ Ext.define('PVE.tree.ResourceTree', {
 
me.setIconCls(info);
me.setText(info);
-   me.setToolTip(info);
 
if (info.groupbyid) {
info.text = info.groupbyid;
@@ -315,7 +328,6 @@ Ext.define('PVE.tree.ResourceTree', {
Ext.apply(info, item.data);
me.setIconCls(info);
me.setText(info);
-   me.setToolTip(info);
olditem.commit();
}
if ((!item || moved) && olditem.isLeaf()) {
@@ -471,4 +483,65 @@ Ext.define('PVE.tree.ResourceTree', {
rstore.startUpdate();
 },
 
+hideHeaders: true,
+
+columns: [
+   {
+   xtype: 'pveResourceTreeColumn',
+   text: 'Name',
+   flex: 1,
+   dataIndex: 'text',
+   },
+],
+
+});
+
+// we want to change how the icon div is rendered, so we have to subclass
+// the whole tree column class to change the cellTpl and available data
+Ext.define('PVE.tree.ResourceTreeColumn', {
+extend: 'Ext.tree.Column',
+alias: 'widget.pveResourceTreeColumn',
+
+   // copied from Ext.tree.Column source except indentation and one detail (see inline comment)
+cellTpl: [
+   '',
+   'lineempty" 
role="presentation">',
+   '',
+   '-end-plus 
{expanderCls}" role="presentation">',
+   '',
+   ' 
{checkboxCls}-checked">',
+   '',
+   '',
+   '',
+   'style="font-family:{glyphFontFamily}"',
+   '',
+   '>{glyph}',
+   '',
+   '',
+   '',
+   '',
+   ' role="presentation" class="{childCls} {baseIconCls} 
{customIconCls} ',
+   '{baseIconCls}-leafparent-expandedparent {iconCls}" ',
+   // the line below has 'iconAttrs' added to it
+   'style="background-image:url({icon})"/>{iconAttrs}>',
+   '',
+   '',
+   '{value}',
+   '',
+   '{value}',
+   '',
+],
+
+initTemplateRendererData: function(value, metaData, record, rowIdx, colIdx, store, view) {
+   let me = this;
+   let data = me.callParent(arguments);
+   data.iconAttrs = record.data.iconAttrs ?? '';
+   return data;
+},
 });
-- 
2.30.2






Re: [pve-devel] [PATCH v4 many 00/11] notifications: add SMTP endpoint

2023-11-08 Thread Dietmar Maurer
> This patch series adds support for a new notification endpoint type,
> smtp. As the name suggests, this new endpoint allows PVE to talk
> to SMTP server directly, without using the system's MTA (postfix).

Isn't this totally unreliable? What if the server responds with a 
temporary error code? (An MTA retries several times).
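
For context, the distinction the question hinges on is standardized: SMTP reply codes of the form 4yz are transient failures ("try again later"), while 5yz are permanent, so a direct-to-SMTP sender would have to queue and retry 4yz replies itself to match what an MTA does. A minimal classification sketch (not part of the series, just illustrating the code classes):

```javascript
// Classify an SMTP reply code by its first digit, per the usual SMTP
// convention: 2yz/3yz success or intermediate, 4yz transient, 5yz permanent.
function classifySmtpReply(code) {
    if (code >= 200 && code < 400) return 'ok';        // incl. intermediate 3yz
    if (code >= 400 && code < 500) return 'transient'; // e.g. 450 mailbox busy
    if (code >= 500 && code < 600) return 'permanent'; // e.g. 550 no such user
    return 'invalid';
}

console.log(classifySmtpReply(250)); // 'ok'
console.log(classifySmtpReply(450)); // 'transient' -> would need queue + retry
console.log(classifySmtpReply(550)); // 'permanent' -> bounce/notify
```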





Re: [pve-devel] [RFC pve-network 2/6] always generate dnsmasq ethers file

2023-11-08 Thread DERUMIER, Alexandre
Personally, I really think that we shouldn't regenerate the whole
dhcp config (reading the full ipam db) each time we allocate a single
ip.


With an external ipam, that means one API call for each subnet, which can be
really slow.
(for example, I have 400 subnets in production)

And this is only done on the local node, so we'd have to regenerate on other
nodes if we migrate a VM, restore a backup, or HA moves a VM.



I think that the mac-ip mapping should be added at VM start;
it's really simpler and lighter.





-------- Original message --------
From: Stefan Lendl 
Reply-To: Proxmox VE development discussion 
To: pve-devel@lists.proxmox.com
Subject: [pve-devel] [RFC pve-network 2/6] always generate dnsmasq ethers file
Date: 27/10/2023 13:29:56

Makes dnsmasq stateless, as its configuration can be generated from the IPAM.
On dhcp_add_ip, always regenerate the entire ethers file.
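
The "stateless" idea is that the ethers file becomes a pure function of the IPAM state, so it can be rebuilt in full at any time instead of being patched incrementally. The real implementation is the Perl patch below; this sketch (hypothetical data shape mirroring the per-subnet 'ips' map) only illustrates the derivation, using dnsmasq's simple "mac ip" line format:

```javascript
// Render an ethers-style file from an IPAM subnet's ip map.
// ipamIps: { ip -> { mac, ... } }; entries without a mac (e.g. the
// gateway) produce no reservation line.
function renderEthers(ipamIps) {
    return Object.entries(ipamIps)
        .filter(([, entry]) => entry.mac)
        .map(([ip, entry]) => `${entry.mac} ${ip}`)
        .sort() // deterministic output regardless of db iteration order
        .join('\n');
}

const ips = {
    '10.0.0.10': { mac: 'BC:24:11:00:00:01', vmid: 100 },
    '10.0.0.11': { mac: 'BC:24:11:00:00:02', vmid: 101 },
    '10.0.0.1': { gateway: 1 }, // no mac -> not a reservation
};
console.log(renderEthers(ips));
// BC:24:11:00:00:01 10.0.0.10
// BC:24:11:00:00:02 10.0.0.11
```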

Signed-off-by: Stefan Lendl 
---
 src/PVE/Network/SDN/Dhcp.pm | 8 +++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/src/PVE/Network/SDN/Dhcp.pm b/src/PVE/Network/SDN/Dhcp.pm
index b92c73a..b854cce 100644
--- a/src/PVE/Network/SDN/Dhcp.pm
+++ b/src/PVE/Network/SDN/Dhcp.pm
@@ -87,8 +87,14 @@ sub add_mapping {
 
        next if !$ip;
 
+       # generates ethers file every time
+       my $ipam_db = $ipam_plugin->read_db();
+       my $dbzone = $ipam_db->{zones}->{$subnet_config->{zone}};
+       my $dbsubnet = $dbzone->{subnets}->{$subnet_config->{cidr}};
+       my $dbsubnet_ips = $dbsubnet->{ips};
+
        my $dhcp_plugin = PVE::Network::SDN::Dhcp::Plugin->lookup($dhcp_config->{type});
-       $dhcp_plugin->add_ip_mapping($dhcp_config, $mac, $ip);
+       $dhcp_plugin->generate_config($dhcp_config, $dbsubnet_ips);
 
        return $ip;
    }



Re: [pve-devel] [RFC qemu-server 6/6] DHCP mappings on vNIC add/remove

2023-11-08 Thread DERUMIER, Alexandre
Following up from my previous mail:
here I'd allocate only in the ipam, but not generate the dhcp config.


-------- Original message --------
From: Stefan Lendl 
Reply-To: Proxmox VE development discussion 
To: pve-devel@lists.proxmox.com
Subject: [pve-devel] [RFC qemu-server 6/6] DHCP mappings on vNIC add/remove
Date: 27/10/2023 13:30:00

add DHCP mapping on vNIC add/update and VM clone (new mac)
remove DHCP mapping on vNIC delete and VM destroy

Signed-off-by: Stefan Lendl 
---
 PVE/API2/Qemu.pm  | 25 +
 PVE/QemuServer.pm |  2 ++
 2 files changed, 27 insertions(+)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 38bdaab..1b16fa5 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -49,6 +49,8 @@ use PVE::SSHInfo;
 use PVE::Replication;
 use PVE::StorageTunnel;
 
+use PVE::Network::SDN;
+
 BEGIN {
 if (!$ENV{PVE_GENERATING_DOCS}) {
    require PVE::HA::Env::PVE2;
@@ -1804,6 +1806,9 @@ my $update_vm_api  = sub {
        }
        PVE::QemuConfig->add_to_pending_delete($conf,
$opt, $force);
        PVE::QemuConfig->write_config($vmid, $conf);
+
+       my $net = PVE::QemuServer::parse_net($conf-
>{$opt});
+       PVE::Network::SDN::Dhcp::remove_mapping($net-
>{bridge}, $net->{macaddr});
    } else {
        PVE::QemuConfig->add_to_pending_delete($conf,
$opt, $force);
        PVE::QemuConfig->write_config($vmid, $conf);
@@ -1881,6 +1886,18 @@ my $update_vm_api  = sub {
    );
        }
        $conf->{pending}->{$opt} = $param->{$opt};
+
+       my $new_net = PVE::QemuServer::parse_net($param->{$opt});
+       if (exists $conf->{$opt}) {
+   my $old_net = PVE::QemuServer::parse_net($conf->{$opt});
+   if ($old_net->{bridge} ne $new_net->{bridge} or
+       $old_net->{macaddr} ne $new_net->{macaddr}) {
+       print "Bridge or MAC changed: $conf->{$opt} -> $param->{$opt}\n";
+       PVE::Network::SDN::Dhcp::remove_mapping($old_net->{bridge}, $old_net->{macaddr});
+   }
+       }
+       PVE::Network::SDN::Dhcp::add_mapping($vmid, $new_net->{bridge}, $new_net->{macaddr});
+
    } else {
        $conf->{pending}->{$opt} = $param->{$opt};
 
@@ -3763,6 +3780,14 @@ __PACKAGE__->register_method({
 
    PVE::QemuConfig->write_config($newid, $newconf);
 
+   foreach my $opt (keys %$newconf) {
+       if ($opt =~ m/^net(\d+)$/) {
+   my $value = $newconf->{$opt};
+   my $net = PVE::QemuServer::parse_net($value);
+   PVE::Network::SDN::Dhcp::add_mapping($newid, $net->{bridge}, $net->{macaddr});
+       }
+   }
+
    if ($target) {
        # always deactivate volumes - avoid lvm LVs to be active on several nodes
        PVE::Storage::deactivate_volumes($storecfg, $vollist, $snapname) if !$running;
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 710259b..8a63ec3 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -2337,6 +2337,8 @@ sub destroy_vm {
    });
 }
 
+    cleanup_sdn_dhcp($vmid, $conf);
+
 if (defined $replacement_conf) {
    PVE::QemuConfig->write_config($vmid, $replacement_conf);
 } else {



Re: [pve-devel] [WIP v2 pve-network 05/10] dhcp: add DHCP plugin for dnsmasq

2023-11-08 Thread DERUMIER, Alexandre
>>/etc/default/dnsmasq.
>>This file specifies the configuration directory for the dnsmasq
>>instance (/etc/dnsmasq.d/). It also sets the configuration
>>file to /dev/null so the default configuration from the package has
>>no influence on the dnsmasq configuration.
>>
>>/etc/dnsmasq.d//00-default.conf



About ,  

I think we should have one dhcp_id per zone (or it could even be the zone
name).

As we can have multiple zones with the same subnet, but in different VRFs.

(this also needs a different dnsmasq process in a different vrf)



Also, currently, I'm not sure why we need to define the dhcp in
/etc/pve/sdn/dhcp.cfg ?

couldn't we simply add something like : "dhcp:1"  or "dhcp:dnsmasq"  on
the zone ?






[pve-devel] [WIP pve-network 2/3] dhcp : add|del_ip_mapping: only add|del dhcp reservation

2023-11-08 Thread Alexandre Derumier
don't try to add|del ip from ipam here

Signed-off-by: Alexandre Derumier 
---
 src/PVE/Network/SDN/Dhcp.pm | 75 +
 1 file changed, 18 insertions(+), 57 deletions(-)

diff --git a/src/PVE/Network/SDN/Dhcp.pm b/src/PVE/Network/SDN/Dhcp.pm
index e4c4078..1c32fec 100644
--- a/src/PVE/Network/SDN/Dhcp.pm
+++ b/src/PVE/Network/SDN/Dhcp.pm
@@ -21,72 +21,33 @@ PVE::Network::SDN::Dhcp::Dnsmasq->register();
 PVE::Network::SDN::Dhcp::Dnsmasq->init();
 
 sub add_mapping {
-my ($vmid, $vnet, $mac) = @_;
+my ($vmid, $vnetid, $mac, $ip) = @_;
 
-my $vnet_config = PVE::Network::SDN::Vnets::get_vnet($vnet, 1);
-return if !$vnet_config;
+my $subnets = PVE::Network::SDN::Vnets::get_subnets($vnetid, 1);
+my $vnet = PVE::Network::SDN::Vnets::get_vnet($vnetid);
+my $zoneid = $vnet->{zone};
+my $zone = PVE::Network::SDN::Zones::get_zone($zoneid);
 
-my $subnets = PVE::Network::SDN::Vnets::get_subnets($vnet, 1);
+my $dhcptype = $zone->{dhcp};
+return if !$dhcptype;
 
-for my $subnet_id (keys %{$subnets}) {
-   my $subnet_config = $subnets->{$subnet_id};
-
-   next if !$subnet_config->{'dhcp-range'};
-
-   foreach my $dhcp_range (@{$subnet_config->{'dhcp-range'}}) {
-   my $dhcp_config = PVE::Network::SDN::Dhcp::get_dhcp($dhcp_range->{server});
-
-   if (!$dhcp_config) {
-   warn "Cannot find configuration for DHCP server $dhcp_range->{server}";
-   next;
-   }
-
-   my $ipam_plugin = PVE::Network::SDN::Ipams::Plugin->lookup('pve');
-
-   my $data = {
-   vmid => $vmid,
-   mac => $mac,
-   };
-
-   my $ip = $ipam_plugin->add_dhcp_ip($subnet_config, $dhcp_range, $data);
-
-   next if !$ip;
-
-   my $dhcp_plugin = PVE::Network::SDN::Dhcp::Plugin->lookup($dhcp_config->{type});
-   $dhcp_plugin->add_ip_mapping($dhcp_config, $mac, $ip);
-
-   return $ip;
-   }
-}
+my $dhcp_plugin = PVE::Network::SDN::Dhcp::Plugin->lookup($dhcptype);
+$dhcp_plugin->add_ip_mapping($zoneid, $mac, $ip);
 }
 
 sub remove_mapping {
-my ($vnet, $mac) = @_;
-
-my $vnet_config = PVE::Network::SDN::Vnets::get_vnet($vnet, 1);
-return if !$vnet_config;
-
-my $subnets = PVE::Network::SDN::Vnets::get_subnets($vnet, 1);
-
-for my $subnet_id (keys %{$subnets}) {
-   my $subnet_config = $subnets->{$subnet_id};
-   next if !$subnet_config->{'dhcp-range'};
+my ($vnetid, $mac) = @_;
 
-   my $ipam_plugin = PVE::Network::SDN::Ipams::Plugin->lookup('pve');
-   $ipam_plugin->del_dhcp_ip($subnet_config, $mac);
+my $subnets = PVE::Network::SDN::Vnets::get_subnets($vnetid, 1);
+my $vnet = PVE::Network::SDN::Vnets::get_vnet($vnetid);
+my $zoneid = $vnet->{zone};
+my $zone = PVE::Network::SDN::Zones::get_zone($zoneid);
 
-   foreach my $dhcp_range (@{$subnet_config->{'dhcp-range'}}) {
-   my $dhcp_config = PVE::Network::SDN::Dhcp::get_dhcp($dhcp_range->{server});
+my $dhcptype = $zone->{dhcp};
+return if !$dhcptype;
 
-   if (!$dhcp_config) {
-   warn "Cannot find configuration for DHCP server $dhcp_range->{server}";
-   next;
-   }
-
-   my $dhcp_plugin = PVE::Network::SDN::Dhcp::Plugin->lookup($dhcp_config->{type});
-   $dhcp_plugin->del_ip_mapping($dhcp_config, $mac);
-   }
-}
+my $dhcp_plugin = PVE::Network::SDN::Dhcp::Plugin->lookup($dhcptype);
+$dhcp_plugin->del_ip_mapping($zoneid, $mac);
 }
 
 sub regenerate_config {
-- 
2.39.2





[pve-devel] [WIP pve-network 1/3] define dhcpplugin in zone

2023-11-08 Thread Alexandre Derumier
simple: zone1
ipam pve
dhcp dnsmasq

simple: zone2
ipam pve
dhcp dnsmasq

This generates one dhcp server per zone/vrf.

It doesn't use dhcp.cfg anymore.
It reuses the node filtering from the zone.

The same subnets in 2 different zones can't use the
same dhcp server.

Signed-off-by: Alexandre Derumier 
---
 src/PVE/API2/Network/SDN/Zones.pm |  1 +
 src/PVE/Network/SDN.pm|  4 +-
 src/PVE/Network/SDN/Dhcp.pm   | 91 +++
 src/PVE/Network/SDN/Dhcp/Dnsmasq.pm   | 32 
 src/PVE/Network/SDN/Dhcp/Plugin.pm| 28 ++-
 src/PVE/Network/SDN/SubnetPlugin.pm   |  4 -
 src/PVE/Network/SDN/Zones/SimplePlugin.pm |  7 +-
 7 files changed, 54 insertions(+), 113 deletions(-)

diff --git a/src/PVE/API2/Network/SDN/Zones.pm b/src/PVE/API2/Network/SDN/Zones.pm
index 4c8b7e1..1c3356e 100644
--- a/src/PVE/API2/Network/SDN/Zones.pm
+++ b/src/PVE/API2/Network/SDN/Zones.pm
@@ -99,6 +99,7 @@ __PACKAGE__->register_method ({
reversedns => { type => 'string', optional => 1},
dnszone => { type => 'string', optional => 1},
ipam => { type => 'string', optional => 1},
+   dhcp => { type => 'string', optional => 1},
pending => { optional => 1},
state => { type => 'string', optional => 1},
nodes => { type => 'string', optional => 1},
diff --git a/src/PVE/Network/SDN.pm b/src/PVE/Network/SDN.pm
index 5c059bc..c306527 100644
--- a/src/PVE/Network/SDN.pm
+++ b/src/PVE/Network/SDN.pm
@@ -150,15 +150,13 @@ sub commit_config {
 my $zones_cfg = PVE::Network::SDN::Zones::config();
 my $controllers_cfg = PVE::Network::SDN::Controllers::config();
 my $subnets_cfg = PVE::Network::SDN::Subnets::config();
-my $dhcp_cfg = PVE::Network::SDN::Dhcp::config();
 
 my $vnets = { ids => $vnets_cfg->{ids} };
 my $zones = { ids => $zones_cfg->{ids} };
 my $controllers = { ids => $controllers_cfg->{ids} };
 my $subnets = { ids => $subnets_cfg->{ids} };
-my $dhcp = { ids => $dhcp_cfg->{ids} };
 
-$cfg = { version => $version, vnets => $vnets, zones => $zones, controllers => $controllers, subnets => $subnets, dhcps => $dhcp };
+$cfg = { version => $version, vnets => $vnets, zones => $zones, controllers => $controllers, subnets => $subnets };
 
 cfs_write_file($running_cfg, $cfg);
 }
diff --git a/src/PVE/Network/SDN/Dhcp.pm b/src/PVE/Network/SDN/Dhcp.pm
index b92c73a..e4c4078 100644
--- a/src/PVE/Network/SDN/Dhcp.pm
+++ b/src/PVE/Network/SDN/Dhcp.pm
@@ -20,41 +20,6 @@ PVE::Network::SDN::Dhcp::Plugin->init();
 PVE::Network::SDN::Dhcp::Dnsmasq->register();
 PVE::Network::SDN::Dhcp::Dnsmasq->init();
 
-sub config {
-my ($running) = @_;
-
-if ($running) {
-   my $cfg = PVE::Network::SDN::running_config();
-   return $cfg->{dhcps};
-}
-
-return cfs_read_file('sdn/dhcp.cfg');
-}
-
-sub sdn_dhcps_config {
-my ($cfg, $id, $noerr) = @_;
-
-die "No DHCP ID specified!\n" if !$id;
-
-my $dhcp_config = $cfg->{ids}->{$id};
-die "SDN DHCP '$id' does not exist!\n" if (!$noerr && !$dhcp_config);
-
-if ($dhcp_config) {
-   $dhcp_config->{id} = $id;
-}
-
-return $dhcp_config;
-}
-
-sub get_dhcp {
-my ($dhcp_id, $running) = @_;
-
-return if !$dhcp_id;
-
-my $cfg = PVE::Network::SDN::Dhcp::config($running);
-return PVE::Network::SDN::Dhcp::sdn_dhcps_config($cfg, $dhcp_id, 1);
-}
-
 sub add_mapping {
 my ($vmid, $vnet, $mac) = @_;
 
@@ -127,58 +92,52 @@ sub remove_mapping {
 sub regenerate_config {
 my ($reload) = @_;
 
-my $dhcps = PVE::Network::SDN::Dhcp::config();
-my $subnets = PVE::Network::SDN::Subnets::config();
+my $cfg = PVE::Network::SDN::running_config();
 
-my $plugins = PVE::Network::SDN::Dhcp::Plugin->lookup_types();
+my $zone_cfg = $cfg->{zones};
+my $subnet_cfg = $cfg->{subnets};
+return if !$zone_cfg && !$subnet_cfg;
 
 my $nodename = PVE::INotify::nodename();
 
+my $plugins = PVE::Network::SDN::Dhcp::Plugin->lookup_types();
+
 foreach my $plugin_name (@$plugins) {
my $plugin = PVE::Network::SDN::Dhcp::Plugin->lookup($plugin_name);
-
eval { $plugin->before_regenerate() };
	die "Could not run before_regenerate for DHCP plugin $plugin_name $@\n" if $@;
 }
 
-foreach my $dhcp_id (keys %{$dhcps->{ids}}) {
-   my $dhcp_config = PVE::Network::SDN::Dhcp::sdn_dhcps_config($dhcps, $dhcp_id);
-   my $plugin = PVE::Network::SDN::Dhcp::Plugin->lookup($dhcp_config->{type});
+foreach my $zoneid (sort keys %{$zone_cfg->{ids}}) {
+my $zone = $zone_cfg->{ids}->{$zoneid};
+next if defined($zone->{nodes}) && !$zone->{nodes}->{$nodename};
 
-   eval { $plugin->before_configure($dhcp_config) };
-   die "Could not run before_configure for DHCP server $dhcp_id $@\n" if $@;
-}
+   m

[pve-devel] [WIP pve-network 3/3] vnet|subnet: add_next_free_ip : implement dhcprange ipam search

2023-11-08 Thread Alexandre Derumier
Signed-off-by: Alexandre Derumier 
---
 src/PVE/Network/SDN/Ipams/PVEPlugin.pm | 12 ++--
 src/PVE/Network/SDN/Ipams/Plugin.pm|  7 +++
 src/PVE/Network/SDN/Subnets.pm | 22 +++---
 src/PVE/Network/SDN/Vnets.pm   |  4 ++--
 4 files changed, 34 insertions(+), 11 deletions(-)

diff --git a/src/PVE/Network/SDN/Ipams/PVEPlugin.pm b/src/PVE/Network/SDN/Ipams/PVEPlugin.pm
index fcc8282..9fff52a 100644
--- a/src/PVE/Network/SDN/Ipams/PVEPlugin.pm
+++ b/src/PVE/Network/SDN/Ipams/PVEPlugin.pm
@@ -110,7 +110,7 @@ sub update_ip {
 }
 
 sub add_next_freeip {
-my ($class, $plugin_config, $subnetid, $subnet, $hostname, $mac, $description) = @_;
+my ($class, $plugin_config, $subnetid, $subnet, $hostname, $mac, $description, $noerr) = @_;
 
 my $cidr = $subnet->{cidr};
 my $network = $subnet->{network};
@@ -156,8 +156,8 @@ sub add_next_freeip {
 return "$freeip/$mask";
 }
 
-sub add_dhcp_ip {
-my ($class, $subnet, $dhcp_range, $data) = @_;
+sub add_range_next_freeip {
+my ($class, $subnet, $range, $data) = @_;
 
 my $cidr = $subnet->{cidr};
 my $zone = $subnet->{zone};
@@ -171,8 +171,8 @@ sub add_dhcp_ip {
my $dbsubnet = $dbzone->{subnets}->{$cidr};
die "subnet '$cidr' doesn't exist in IPAM DB\n" if !$dbsubnet;
 
-   my $ip = new Net::IP ("$dhcp_range->{'start-address'} - $dhcp_range->{'end-address'}")
-   or die "Invalid IP address(es) in DHCP Range!\n";
+   my $ip = new Net::IP ("$range->{'start-address'} - $range->{'end-address'}")
+   or die "Invalid IP address(es) in Range!\n";
 
do {
my $ip_address = $ip->ip();
@@ -184,7 +184,7 @@ sub add_dhcp_ip {
}
} while (++$ip);
 
-   die "No free IP left in DHCP Range $dhcp_range->{'start-address'}:$dhcp_range->{'end-address'}}\n";
+   die "No free IP left in Range $range->{'start-address'}:$range->{'end-address'}}\n";
 });
 }
 
diff --git a/src/PVE/Network/SDN/Ipams/Plugin.pm b/src/PVE/Network/SDN/Ipams/Plugin.pm
index c96eeda..8d69be4 100644
--- a/src/PVE/Network/SDN/Ipams/Plugin.pm
+++ b/src/PVE/Network/SDN/Ipams/Plugin.pm
@@ -98,6 +98,13 @@ sub add_next_freeip {
 die "please implement inside plugin";
 }
 
+
+sub add_range_next_freeip {
+my ($class, $subnet, $range, $data) = @_;
+
+die "please implement inside plugin";
+}
+
 sub del_ip {
 my ($class, $plugin_config, $subnetid, $subnet, $ip, $noerr) = @_;
 
diff --git a/src/PVE/Network/SDN/Subnets.pm b/src/PVE/Network/SDN/Subnets.pm
index dd9e697..42b84ab 100644
--- a/src/PVE/Network/SDN/Subnets.pm
+++ b/src/PVE/Network/SDN/Subnets.pm
 use PVE::Cluster qw(cfs_read_file cfs_write_file cfs_lock_file);
 use PVE::JSONSchema qw(parse_property_string);
 use PVE::Network::SDN::Dns;
 use PVE::Network::SDN::Ipams;
+use PVE::Network::SDN::Zones;
 
 use PVE::Network::SDN::SubnetPlugin;
 PVE::Network::SDN::SubnetPlugin->register();
@@ -203,7 +204,7 @@ sub del_subnet {
 }
 
 sub next_free_ip {
-my ($zone, $subnetid, $subnet, $hostname, $mac, $description, $skipdns) = @_;
+my ($zone, $subnetid, $subnet, $hostname, $mac, $description, $skipdns, $dhcprange) = @_;
 
 my $cidr = undef;
 my $ip = undef;
@@ -225,9 +226,24 @@ sub next_free_ip {
my $plugin_config = $ipam_cfg->{ids}->{$ipamid};
	my $plugin = PVE::Network::SDN::Ipams::Plugin->lookup($plugin_config->{type});
	eval {
-   $cidr = $plugin->add_next_freeip($plugin_config, $subnetid, $subnet, $hostname, $mac, $description);
-   ($ip, undef) = split(/\//, $cidr);
+   if ($dhcprange) {
+   my ($zoneid, $subnet_network, $subnet_mask) = split(/-/, $subnetid);
+   my $zone = PVE::Network::SDN::Zones::get_zone($zoneid);
+
+   my $data = {
+   mac => $mac,
+   };
+   
+   foreach my $range (@{$subnet->{'dhcp-range'}}) { 
+   $ip = $plugin->add_range_next_freeip($subnet, $range, $data);
+   next if !$ip;
+   }
+   } else {
+   $cidr = $plugin->add_next_freeip($plugin_config, $subnetid, $subnet, $hostname, $mac, $description);
+   ($ip, undef) = split(/\//, $cidr);
+   }
};
+
die $@ if $@;
 }
 
diff --git a/src/PVE/Network/SDN/Vnets.pm b/src/PVE/Network/SDN/Vnets.pm
index 39bdda0..4ba3523 100644
--- a/src/PVE/Network/SDN/Vnets.pm
+++ b/src/PVE/Network/SDN/Vnets.pm
@@ -97,7 +97,7 @@ sub get_subnet_from_vnet_cidr {
 }
 
 sub get_next_free_cidr {
-my ($vnetid, $hostname, $mac, $description, $ipversion, $skipdns) = @_;
+my ($vnetid, $hostname, $mac, $description, $ipversion, $skipdns, $dhcprange) = @_;
 
 my $vnet = PVE::Network::SDN::Vnets::get_vnet($vnetid);
 my $zoneid = $vnet->{zone};
@@ -118,7 +118,7 @@ sub get_next_free_cidr {
$subnetcount++;
 
eval {
-   $ip = PVE::Network::SDN::Subnets

[pve-devel] [WIP pve-network 0/3] dhcp changes

2023-11-08 Thread Alexandre Derumier
Here is my current work, based on Stefan Hanreich's WIP v2.

It's not yet complete; I'll try to send a more complete series next week.

- patch 1: remove the need for dhcp.cfg, and declare the dhcp plugin type in the
  zone directly. Each zone is a dhcp server, which makes sense, as 2 zones can
  have the same subnets in different VRFs.

- patches 2/3: need more patches, but my idea is to split the ipam add|del ip and
  the dhcp server ip-mac reservation file generation.

  at vm|nic create: find a free ip in the dhcp range and register the ip/mac in the ipam
  at vm start: look up the ip for the mac in the ipam, and add it to the local dhcp
  reservation file

  also reuse the current vnet add_next_free_ip, as it's already plugged into dns too.
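
The proposed two-phase split can be sketched as follows. All names and the toy addressing are hypothetical, not the actual Perl API: allocation at create time touches only the IPAM, and the per-node dhcp reservation is derived from the IPAM at VM start.

```javascript
// Phase 1 (vm/nic create): scan the dhcp range for the first address not
// yet present in the IPAM db and register the mac against it.
function allocateInRange(ipam, range, mac) {
    for (let n = range.start; n <= range.end; n++) {
        const ip = `10.0.0.${n}`; // toy addressing for the sketch
        if (!ipam[ip]) {
            ipam[ip] = { mac }; // register in the IPAM only, no dhcp config yet
            return ip;
        }
    }
    throw new Error('No free IP left in range');
}

// Phase 2 (vm start): look the ip up by mac in the IPAM and emit the
// local dhcp reservation line for this node.
function reservationForMac(ipam, mac) {
    const hit = Object.entries(ipam).find(([, e]) => e.mac === mac);
    return hit ? `${mac} ${hit[0]}` : null;
}

const ipam = { '10.0.0.10': { mac: 'BC:24:11:00:00:01' } };
allocateInRange(ipam, { start: 10, end: 20 }, 'BC:24:11:00:00:02'); // -> '10.0.0.11'
console.log(reservationForMac(ipam, 'BC:24:11:00:00:02')); // 'BC:24:11:00:00:02 10.0.0.11'
```

This keeps the per-allocation cost down to one IPAM lookup, and a node that receives a migrated VM can rebuild its reservation from the IPAM alone.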





Alexandre Derumier (3):
  define dhcpplugin in zone
  dhcp : add|del_ip_mapping: only add|del dhcp reservation
  vnet|subnet: add_next_free_ip : implement dhcprange ipam search

 src/PVE/API2/Network/SDN/Zones.pm |   1 +
 src/PVE/Network/SDN.pm|   4 +-
 src/PVE/Network/SDN/Dhcp.pm   | 166 ++
 src/PVE/Network/SDN/Dhcp/Dnsmasq.pm   |  32 ++---
 src/PVE/Network/SDN/Dhcp/Plugin.pm|  28 +---
 src/PVE/Network/SDN/Ipams/PVEPlugin.pm|  12 +-
 src/PVE/Network/SDN/Ipams/Plugin.pm   |   7 +
 src/PVE/Network/SDN/SubnetPlugin.pm   |   4 -
 src/PVE/Network/SDN/Subnets.pm|  22 ++-
 src/PVE/Network/SDN/Vnets.pm  |   4 +-
 src/PVE/Network/SDN/Zones/SimplePlugin.pm |   7 +-
 11 files changed, 106 insertions(+), 181 deletions(-)

-- 
2.39.2

