[pve-devel] [PATCH qemu-server 1/4] usb: fix undef error on string match

2024-03-18 Thread Dominik Csapak
'$entry->{host}' can be empty, so we have to check for that before
doing a regex check, otherwise we get ugly 'uninitialized value'
warnings in the log

Signed-off-by: Dominik Csapak 
---
 PVE/QemuServer.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 8d0ed22c..6e2c8052 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -2597,7 +2597,7 @@ sub check_local_resources {
 foreach my $k (keys %$conf) {
if ($k =~ m/^usb/) {
my $entry = parse_property_string('pve-qm-usb', $conf->{$k});
-   next if $entry->{host} =~ m/^spice$/i;
+   next if $entry->{host} && $entry->{host} =~ m/^spice$/i;
if ($entry->{mapping}) {
$add_missing_mapping->('usb', $k, $entry->{mapping});
push @$mapped_res, $k;
-- 
2.39.2
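As a minimal sketch (with hypothetical data, not the real parsed config) of the warning class this patch avoids: matching a regex against an undef value emits "Use of uninitialized value in pattern match" under `use warnings`, and the `&&` guard short-circuits before the regex runs.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# hypothetical parsed usb property string that has no 'host' key
my $entry = {};

# old form: would warn, because $entry->{host} is undef
# next if $entry->{host} =~ m/^spice$/i;

# guarded form from the patch: && short-circuits before the regex runs
if ($entry->{host} && $entry->{host} =~ m/^spice$/i) {
    print "spice usb device, skipping\n";
} else {
    print "not a spice device\n";
}
```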



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH qemu-server 2/4] pci: set 'enable-migration' to on for live-migration marked mapped devices

2024-03-18 Thread Dominik Csapak
the default is 'auto', but for those which are marked as capable of
live migration, we want to explicitly enable it, so we get an early
error on start if the driver does not support that.

Signed-off-by: Dominik Csapak 
---
 PVE/QemuServer/PCI.pm | 8 +++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/PVE/QemuServer/PCI.pm b/PVE/QemuServer/PCI.pm
index 1673041b..402b4e7a 100644
--- a/PVE/QemuServer/PCI.pm
+++ b/PVE/QemuServer/PCI.pm
@@ -429,9 +429,11 @@ sub parse_hostpci {
 
 if ($mapping) {
# we have no ordinary pci id, must be a mapping
-   my $devices = PVE::Mapping::PCI::find_on_current_node($mapping);
+   my ($devices, $config) = PVE::Mapping::PCI::find_on_current_node($mapping);
die "PCI device mapping not found for '$mapping'\n" if !$devices || !scalar($devices->@*);
 
+   $res->{'live-migration-capable'} = 1 if $config->{'live-migration-capable'};
+
for my $device ($devices->@*) {
eval { PVE::Mapping::PCI::assert_valid($mapping, $device) };
die "PCI device mapping invalid (hardware probably changed): $@\n" 
if $@;
@@ -632,6 +634,10 @@ sub print_hostpci_devices {
$devicestr .= ",host=$pcidevice->{id}";
}
 
+   if ($d->{'live-migration-capable'}) {
+   $devicestr .= ",enable-migration=on";
+   }
+
my $mf_addr = $multifunction ? ".$j" : '';
$devicestr .= ",id=${id}${mf_addr}${pciaddr}${mf_addr}";
 
-- 
2.39.2






[pve-devel] [PATCH docs 2/2] qm: resource mapping: document `live-migration-capable` setting

2024-03-18 Thread Dominik Csapak
Signed-off-by: Dominik Csapak 
---
 qm.adoc | 6 ++
 1 file changed, 6 insertions(+)

diff --git a/qm.adoc b/qm.adoc
index c146ce9..c77cb7b 100644
--- a/qm.adoc
+++ b/qm.adoc
@@ -1746,6 +1746,12 @@ Currently there are the following options:
  the mapping, the mediated device will be created on the first one where
   there are any available instances of the selected type.
 
+* `live-migration-capable` (PCI): This marks the device as being capable of
+  being live-migrated between nodes. This requires driver and hardware support.
+  Only NVIDIA GPUs with a recent kernel and driver are known to support this.
+  Note that live-migrating passed-through devices is an experimental feature
+  and may not work or may cause issues.
+
 
 Managing Virtual Machines with `qm`
 
-- 
2.39.2






[pve-devel] [PATCH qemu-server 4/4] api: enable live migration for marked mapped pci devices

2024-03-18 Thread Dominik Csapak
They have to be marked as 'live-migration-capable' in the mapping
config, and the driver and qemu must support it.

For the gui checks, we now return a list of 'mapped-with-live-migration'
entries in the migration preflight api call too.

Signed-off-by: Dominik Csapak 
---
 PVE/API2/Qemu.pm   |  5 +
 PVE/QemuMigrate.pm | 12 
 2 files changed, 13 insertions(+), 4 deletions(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 4ecaeb91..8581a529 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -4453,6 +4453,10 @@ __PACKAGE__->register_method({
type => 'array',
description => "List of mapped resources e.g. pci, usb"
},
+   'mapped-with-live-migration' => {
+   type => 'array',
+   description => "List of mapped resources that are marked as 
capable of live-migration",
+   },
},
 },
 code => sub {
@@ -4517,6 +4521,7 @@ __PACKAGE__->register_method({
 
$res->{local_resources} = $local_resources;
$res->{'mapped-resources'} =  [ map { "$_->{key}" } $mapped_resources->@* ];
+   $res->{'mapped-with-live-migration'} =  [ map { $_->{'live-migration'} ? "$_->{key}" : () } $mapped_resources->@* ];
 
return $res;
 
diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index 6fe8157d..b3570770 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -246,11 +246,15 @@ sub prepare {
 
 if (scalar($mapped_res->@*)) {
my $missing_mappings = $missing_mappings_by_node->{$self->{node}};
-   my $mapped_text = join(", ", map { $_->{key} } $mapped_res->@*);
-   if ($running) {
-   die "can't migrate running VM which uses mapped devices: " . 
$mapped_text . "\n";
-   } elsif (scalar($missing_mappings->@*)) {
+   my $missing_live_mappings = [];
+   for my $res ($mapped_res->@*) {
+   my $name = "$res->{key}:$res->{device}";
+   push $missing_live_mappings->@*, $name if !$res->{'live-migration'};
+   }
+   if (scalar($missing_mappings->@*)) {
die "can't migrate to '$self->{node}': missing mapped devices " . 
join(", ", $missing_mappings->@*) . "\n";
+   } elsif ($running && scalar($missing_live_mappings->@*)) {
+   die "can't live migrate running VM which uses following mapped 
devices: " . join(", ", $missing_live_mappings->@*) . "\n";
} else {
$self->log('info', "migrating VM which uses mapped local devices");
}
-- 
2.39.2
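The precondition logic this patch adds to prepare() can be sketched in isolation with made-up sample data: collect all mapped devices that are not flagged as live-migration capable, and refuse a live migration of a running VM if any remain.

```perl
use strict;
use warnings;

# hypothetical sample data mirroring the check_local_resources() entries
my $running = 1;
my $mapped_res = [
    { key => 'hostpci0', device => 'gpu0', 'live-migration' => 1 },
    { key => 'hostpci1', device => 'nic0' },    # not live-migration capable
];

# collect every mapped device that may not be live-migrated
my $missing_live_mappings = [];
for my $res ($mapped_res->@*) {
    my $name = "$res->{key}:$res->{device}";
    push $missing_live_mappings->@*, $name if !$res->{'live-migration'};
}

# a running VM is only allowed to migrate if that list is empty
if ($running && scalar($missing_live_mappings->@*)) {
    print "blocked: " . join(", ", $missing_live_mappings->@*) . "\n";
    # prints: blocked: hostpci1:nic0
}
```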






[pve-devel] [PATCH docs 1/2] qm: resource mapping: add description for `mdev` option

2024-03-18 Thread Dominik Csapak
in a new section about additional options

Signed-off-by: Dominik Csapak 
---
 qm.adoc | 13 +
 1 file changed, 13 insertions(+)

diff --git a/qm.adoc b/qm.adoc
index 1170dd1..c146ce9 100644
--- a/qm.adoc
+++ b/qm.adoc
@@ -1734,6 +1734,19 @@ To create mappings `Mapping.Modify` on `/mapping//` is necessary
 To use these mappings, `Mapping.Use` on `/mapping//` is necessary
 (in addition to the normal guest privileges to edit the configuration).
 
+Additional Options
+^^^^^^^^^^^^^^^^^^
+
+There are additional options when defining a cluster wide resource mapping.
+Currently there are the following options:
+
+* `mdev` (PCI): This marks the PCI device as being capable of providing
+  `mediated devices`. When this is enabled, you can select a type when
+  configuring it on the guest. If multiple PCI devices are selected for
+  the mapping, the mediated device will be created on the first one where
+  there are any available instances of the selected type.
+
+
 Managing Virtual Machines with `qm`
 
 
-- 
2.39.2






[pve-devel] [PATCH manager 1/1] ui: allow configuring and live migration of mapped pci resources

2024-03-18 Thread Dominik Csapak
if the hardware/driver is capable, the admin can now mark a pci device
as 'live-migration-capable', which then tries enabling live migration
for such devices.

mark it as experimental when configuring and in the migrate window

Signed-off-by: Dominik Csapak 
---
 www/manager6/window/Migrate.js| 22 +++---
 www/manager6/window/PCIMapEdit.js | 12 
 2 files changed, 31 insertions(+), 3 deletions(-)

diff --git a/www/manager6/window/Migrate.js b/www/manager6/window/Migrate.js
index 5473821b..21806d50 100644
--- a/www/manager6/window/Migrate.js
+++ b/www/manager6/window/Migrate.js
@@ -245,6 +245,7 @@ Ext.define('PVE.window.Migrate', {
 
let blockingResources = [];
let mappedResources = migrateStats['mapped-resources'] ?? [];
+   let mappedWithLiveMigration = migrateStats['mapped-with-live-migration'] ?? [];
 
for (const res of migrateStats.local_resources) {
if (mappedResources.indexOf(res) === -1) {
@@ -271,14 +272,29 @@ Ext.define('PVE.window.Migrate', {
}
}
 
-   if (mappedResources && mappedResources.length) {
-   if (vm.get('running')) {
+   if (mappedResources && mappedResources.length && vm.get('running')) {
+   let allowed = [];
+   let notAllowed = [];
+   for (const resource of mappedResources) {
+   if (mappedWithLiveMigration.indexOf(resource) === -1) {
+   notAllowed.push(resource);
+   } else {
+   allowed.push(resource);
+   }
+   }
+   if (notAllowed.length > 0) {
migration.possible = false;
migration.preconditions.push({
text: Ext.String.format('Can\'t migrate running VM with mapped resources: {0}',
-   mappedResources.join(', ')),
+   notAllowed.join(', ')),
severity: 'error',
});
+   } else if (allowed.length > 0) {
+   migration.preconditions.push({
+   text: Ext.String.format('Live-migrating running VM with mapped resources (Experimental): {0}',
+   allowed.join(', ')),
+   severity: 'warning',
+   });
}
}
 
diff --git a/www/manager6/window/PCIMapEdit.js b/www/manager6/window/PCIMapEdit.js
index d43f04eb..731269a0 100644
--- a/www/manager6/window/PCIMapEdit.js
+++ b/www/manager6/window/PCIMapEdit.js
@@ -242,6 +242,18 @@ Ext.define('PVE.window.PCIMapEditWindow', {
disabled: '{hideComment}',
},
},
+   {
+   xtype: 'proxmoxcheckbox',
+   fieldLabel: gettext('Live Migration Capable'),
+   labelWidth: 200,
+   boxLabel: ` ${gettext('Experimental')}`,
+   reference: 'live-migration-capable',
+   name: 'live-migration-capable',
+   cbind: {
+   deleteEmpty: '{!isCreate}',
+   disabled: '{hideComment}',
+   },
+   },
],
 
columnB: [
-- 
2.39.2






[pve-devel] [PATCH guest-common 2/2] mapping: pci: optionally return the config in 'find_on_current_node'

2024-03-18 Thread Dominik Csapak
this is useful to get to the config without having to parse it again

Signed-off-by: Dominik Csapak 
---
 src/PVE/Mapping/PCI.pm | 7 ---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/src/PVE/Mapping/PCI.pm b/src/PVE/Mapping/PCI.pm
index 0866175..9d8a4a7 100644
--- a/src/PVE/Mapping/PCI.pm
+++ b/src/PVE/Mapping/PCI.pm
@@ -219,10 +219,11 @@ sub find_on_current_node {
 sub get_node_mapping {
 my ($cfg, $id, $nodename) = @_;
 
-return undef if !defined($cfg->{ids}->{$id});
+my $map_config = $cfg->{ids}->{$id};
+return undef if !defined($map_config);
 
 my $res = [];
-for my $map ($cfg->{ids}->{$id}->{map}->@*) {
+for my $map ($map_config->{map}->@*) {
my $entry = eval { parse_property_string($map_fmt, $map) };
warn $@ if $@;
if ($entry && $entry->{node} eq $nodename) {
@@ -230,7 +231,7 @@ sub get_node_mapping {
}
 }
 
-return $res;
+return wantarray ? ($res, $map_config) : $res;
 }
 
 PVE::Mapping::PCI->register();
-- 
2.39.2
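The `wantarray` pattern used above keeps both kinds of call sites working: list-context callers additionally receive the mapping config, while existing scalar-context callers keep getting only the node mapping. A standalone sketch (with a hypothetical `$cfg`, not a real mapping config):

```perl
use strict;
use warnings;

# same return shape as the patched get_node_mapping(): list context also
# yields the config, scalar context keeps the old single return value
sub get_mapping {
    my ($cfg, $id) = @_;
    my $map_config = $cfg->{ids}->{$id};
    return undef if !defined($map_config);
    my $res = [ $map_config->{map}->@* ];
    return wantarray ? ($res, $map_config) : $res;
}

# hypothetical sample data
my $cfg = { ids => { gpu0 => {
    map => ['node=a,path=0000:01:00.0'],
    'live-migration-capable' => 1,
} } };

my $devices = get_mapping($cfg, 'gpu0');           # scalar context: just the list
my ($devs, $map_cfg) = get_mapping($cfg, 'gpu0');  # list context: list + config
print $map_cfg->{'live-migration-capable'} ? "capable\n" : "not capable\n";
```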






[pve-devel] [PATCH guest-common 1/2] mapping: pci: add 'live-migration-capable' flag to mappings

2024-03-18 Thread Dominik Csapak
so that we can decide in qemu-server to allow live-migration.
the driver and qemu must be capable of that, and it's the
admin's responsibility to know and configure that

Mark the option as experimental in the description.

Signed-off-by: Dominik Csapak 
---
 src/PVE/Mapping/PCI.pm | 9 +
 1 file changed, 9 insertions(+)

diff --git a/src/PVE/Mapping/PCI.pm b/src/PVE/Mapping/PCI.pm
index 19ace98..0866175 100644
--- a/src/PVE/Mapping/PCI.pm
+++ b/src/PVE/Mapping/PCI.pm
@@ -100,8 +100,16 @@ my $defaultData = {
maxLength => 4096,
},
mdev => {
+   description => "Marks the device(s) as being capable of providing 
mediated devices.",
type => 'boolean',
optional => 1,
+   default => 0,
+   },
+   'live-migration-capable' => {
+   description => "Marks the device(s) as being able to be 
live-migrated (Experimental).",
+   type => 'boolean',
+   optional => 1,
+   default => 0,
},
map => {
type => 'array',
@@ -123,6 +131,7 @@ sub options {
 return {
description => { optional => 1 },
mdev => { optional => 1 },
+   'live-migration-capable' => { optional => 1 },
map => {},
 };
 }
-- 
2.39.2






[pve-devel] [PATCH guest-common/qemu-server/manager/docs] enable experimental support for pci live migration

2024-03-18 Thread Dominik Csapak
for mapped resources. This requires driver and hardware support,
but aside from nvidia vgpus there don't seem to be many drivers
(if any) that do support that.

qemu already supports that for vfio-pci devices, so nothing to be
done there besides actively enabling it.

Since we currently can't properly test it here and it very much depends on
hardware/driver support, mark it as experimental everywhere (docs/api/gui).
(though i tested the live-migration part manually here by using
"exec:cat > /tmp/test" for the migration target, and "exec: cat
/tmp/test" as the 'incoming' parameter for a new vm start, which worked ;) )

i opted for marking them migratable at the mapping level, but we could
theoretically also put it in the hostpciX config instead.
(though imho it fits better in the cluster-wide resource mapping config)

also the naming/texts could probably be improved, but i think
'live-migration-capable' is very descriptive and i didn't want to
use an overly short name for it (which can be confusing, see the
'shared' flag for storages)

note that patch 1 of qemu-server is only a cosmetic fix that i
encountered while testing and can be applied independently

pve-guest-common:

Dominik Csapak (2):
  mapping: pci: add 'live-migration-capable' flag to mappings
  mapping: pci: optionally return the config in 'find_on_current_node'

 src/PVE/Mapping/PCI.pm | 16 +---
 1 file changed, 13 insertions(+), 3 deletions(-)

qemu-server:

Dominik Csapak (4):
  usb: fix undef error on string match
  pci: set 'enable-migration' to on for live-migration marked mapped
devices
  check_local_resources: add more info per mapped device
  api: enable live migration for marked mapped pci devices

 PVE/API2/Qemu.pm  |  7 ++-
 PVE/QemuMigrate.pm| 13 +
 PVE/QemuServer.pm | 12 +++-
 PVE/QemuServer/PCI.pm |  8 +++-
 4 files changed, 29 insertions(+), 11 deletions(-)

pve-docs:

Dominik Csapak (2):
  qm: resource mapping: add description for `mdev` option
  qm: resource mapping: document `live-migration-capable` setting

 qm.adoc | 19 +++
 1 file changed, 19 insertions(+)

pve-manager:

Dominik Csapak (1):
  ui: allow configuring and live migration of mapped pci resources

 www/manager6/window/Migrate.js| 22 +++---
 www/manager6/window/PCIMapEdit.js | 12 
 2 files changed, 31 insertions(+), 3 deletions(-)

-- 
2.39.2






[pve-devel] [PATCH qemu-server 3/4] check_local_resources: add more info per mapped device

2024-03-18 Thread Dominik Csapak
such as the mapping name and if it's marked for live-migration (pci only)

Signed-off-by: Dominik Csapak 
---
 PVE/API2/Qemu.pm   |  2 +-
 PVE/QemuMigrate.pm |  5 +++--
 PVE/QemuServer.pm  | 10 ++
 3 files changed, 10 insertions(+), 7 deletions(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 497987ff..4ecaeb91 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -4516,7 +4516,7 @@ __PACKAGE__->register_method({
$res->{local_disks} = [ values %$local_disks ];;
 
$res->{local_resources} = $local_resources;
-   $res->{'mapped-resources'} = $mapped_resources;
+   $res->{'mapped-resources'} =  [ map { "$_->{key}" } $mapped_resources->@* ];
 
return $res;
 
diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index 8d9b35ae..6fe8157d 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -232,7 +232,7 @@ sub prepare {
my ($loc_res, $mapped_res, $missing_mappings_by_node) = PVE::QemuServer::check_local_resources($conf, 1);
 my $blocking_resources = [];
 for my $res ($loc_res->@*) {
-   if (!grep($res, $mapped_res->@*)) {
+   if (!grep { $_->{key} eq $res } $mapped_res->@*) {
push $blocking_resources->@*, $res;
}
 }
@@ -246,8 +246,9 @@ sub prepare {
 
 if (scalar($mapped_res->@*)) {
my $missing_mappings = $missing_mappings_by_node->{$self->{node}};
+   my $mapped_text = join(", ", map { $_->{key} } $mapped_res->@*);
if ($running) {
-   die "can't migrate running VM which uses mapped devices: " . 
join(", ", $mapped_res->@*) . "\n";
+   die "can't migrate running VM which uses mapped devices: " . 
$mapped_text . "\n";
} elsif (scalar($missing_mappings->@*)) {
die "can't migrate to '$self->{node}': missing mapped devices " . 
join(", ", $missing_mappings->@*) . "\n";
} else {
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 6e2c8052..ef3aee20 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -2600,14 +2600,16 @@ sub check_local_resources {
next if $entry->{host} && $entry->{host} =~ m/^spice$/i;
if ($entry->{mapping}) {
$add_missing_mapping->('usb', $k, $entry->{mapping});
-   push @$mapped_res, $k;
+   push @$mapped_res, { key => $k, device => $entry->{mapping} };
}
}
if ($k =~ m/^hostpci/) {
my $entry = parse_property_string('pve-qm-hostpci', $conf->{$k});
-   if ($entry->{mapping}) {
-   $add_missing_mapping->('pci', $k, $entry->{mapping});
-   push @$mapped_res, $k;
+   if (my $name = $entry->{mapping}) {
+   $add_missing_mapping->('pci', $k, $name);
+   my $mapped_device = { key => $k, device => $name };
$mapped_device->{'live-migration'} = 1 if $pci_map->{ids}->{$name}->{'live-migration-capable'};
+   push @$mapped_res, $mapped_device;
}
}
# sockets are safe: they will be recreated on the target side post-migrate
-- 
2.39.2
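The `grep` change in the QemuMigrate hunk above fixes a classic Perl pitfall worth spelling out: in `grep(EXPR, LIST)`, EXPR is evaluated once per element, so `grep($res, @list)` only tests the truthiness of `$res` and "matches" every element. The block form compares the hash entries properly. A sketch with made-up sample data:

```perl
use strict;
use warnings;

# hypothetical mapped-resource entries as built by check_local_resources()
my $mapped_res = [
    { key => 'hostpci0', device => 'gpu0' },
    { key => 'usb1',     device => 'stick' },
];
my $res = 'serial0';    # a local resource that is NOT in the mapped list

# grep(EXPR, LIST): EXPR ignores the elements, counts everything truthy
my $wrong = grep($res, $mapped_res->@*);                  # 2 - matches everything
# grep BLOCK LIST: compares each entry's key against $res
my $right = grep { $_->{key} eq $res } $mapped_res->@*;   # 0 - correct

print "wrong=$wrong right=$right\n";    # prints: wrong=2 right=0
```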






Re: [pve-devel] [PATCH pve-network 0/8] SDN Vnet blackbox testing

2024-03-18 Thread Max Carrara
On Wed Jan 3, 2024 at 4:37 PM CET, Stefan Lendl wrote:
> Add several tests for Vnets. State setup as well as testing results is
> done only via the API to test on the API boundaries, not against the
> internal state. Internal state is mocked to avoid requiring access to
> system files or pmxcfs.
>
> Tests validate the events of a nic joining a Vnet or a nic starting on a vnet
> with different subnet configurations.
> Further descriptions in the commit.

I really like this! I'm always a fan of more testing being done.

There are some things I mentioned in patch 8, but I overall like this
series a lot.

>
> Stefan Lendl (8):
>   refactor(sdn): extract cfs_read_file(datacenter.cfg)
>   refactor(dnsmasq): extract systemctl_service function
>   refactor(dnsmasq): extract ethers_file function
>   refactor(dnsmasq): extract update_lease function
>   refactor(controllers): extract read_etc_network_interfaces
>   refactor(evpn): extract read_local_frr_config
>   refactor(api): extract create_etc_interfaces_sdn_dir

The naming here could be a little different though; I think you can just
skip the `refactor()` part, we don't really use that anywhere AFAIK.

>   test(vnets): add test_vnets_blackbox

Also, for this message you could just use "test: vnets: ..." instead.

See also: 
https://pve.proxmox.com/wiki/Developer_Documentation#Commits_and_Commit_Messages

>
>  src/PVE/API2/Network/SDN/Zones.pm |   6 +-
>  src/PVE/Network/SDN/Controllers.pm|  16 +-
>  src/PVE/Network/SDN/Controllers/EvpnPlugin.pm |  10 +-
>  src/PVE/Network/SDN/Dhcp/Dnsmasq.pm   |  47 +-
>  src/PVE/Network/SDN/Zones/EvpnPlugin.pm   |   3 +-
>  src/PVE/Network/SDN/Zones/Plugin.pm   |   5 +
>  src/PVE/Network/SDN/Zones/SimplePlugin.pm |   2 +-
>  src/test/Makefile |   5 +-
>  src/test/run_test_vnets_blackbox.pl   | 797 ++
>  9 files changed, 863 insertions(+), 28 deletions(-)
>  create mode 100755 src/test/run_test_vnets_blackbox.pl






Re: [pve-devel] [PATCH pve-network 8/8] test(vnets): add test_vnets_blackbox

2024-03-18 Thread Max Carrara
On Wed Jan 3, 2024 at 4:37 PM CET, Stefan Lendl wrote:
> Add several tests for Vnets. State setup as well as testing results is
> done only via the API to test on the API boundaries, not against the
> internal state. Internal state is mocked to avoid requiring access to
> system files or pmxcfs.
>
> Mocking is done by reading and writing to a hash that holds the entire
> state of SDN. The state is reset after every test run.
>
> Testing is done via helper functions: nic_join and nic_start.
> When a nic joins a Vnet, currently it always - and only - calls
> add_next_free_cidr(). The same is true if a nic starts on Vnet, which
> only calles add_dhcp_mapping.
>
> These test functions homogenize the parameter list in contrast to the
> current calls to the current functions.  These functions should move to
> Vnets.pm to be called from QemuServer and LXC!
>
> The run_test function takes a function pointer and passes the rest of
> the arguments to the test functions after resetting the test state.
> This allows fine-grained parameterization per-test directly in the code
> instead of separated files that require the entire state to be passed
> in.
>
> The tests setup the SDN by creating a simple zone and a simple vnet. The
> nic_join and nic_start function is called with different subnet
> configuration with and without a dhcp-range configured and with or
> without an already present IP in the IPAM.

I really like where this is going! Now that I've read through this
patch, it's become clear why you factored so many calls to commands etc.
into their own `sub`s.

Since you mentioned that this is more of an RFC off-list, I get why
there are a bunch of lines that are commented out at the moment. Those
obviously shouldn't be committed later on.

>
> Several of the tests fail and uncovers bugs, that shall be fixed in
> subsequent commits.

Would be nice to perhaps also have those in the final series though ;)

Another thing that stood out to me is that some cases could be
declarative, e.g. the cases for `test_nic_join` and `test_nic_start`
could be declared in an array for each. You could then just loop over
the cases - that makes it easier to `plan` those cases later on.

That being said, you could perhaps structure the whole script so that
you call a `sub` named e.g. `setup` where you - well - set up all the
required logic and perform checks for the necessary pre-conditions, then
another `sub` that runs the tests (and optionally one that cleans things
up if necessary).

Though, please note that this is not a strict necessity from my side,
just something I wanted to mention! I like the way you've written your
tests a lot, it's just that I personally tend to prefer a more
declarative approach. So, it's okay if you just leave the structure as
it is right now, if you prefer it that way.

There are some more comments inline that give a little more context
regarding the above, but otherwise, LGTM - pretty good to see more
testing to be done!

>
> Signed-off-by: Stefan Lendl 
> ---
>  src/test/Makefile   |   5 +-
>  src/test/run_test_vnets_blackbox.pl | 797 
>  2 files changed, 801 insertions(+), 1 deletion(-)
>  create mode 100755 src/test/run_test_vnets_blackbox.pl
>
> diff --git a/src/test/Makefile b/src/test/Makefile
> index eb59d5f..5a937a4 100644
> --- a/src/test/Makefile
> +++ b/src/test/Makefile
> @@ -1,6 +1,6 @@
>  all: test
>  
> -test: test_zones test_ipams test_dns test_subnets
> +test: test_zones test_ipams test_dns test_subnets test_vnets_blackbox
>  
>  test_zones: run_test_zones.pl
>   ./run_test_zones.pl
> @@ -14,4 +14,7 @@ test_dns: run_test_dns.pl
>  test_subnets: run_test_subnets.pl
>   ./run_test_subnets.pl
>  
> +test_vnets_blackbox: run_test_vnets_blackbox.pl
> + ./run_test_vnets_blackbox.pl
> +
>  clean:
> diff --git a/src/test/run_test_vnets_blackbox.pl 
> b/src/test/run_test_vnets_blackbox.pl
> new file mode 100755
> index 000..2fd5bd7
> --- /dev/null
> +++ b/src/test/run_test_vnets_blackbox.pl
> @@ -0,0 +1,797 @@
> +#!/usr/bin/perl
> +
> +use strict;
> +use warnings;
> +
> +use lib qw(..);
> +use File::Slurp;
> +use List::Util qw(first all);
> +use NetAddr::IP qw(:lower);
> +
> +use Test::More;
> +use Test::MockModule;
> +
> +use PVE::Tools qw(extract_param file_set_contents);
> +
> +use PVE::Network::SDN;
> +use PVE::Network::SDN::Zones;
> +use PVE::Network::SDN::Zones::Plugin;
> +use PVE::Network::SDN::Controllers;
> +use PVE::Network::SDN::Dns;
> +use PVE::Network::SDN::Vnets;
> +
> +use PVE::RESTEnvironment;
> +
> +use PVE::API2::Network::SDN::Zones;
> +use PVE::API2::Network::SDN::Subnets;
> +use PVE::API2::Network::SDN::Vnets;
> +use PVE::API2::Network::SDN::Ipams;
> +
> +use Data::Dumper qw(Dumper);
> +$Data::Dumper::Sortkeys = 1;
> +$Data::Dumper::Indent = 1;
> +
> +my $TMP_ETHERS_FILE = "/tmp/ethers";
> +
> +my $test_state = undef;
> +sub clear_test_state {
> +$test_state = {
> + locks => {},
> + datacenter_config =

[pve-devel] [RFC PATCH widget-toolkit] utils: API2Request: defer masking after layout

2024-03-18 Thread Dominik Csapak
for some time now (not sure when exactly it started), the 'load()' method
of the edit window has not correctly masked the window anymore

the reason seems to be that the API2Request tries to mask the component
before it's rendered, and that never worked correctly, judging from the
existing comment.

Instead of simply calling `setLoading`, test if the component is
rendered, and if not, mask it after it has finished its layout.

Since we cannot guarantee that the 'afterlayout' event is triggered
before the api call response handler, add a unique id marker to the
waitMsgTarget that is deleted when the loading is done, and only trigger
the masking if this marker is still there. (thankfully javascript is
single threaded so this should not end up being a data race)

Signed-off-by: Dominik Csapak 
---
sending as RFC because i'm unsure if we accidentally broke the masking
somewhere along the way. AFAICS from the current code, this never could have
worked properly? anyway, i'll be looking into that sometime soon, and
this patch should be correct anyway...

 src/Utils.js | 13 +++--
 1 file changed, 11 insertions(+), 2 deletions(-)

diff --git a/src/Utils.js b/src/Utils.js
index ff7c1a7..b5e12f3 100644
--- a/src/Utils.js
+++ b/src/Utils.js
@@ -458,6 +458,7 @@ utilities: {
newopts.url = '/api2/extjs' + newopts.url;
}
delete newopts.callback;
+   let loadingId = `loading${++Ext.idSeed}`;
 
let createWrapper = function(successFn, callbackFn, failureFn) {
Ext.apply(newopts, {
@@ -467,6 +468,7 @@ utilities: {
options.waitMsgTarget.setMasked(false);
} else {
options.waitMsgTarget.setLoading(false);
+   delete options.waitMsgTarget[loadingId];
}
}
let result = Ext.decode(response.responseText);
@@ -489,6 +491,7 @@ utilities: {
options.waitMsgTarget.setMasked(false);
} else {
options.waitMsgTarget.setLoading(false);
+   delete options.waitMsgTarget[loadingId];
}
}
response.result = {};
@@ -518,9 +521,15 @@ utilities: {
if (target) {
if (Proxmox.Utils.toolkit === 'touch') {
target.setMasked({ xtype: 'loadmask', message: newopts.waitMsg });
-   } else {
-   // Note: ExtJS bug - this does not work when component is not 
rendered
+   } else if (target.rendered) {
target.setLoading(newopts.waitMsg);
+   } else {
+   target[loadingId] = true;
+   target.on('afterlayout', function() {
+   if (target[loadingId]) {
+   target.setLoading(newopts.waitMsg);
+   }
+   }, target, { single: true });
}
}
Ext.Ajax.request(newopts);
-- 
2.39.2






Re: [pve-devel] [PATCH pve-network 8/8] test(vnets): add test_vnets_blackbox

2024-03-18 Thread Stefan Lendl
"Max Carrara"  writes:

> On Wed Jan 3, 2024 at 4:37 PM CET, Stefan Lendl wrote:
>> Add several tests for Vnets. State setup as well as testing results is
>> done only via the API to test on the API boundaries, not against the
>> internal state. Internal state is mocked to avoid requiring access to
>> system files or pmxcfs.
>>
>> Mocking is done by reading and writing to a hash that holds the entire
>> state of SDN. The state is reset after every test run.
>>
>> Testing is done via helper functions: nic_join and nic_start.
>> When a nic joins a Vnet, currently it always - and only - calls
>> add_next_free_cidr(). The same is true if a nic starts on Vnet, which
>> only calles add_dhcp_mapping.
>>
>> These test functions homogenize the parameter list in contrast to the
>> current calls to the current functions.  These functions should move to
>> Vnets.pm to be called from QemuServer and LXC!
>>
>> The run_test function takes a function pointer and passes the rest of
>> the arguments to the test functions after resetting the test state.
>> This allows fine-grained parameterization per-test directly in the code
>> instead of separated files that require the entire state to be passed
>> in.
>>
>> The tests setup the SDN by creating a simple zone and a simple vnet. The
>> nic_join and nic_start function is called with different subnet
>> configuration wiht and without a dhcp-range configured and with or
>> without an already present IP in the IPAM.
>
> I really like where this is going! Now that I've read through this
> patch, it's become clear why you factored so many calls to commands etc.
> into their own `sub`s.
>
> Since you mentioned that this is more of an RFC off-list, I get why
> there are a bunch of lines that are commented out at the moment. Those
> obviously shouldn't be committed later on.
>
>>
>> Several of the tests fail and uncovers bugs, that shall be fixed in
>> subsequent commits.
>
> Would be nice to perhaps also have those in the final series though ;)

Yes, agreed, I will uncomment them in a follow-up.

>
> Another thing that stood out to me is that some cases could be
> declarative, e.g. the cases for `test_nic_join` and `test_nic_start`
> could be declared in an array for each. You could then just loop over
> the cases - that makes it easier to `plan` those cases later on.
>

I totally agree it would be nice to have it like that.
I tried to get it there but found unrolling the calls to be more
readable, and it keeps the test sub body simpler by not requiring a loop
in the test or a setup sub.

If this approach were refactored further with some Perl-magic™ that
would be nice, but I chose this deliberately for simplicity and readability.

> That being said, you could perhaps structure the whole script so that
> you call a `sub` named e.g. `setup` where you - well - set up all the
> required logic and perform checks for the necessary pre-conditions, then
> another `sub` that runs the tests (and optionally one that cleans things
> up if necessary).

Yes, agreed as well. Also tried that but chose a "simpler" version for
the first iteration.

I found that it is sometimes simpler to have dedicated test functions,
for example if you have a dhcp-range, instead of if-casing whether a
property is present in the hash.

I will re-consider a dedicated setup sub for a follow-up.

>
> Though, please note that this is not a strict necessity from my side,
> just something I wanted to mention! I like the way you've written your
> tests a lot, it's just that I personally tend to prefer a more
> declarative approach. So, it's okay if you just leave the structure as
> it is right now, if you prefer it that way.
>
> There are some more comments inline that give a little more context
> regarding the above, but otherwise, LGTM - pretty good to see more
> testing to be done!
>

Thanks for the review. 


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH pve-network 8/8] test(vnets): add test_vnets_blackbox

2024-03-18 Thread Max Carrara
On Mon Mar 18, 2024 at 3:14 PM CET, Stefan Lendl wrote:
> "Max Carrara"  writes:
>
> > On Wed Jan 3, 2024 at 4:37 PM CET, Stefan Lendl wrote:
> >> Add several tests for Vnets. State setup as well as testing results is
> >> done only via the API to test on the API boundaries not not against the
> >> internal state. Internal state is mocked to avoid requiring access to
> >> system files or pmxcfs.
> >>
> >> Mocking is done by reading and writing to a hash that holds the entire
> >> state of SDN. The state is reset after every test run.
> >>
> >> Testing is done via helper functions: nic_join and nic_start.
> >> When a nic joins a Vnet, currently it always - and only - calls
> >> add_next_free_cidr(). The same is true if a nic starts on Vnet, which
> >> only calles add_dhcp_mapping.
> >>
> >> These test functions homogenize the parameter list in contrast to the
> >> current calls to the current functions.  These functions should move to
> >> Vnets.pm to be called from QemuServer and LXC!
> >>
> >> The run_test function takes a function pointer and passes the rest of
> >> the arguments to the test functions after resetting the test state.
> >> This allows fine-grained parameterization per-test directly in the code
> >> instead of separated files that require the entire state to be passed
> >> in.
> >>
> >> The tests setup the SDN by creating a simple zone and a simple vnet. The
> >> nic_join and nic_start function is called with different subnet
> >> configuration wiht and without a dhcp-range configured and with or
> >> without an already present IP in the IPAM.
> >
> > I really like where this is going! Now that I've read through this
> > patch, it's become clear why you factored so many calls to commands etc.
> > into their own `sub`s.
> >
> > Since you mentioned that this is more of an RFC off-list, I get why
> > there are a bunch of lines that are commented out at the moment. Those
> > obviously shouldn't be committed later on.
> >
> >>
> >> Several of the tests fail and uncovers bugs, that shall be fixed in
> >> subsequent commits.
> >
> > Would be nice to perhaps also have those in the final series though ;)
>
> Yes agreed will uncomment them in a follow-up.
>
> >
> > Another thing that stood out to me is that some cases could be
> > declarative, e.g. the cases for `test_nic_join` and `test_nic_start`
> > could be declared in an array for each. You could then just loop over
> > the cases - that makes it easier to `plan` those cases later on.
> >
>
> I totally agree it would be nice to have it like that.
> I tried to get it there but found unrolling the calls to be more
> readable and making the test sub body simpler not requiring to loop in
> the test or a setup sub.
>
> If this approach would be refactored further with some Perl-magic™ this
> would be nice but I chose this deliberatly for simplicity and readability.

Very fair! Then it's best to just keep it as it is, in that case.

>
> > That being said, you could perhaps structure the whole script so that
> > you call a `sub` named e.g. `setup` where you - well - set up all the
> > required logic and perform checks for the necessary pre-conditions, then
> > another `sub` that runs the tests (and optionally one that cleans things
> > up if necessary).
>
> Yes, agreed as well. Also tried that but chose a "simpler" version for
> the first iteration.
>
> I found that it is sometimes simpler to have dedicated test functions
> for example if you have a dhcp-range instead of if-casing whether a a
> property is present in the hash.

I agree completely - it was more of an idea / nice-to-have. FWIW, if you
want to go down that route, you can of course also save references to
the `sub`s being run in the array of cases. Alternatively, it's also
totally fine to have multiple arrays of cases for each specific
function.

That being said, if you just want to keep it as it is, that's also
totally fine by me. Should the file grow, such a small refactor most
likely won't cost that much time.

>
> I will re-consider a dedicated setup sub for a follow-up.
>
> >
> > Though, please note that this is not a strict necessity from my side,
> > just something I wanted to mention! I like the way you've written your
> > tests a lot, it's just that I personally tend to prefer a more
> > declarative approach. So, it's okay if you just leave the structure as
> > it is right now, if you prefer it that way.
> >
> > There are some more comments inline that give a little more context
> > regarding the above, but otherwise, LGTM - pretty good to see more
> > testing to be done!
> >
>
> Thanks for the review. 

You're welcome!





Re: [pve-devel] [RFC PATCH widget-toolkit] utils: API2Request: defer masking after layout

2024-03-18 Thread Thomas Lamprecht
On 18/03/2024 14:44, Dominik Csapak wrote:
> since some time (not sure when exactly), the 'load()' method of the edit
> window did not correctly mask the window anymore
> 
> the reason seems to be that the API2Request tries to mask the component
> before it's rendered, and that did never work correctly judging from the
> existing comment.
> 
> Instead of simply calling `setLoading`, test if the component is
> rendered, and if not, mask it after it has finished it's layout.
> 
> Since we cannot guarantee that the 'afterlayout' event is triggered
> before the api call response handler, add a unique id marker to the
> waitMsgTarget that is delted when the loading is done, and only trigger

s/delted/deleted/

And why do we need setting a unique ID here and not just a flag?
Can a second load be triggered before the first one finished?

> the masking if this marker is still there. (thankfully javascript is
> single threaded so this should not end up being a data race)

Note that async code can cause data races even in single-threaded
code, but as we do not use that here and no yield points exist, that
does not matter in this case. Just mentioning it because the statement
would suggest that one could not have code susceptible to such a race
at all in JavaScript, which is not true.

> 
> Signed-off-by: Dominik Csapak 
> ---
> sending as RFC because i'm unsure if we accidentally broke the masking
> somewhere along the way. AFAICS from the current code, this never could have
> worked properly? anyway, i'll be looking into that sometimes soon, and
> this patch should be correct anyway...

it surely did work at some point in the past, maybe with ExtJS 7?




Re: [pve-devel] [PATCH installer] unconfigured: move terminal size setting before starting debug shell

2024-03-18 Thread Thomas Lamprecht
On 12/03/2024 12:59, Christoph Heiss wrote:
> Otherwise, when using the serial debug shell, the console size will be
> 0x0. This in turn breaks the TUI installer, as it cannot detect the size
> properly.

That's ok, but...

> 
> It also adjust the size to the proper 80x24 instead of 80x25, as
> advertised in the log message.

... why not go in the opposite direction and change the log message,
or is there some reason to go for 24 rows?

Isn't 80x25 the standard VGA one?





Re: [pve-devel] [PATCH installer] unconfigured: move terminal size setting before starting debug shell

2024-03-18 Thread Christoph Heiss
On Mon, Mar 18, 2024 at 04:55:20PM +0100, Thomas Lamprecht wrote:
> On 12/03/2024 12:59, Christoph Heiss wrote:
> > Otherwise, when using the serial debug shell, the console size will be
> > 0x0. This in turn breaks the TUI installer, as it cannot detect the size
> > properly.
>
> That's ok, but...
>
> >
> > It also adjust the size to the proper 80x24 instead of 80x25, as
> > advertised in the log message.
>
> ... why not going the opposite direction and change the log message,
> or is there some reason to go for 24 rows?
>
> Isn't 80x25 the standard VGA one?

For VGA 80x25 is the standard size [0], yes.

But for serial consoles, i.e. VT10x-emulated terminals, it is actually
80x24 [1], which basically everything uses as a reference. GRUB also
uses 80x24 when used over a serial interface; it spits out 24 lines at
least.

Probably should have noted that in the commit message, sorry.

[0] https://www.kernel.org/doc/Documentation/svga.txt (`NORMAL_VGA`)
[1] https://vt100.net/dec/ek-vt100-tm-002.pdf (page 21, "Format")

