[pve-devel] [PATCH manager v3 6/6] fix #3893: ui: network: enable bridge_vids field

2024-07-03 Thread Aaron Lauterer
Signed-off-by: Aaron Lauterer 
---
changes since v2: none

 www/manager6/node/Config.js | 1 +
 1 file changed, 1 insertion(+)

diff --git a/www/manager6/node/Config.js b/www/manager6/node/Config.js
index d27592ce..7bdfb6d9 100644
--- a/www/manager6/node/Config.js
+++ b/www/manager6/node/Config.js
@@ -194,6 +194,7 @@ Ext.define('PVE.node.Config', {
showApplyBtn: true,
groups: ['services'],
nodename: nodename,
+   bridge_set_vids: true,
onlineHelp: 'sysadmin_network_configuration',
},
{
-- 
2.39.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH widget-toolkit v3] fix #3892: Network: add bridge vids field for bridge_vids

2024-07-03 Thread Aaron Lauterer
The new optional bridge_vids field allows setting that property via the
GUI. Since the backend needs to support it, the field has to be
enabled explicitly.

For now, Proxmox VE (PVE) is the only use case.

Signed-off-by: Aaron Lauterer 
---
changes since v2:
* added validation code following how it is implemented in the API

 src/node/NetworkEdit.js | 62 +
 src/node/NetworkView.js |  5 
 2 files changed, 67 insertions(+)

diff --git a/src/node/NetworkEdit.js b/src/node/NetworkEdit.js
index 27c1baf..8c1b135 100644
--- a/src/node/NetworkEdit.js
+++ b/src/node/NetworkEdit.js
@@ -2,6 +2,9 @@ Ext.define('Proxmox.node.NetworkEdit', {
 extend: 'Proxmox.window.Edit',
 alias: ['widget.proxmoxNodeNetworkEdit'],
 
+// Enable to show the VLAN ID field
+bridge_set_vids: false,
+
 initComponent: function() {
let me = this;
 
@@ -57,11 +60,67 @@ Ext.define('Proxmox.node.NetworkEdit', {
}
 
if (me.iftype === 'bridge') {
+   let vids = Ext.create('Ext.form.field.Text', {
+   fieldLabel: gettext('Bridge VIDs'),
+   name: 'bridge_vids',
+   emptyText: '2-4094',
+   disabled: true,
+   autoEl: {
+   tag: 'div',
+   'data-qtip': gettext('Space-separated list of VLANs and ranges, for example: 2 4 100-200'),
+   },
+   validator: function(value) {
+   if (!value) { return true; } // Empty
+   let result = true;
+
+   let vid_list = value.split(' ');
+
+   let checkVid = function(tag) {
+   if (tag < 2 || tag > 4094) {
+   return `not a valid VLAN ID '${tag}'`;
+   }
+   return true;
+   };
+
+   for (const vid of vid_list) {
+   if (!vid) {
+   continue;
+   }
+   let res = vid.match(/^(\d+)([-](\d+))?$/i);
+   if (!res) {
+   return `not a valid VLAN configuration '${vid}'`;
+   }
+   let start = Number(res[1]);
+   let end = Number(res[3]);
+
+   if (start) {
+   result = checkVid(start);
+   if (result !== true) { return result; }
+   }
+   if (end) {
+   result = checkVid(end);
+   if (result !== true) { return result; }
+   }
+
+   if (start >= end) {
+   return `VID range must go from lower to higher tag: '${vid}'`;
+   }
+   }
+   return true;
+   },
+   });
column2.push({
xtype: 'proxmoxcheckbox',
fieldLabel: gettext('VLAN aware'),
name: 'bridge_vlan_aware',
deleteEmpty: !me.isCreate,
+   listeners: {
+   change: function(f, newVal) {
+   if (me.bridge_set_vids) {
+   vids.setDisabled(!newVal);
+   }
+   },
+   },
});
column2.push({
xtype: 'textfield',
@@ -72,6 +131,9 @@ Ext.define('Proxmox.node.NetworkEdit', {
'data-qtip': gettext('Space-separated list of interfaces, for example: enp0s0 enp1s0'),
},
});
+   if (me.bridge_set_vids) {
+   advancedColumn2.push(vids);
+   }
} else if (me.iftype === 'OVSBridge') {
column2.push({
xtype: 'textfield',
diff --git a/src/node/NetworkView.js b/src/node/NetworkView.js
index 1d67ac8..c08cbfa 100644
--- a/src/node/NetworkView.js
+++ b/src/node/NetworkView.js
@@ -33,6 +33,9 @@ Ext.define('Proxmox.node.NetworkView', {
 
 showApplyBtn: false,
 
+// decides if the VLAN IDs field for bridges is shown; enable it if the product needs it
+bridge_set_vids: false,
+
 initComponent: function() {
let me = this;
 
@@ -100,6 +103,7 @@ Ext.define('Proxmox.node.NetworkView', {
nodename: me.nodename,
iface: rec.data.iface,
iftype: rec.data.type,
+   bridge_set_vids: me.bridge_set_vids,
listeners: {
destroy: () => reload(),
},
@@ -170,6 +174,7 @@ Ext.define('Proxmox.node.NetworkView', {
nodename: me.nodename,
iftype: iType,
iface_default: findNextFreeInterfaceId(iDefault ?? iType),
+   bridge_set_vids: me.bridge_set_vids,
onlineHelp: 'sysadmin_network_configuration',

[pve-devel] [PATCH common v3 2/3] fix #3893: network: add vlan id and range parameter definitions

2024-07-03 Thread Aaron Lauterer
This is one step to make it possible to define the VLAN IDs and ranges
for bridges.

It is expected to be used in combination with the `-list` magic
property. Therefore it defines and checks the validity of a single list
item that could just be a single VLAN tag ID or a range.
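The single-item grammar can be sketched outside of Perl. The following Python re-implementation (names are illustrative, not from the patch) mirrors the single-tag-or-range check, including the bounds and range-direction rules:

```python
import re

def verify_vlan_id_or_range(item: str) -> bool:
    # A list item is a single VLAN tag (2-4094) or a range "start-end".
    match = re.fullmatch(r"(\d+)(?:-(\d+))?", item)
    if not match:
        return False
    start = int(match.group(1))
    end = int(match.group(2)) if match.group(2) else None
    if not 2 <= start <= 4094:
        return False
    if end is not None:
        if not 2 <= end <= 4094:
            return False
        if start >= end:  # a range must go from lower to higher tag
            return False
    return True
```

With the `-list` magic property, the full parameter is then just a list of such items.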

Signed-off-by: Aaron Lauterer 
---
changes since v2:
* renamed schema to a more generic one with the use case of the guest
  trunk config in mind

 src/PVE/JSONSchema.pm | 34 ++
 1 file changed, 34 insertions(+)

diff --git a/src/PVE/JSONSchema.pm b/src/PVE/JSONSchema.pm
index 115f811..22fe048 100644
--- a/src/PVE/JSONSchema.pm
+++ b/src/PVE/JSONSchema.pm
@@ -81,6 +81,12 @@ register_standard_option('pve-iface', {
 minLength => 2, maxLength => 20,
 });
 
+register_standard_option('pve-vlan-id-or-range', {
+description => "VLAN ID or range.",
+type => 'string', format => 'pve-vlan-id-or-range',
+minLength => 1, maxLength => 9,
+});
+
 register_standard_option('pve-storage-id', {
 description => "The storage identifier.",
 type => 'string', format => 'pve-storage-id',
@@ -595,6 +601,34 @@ sub pve_verify_iface {
 return $id;
 }
 
+# vlan id (vids), single or range
+register_format('pve-vlan-id-or-range', \&pve_verify_vlan_id_or_range);
+sub pve_verify_vlan_id_or_range {
+my ($vlan, $noerr) = @_;
+
+# returns the tag if it is valid, undef (or dies) otherwise
+my $check_vid = sub {
+   my $tag = shift;
+   if ($tag < 2 || $tag > 4094) {
+   return undef if $noerr;
+   die "invalid VLAN tag '$tag'\n";
+   }
+   return $tag;
+};
+
+if ($vlan !~ m/^(\d+)([-](\d+))?$/) {
+   return undef if $noerr;
+   die "invalid VLAN configuration '$vlan'\n";
+}
+my $start = $1;
+my $end = $3;
+return undef if !defined($check_vid->($start));
+if (defined($end)) {
+   return undef if !defined($check_vid->($end));
+   if ($start >= $end) {
+   return undef if $noerr;
+   die "VLAN range must go from lower to higher tag '$vlan'\n";
+   }
+}
+
+return $vlan;
+}
+
 # general addresses by name or IP
 register_format('address', \&pve_verify_address);
 sub pve_verify_address {
-- 
2.39.2






[pve-devel] [PATCH common v3 1/3] tools: add check_list_empty function

2024-07-03 Thread Aaron Lauterer
In some situations we don't want a totally empty list. I opted for a
dedicated function instead of integrating the check as an error in the
`split_list` function: it is used in many places, so the potential
fallout from unintended behavior changes would be too big.
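The helper's semantics can be sketched in Python (a hypothetical stand-in; `split_list` here only imitates the Perl original's acceptance of several separators):

```python
import re

def split_list(text: str) -> list[str]:
    # Stand-in for PVE::Tools::split_list, which splits on commas,
    # semicolons and whitespace alike.
    return [part for part in re.split(r"[,;\s]+", text or "") if part]

def check_list_empty(text: str) -> bool:
    # Mirrors the new Perl helper: returns false (0) if the list has no
    # items, true (1) otherwise, so callers write "if (!check_list_empty(...))".
    return len(split_list(text)) >= 1
```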

Signed-off-by: Aaron Lauterer 
---
changes since v2:
* newly added

 src/PVE/Tools.pm | 8 
 1 file changed, 8 insertions(+)

diff --git a/src/PVE/Tools.pm b/src/PVE/Tools.pm
index 766c809..c8ac6f0 100644
--- a/src/PVE/Tools.pm
+++ b/src/PVE/Tools.pm
@@ -718,6 +718,14 @@ sub split_list {
 return @data;
 }
 
+sub check_list_empty {
+my ($list) = @_;
+if (scalar(PVE::Tools::split_list($list)) < 1) {
+   return 0;
+}
+return 1;
+}
+
 sub trim {
 my $txt = shift;
 
-- 
2.39.2






[pve-devel] [PATCH common v3 3/3] inotify: interfaces: make sure bridge_vids use space as separator

2024-07-03 Thread Aaron Lauterer
Because the API accepts multiple possible list separators, we need to
make sure that we write bridge_vids with a space as separator, no
matter which separator was used when passing it to the API.
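The normalization step is small enough to sketch in Python (illustrative only; the patch itself does `join(" ", PVE::Tools::split_list(...))`):

```python
import re

def normalize_bridge_vids(value: str) -> str:
    # Whatever separator the API call used (comma, semicolon, whitespace),
    # the interfaces file always gets a space-separated list.
    return " ".join(part for part in re.split(r"[,;\s]+", value) if part)
```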

Signed-off-by: Aaron Lauterer 
---
changes since v2:
* added to make sure the format works as expected, no matter what
  delimiter might have been used in the API call

 src/PVE/INotify.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/PVE/INotify.pm b/src/PVE/INotify.pm
index 8a4a810..bf580c5 100644
--- a/src/PVE/INotify.pm
+++ b/src/PVE/INotify.pm
@@ -1315,7 +1315,7 @@ sub __interface_to_string {
 
if (defined($d->{bridge_vlan_aware})) {
$raw .= "\tbridge-vlan-aware yes\n";
-   my $vlans = defined($d->{bridge_vids}) ? $d->{bridge_vids} : "2-4094";
+   my $vlans = length($d->{bridge_vids}) ? join(" ", PVE::Tools::split_list($d->{bridge_vids})) : "2-4094";
$raw .= "\tbridge-vids $vlans\n";
}
$done->{bridge_vlan_aware} = 1;
-- 
2.39.2






[pve-devel] [PATCH common, widget-toolkit, manager v3 0/3] fix #3893: make bridge vids configurable

2024-07-03 Thread Aaron Lauterer
this version reworks a few parts since v2.

* renamed format in JSONSchema to a more generic `pve-vlan-id-or-range`
* explicitly use spaces when writing the interfaces file. This is one
  possible approach to deal with the fact that the generic `-list`
  format will accept quite a few delimiters and we need spaces.
* code style improvements such as naming the regex results.
* add parameter verification to the web UI

With the changes to the JSONSchema we can then work on using it too for
the guest trunk option. This hasn't been started yet though.

common: Aaron Lauterer (3):
  tools: add check_list_empty function
  fix #3893: network: add vlan id and range parameter definitions
  inotify: interfaces: make sure bridge_vids use space as separator

 src/PVE/INotify.pm|  2 +-
 src/PVE/JSONSchema.pm | 34 ++
 src/PVE/Tools.pm  |  8 
 3 files changed, 43 insertions(+), 1 deletion(-)


widget-toolkit: Aaron Lauterer (1):
  fix #3892: Network: add bridge vids field for bridge_vids

 src/node/NetworkEdit.js | 62 +
 src/node/NetworkView.js |  5 
 2 files changed, 67 insertions(+)


manager: Aaron Lauterer (2):
  fix #3893: api: network: add bridge_vids parameter
  fix #3893: ui: network: enable bridge_vids field

 PVE/API2/Network.pm | 15 ++-
 www/manager6/node/Config.js |  1 +
 2 files changed, 15 insertions(+), 1 deletion(-)

-- 
2.39.2






[pve-devel] [PATCH manager v3 5/6] fix #3893: api: network: add bridge_vids parameter

2024-07-03 Thread Aaron Lauterer
Signed-off-by: Aaron Lauterer 
---
changes since v2:
* added checks to handle empty lists

 PVE/API2/Network.pm | 15 ++-
 1 file changed, 14 insertions(+), 1 deletion(-)

diff --git a/PVE/API2/Network.pm b/PVE/API2/Network.pm
index f39f04f5..dd3855d1 100644
--- a/PVE/API2/Network.pm
+++ b/PVE/API2/Network.pm
@@ -66,6 +66,11 @@ my $confdesc = {
type => 'boolean',
optional => 1,
 },
+bridge_vids => {
+   description => "Specify the allowed VLANs. For example: '2 4 100-200'. Only used if the bridge is VLAN aware.",
+   optional => 1,
+   type => 'string', format => 'pve-vlan-id-or-range-list',
+},
 bridge_ports => {
description => "Specify the interfaces you want to add to your bridge.",
optional => 1,
@@ -469,6 +474,10 @@ __PACKAGE__->register_method({
if ! grep { $_ eq $iface } @ports;
}
 
+   if ($param->{bridge_vids} && !PVE::Tools::check_list_empty($param->{bridge_vids})) {
+   raise_param_exc({ bridge_vids => "VLAN list items are empty" });
+   }
+
$ifaces->{$iface} = $param;
 
PVE::INotify::write_file('interfaces', $config);
@@ -558,7 +567,11 @@ __PACKAGE__->register_method({
foreach my $k (keys %$param) {
$ifaces->{$iface}->{$k} = $param->{$k};
}
-   
+
+   if ($param->{bridge_vids} && !PVE::Tools::check_list_empty($param->{bridge_vids})) {
+   raise_param_exc({ bridge_vids => "VLAN list items are empty" });
+   }
+
PVE::INotify::write_file('interfaces', $config);
};
 
-- 
2.39.2






Re: [pve-devel] [PATCH v2 common 1/4] fix #3893: network: make bridge vids configurable

2024-07-03 Thread Aaron Lauterer

new patch series (v3) is available
https://lists.proxmox.com/pipermail/pve-devel/2024-July/064388.html

On  2023-06-14  11:30, Aaron Lauterer wrote:

For that we need to add a new format option that checks against valid
VLAN tags and ranges, for example: 2 4 100-200

The check, if the default value should be used, needs to fail not just
when not defined, but also in case it is an empty string.

Signed-off-by: Aaron Lauterer 
---
no changes since v1.

I think replacing the 'defined' check with 'length' should be fine. We
need to also handle the situation that the parameter is defined, but an
empty string. There should be no autovivification happening. If I missed
a side effect, let me know.

For the new format option I went with singular for the name as it only
checks a single VLAN ID/range from the list, 'pve-bridge-vid', but I am
not sure if it wouldn't be better to call it the actual parameter name
'pve-bridge-vids'.

  src/PVE/INotify.pm|  2 +-
  src/PVE/JSONSchema.pm | 32 
  2 files changed, 33 insertions(+), 1 deletion(-)

diff --git a/src/PVE/INotify.pm b/src/PVE/INotify.pm
index bc33a8f..14f40ac 100644
--- a/src/PVE/INotify.pm
+++ b/src/PVE/INotify.pm
@@ -1270,7 +1270,7 @@ sub __interface_to_string {
  
  	if (defined($d->{bridge_vlan_aware})) {

$raw .= "\tbridge-vlan-aware yes\n";
-   my $vlans = defined($d->{bridge_vids}) ? $d->{bridge_vids} : 
"2-4094";
+   my $vlans = length($d->{bridge_vids}) ? $d->{bridge_vids} : 
"2-4094";
$raw .= "\tbridge-vids $vlans\n";
}
$done->{bridge_vlan_aware} = 1;
diff --git a/src/PVE/JSONSchema.pm b/src/PVE/JSONSchema.pm
index 85d47f2..1051a45 100644
--- a/src/PVE/JSONSchema.pm
+++ b/src/PVE/JSONSchema.pm
@@ -78,6 +78,12 @@ register_standard_option('pve-iface', {
  minLength => 2, maxLength => 20,
  });
  
+register_standard_option('pve-bridge-vid', {

+description => "Bridge VLAN ID.",
+type => 'string', format => 'pve-bridge-vid',
+minLength => 1, maxLength => 9,
+});
+
  register_standard_option('pve-storage-id', {
  description => "The storage identifier.",
  type => 'string', format => 'pve-storage-id',
@@ -588,6 +594,32 @@ sub pve_verify_iface {
  return $id;
  }
  
+# bridge vlan id (vids)

+register_format('pve-bridge-vid', \&pve_verify_bridge_vid);
+sub pve_verify_bridge_vid {
+my ($vlan, $noerr) = @_;
+
+my $check_vid = sub {
+   my $id = shift;
+   if ( $id < 2 || $id > 4094) {
+   return undef if $noerr;
+   die "invalid VLAN tag '$id'\n";
+   }
+};
+
+if ($vlan !~ m/^(\d+)([-](\d+))?$/i) {
+   return undef if $noerr;
+   die "invalid VLAN configuration '$vlan'\n";
+}
+$check_vid->($1);
+if ($3) {
+   $check_vid->($3);
+   die "VLAN range must go from lower to higher tag '$vlan'" if $1 > $3 && 
!$noerr;
+}
+
+return $vlan;
+}
+
  # general addresses by name or IP
  register_format('address', \&pve_verify_address);
  sub pve_verify_address {






[pve-devel] applied: [PATCH pve-common] tools: fix syscall mknod()

2024-07-03 Thread Wolfgang Bumiller
applied, thanks

Interesting how this seems to break on other architectures...

On Fri, Jun 07, 2024 at 06:33:13PM GMT, Jing Luo via pve-devel wrote:
> From: Jing Luo 
> To: pve-devel@lists.proxmox.com
> Cc: Jing Luo 
> Subject: [PATCH pve-common] tools: fix syscall mknod()
> Date: Fri,  7 Jun 2024 18:33:13 +0900
> Message-ID: <20240607093933.3882526-2-jing@jing.rocks>
> X-Mailer: git-send-email 2.45.2
> 
> b792e8df81 introduced a bug that can cause this:
> 
> Undefined subroutine &PVE::Syscall::SYS_mknod called at 
> /usr/share/perl5/PVE/Syscall.pm line 11
> 
> It should be mknod, not SYS_mknod. This caused other pve perl lib failing
> to build. I couldn't reproduce this on amd64 build, but I could reproduce this
> on arm64 build; however this didn't seem to fix the issue, unless I revert
> b792e8df81.
> 
> cf: b792e8df81d70cc8fc4bc7d0655313d4a7f40c3d
> Signed-off-by: Jing Luo 
> ---
>  src/PVE/Tools.pm | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/src/PVE/Tools.pm b/src/PVE/Tools.pm
> index 766c809..c2906de 100644
> --- a/src/PVE/Tools.pm
> +++ b/src/PVE/Tools.pm
> @@ -1753,7 +1753,7 @@ sub mkdirat($$$) {
>  
>  sub mknod($$$) {
>  my ($filename, $mode, $dev) = @_;
> -return syscall(PVE::Syscall::SYS_mknod, $filename, int($mode), 
> int($dev)) == 0;
> +return syscall(PVE::Syscall::mknod, $filename, int($mode), int($dev)) == 
> 0;
>  }
>  
>  sub fchownat($) {
> -- 
> 2.45.2





[pve-devel] [PATCH qemu-server] fix #5572: avoid warning about uninitialized value when cloning cloudinit disk

2024-07-03 Thread Fiona Ebner
Some callers like the move disk API endpoint do not pass an explicit
completion argument. This is not an issue in general, because
qemu_drive_mirror_monitor() defaults to 'complete'. However, there was
a string comparison for the cloudinit case that can trigger a warning
about the value being uninitialized.

Signed-off-by: Fiona Ebner 
---
 PVE/QemuServer.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 9fac4a47..cf348cac 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -8280,7 +8280,7 @@ sub clone_disk {
# when cloning multiple disks (e.g. during clone_vm) it might be 
the last disk
# if this is the case, we have to complete any block-jobs still 
there from
# previous drive-mirrors
-   if (($completion eq 'complete') && (scalar(keys %$jobs) > 0)) {
+   if (($completion && $completion eq 'complete') && (scalar(keys 
%$jobs) > 0)) {
qemu_drive_mirror_monitor($vmid, $newvmid, $jobs, $completion, 
$qga);
}
goto no_data_clone;
-- 
2.39.2






[pve-devel] applied: [PATCH v2 qemu] fix #4726: avoid superfluous check in vma code

2024-07-03 Thread Fiona Ebner
Am 14.06.24 um 13:00 schrieb Fiona Ebner:
> The 'status' pointer is dereferenced regardless of the NULL check,
> i.e. 'status->closed' is accessed after the branch with the check.
> Since all callers pass in the address of a struct on the stack, the
> pointer can never be NULL. Remove the superfluous check and add an
> assert instead.
> 
> Signed-off-by: Fiona Ebner 

applied with Fabian's R-b given for v1, because it's the very same
change, but v2 has a slightly improved commit message.





[pve-devel] [PATCH proxmox-firewall 2/3] conntrack: arp: move handling to guest chains

2024-07-03 Thread Stefan Hanreich
In order to make sure we are only affecting VM traffic and no host
interfaces that are bridged, move the rules into a chain that gets
executed inside the guest chain, rather than setting the rules
globally.

Since ether type matches on the respective Ethernet header field, it
doesn't work for packets with a VLAN header. Matching via meta protocol
ensures that VLAN-encapsulated ARP packets are matched as well.
Otherwise, ARP traffic inside VLANs gets dropped because it has
conntrack state invalid.
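The difference can be illustrated with two standalone nftables rules (a sketch, not taken verbatim from the patch): `ether type` inspects the raw EtherType field, which holds 0x8100 for VLAN-tagged frames, while `meta protocol` reports the encapsulated protocol and therefore also matches ARP inside VLAN frames:

```
# does NOT match ARP inside VLAN frames (ether type is 0x8100 there)
ether type arp accept

# matches plain and VLAN-encapsulated ARP alike
meta protocol arp accept
```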

Signed-off-by: Stefan Hanreich 
---
 .../resources/proxmox-firewall.nft| 16 -
 proxmox-firewall/src/firewall.rs  | 11 +++-
 .../integration_tests__firewall.snap  | 64 +++
 3 files changed, 87 insertions(+), 4 deletions(-)

diff --git a/proxmox-firewall/resources/proxmox-firewall.nft 
b/proxmox-firewall/resources/proxmox-firewall.nft
index 537ba88..968fb93 100644
--- a/proxmox-firewall/resources/proxmox-firewall.nft
+++ b/proxmox-firewall/resources/proxmox-firewall.nft
@@ -33,7 +33,9 @@ add chain bridge proxmox-firewall-guests block-ndp-out
 add chain bridge proxmox-firewall-guests allow-ra-out
 add chain bridge proxmox-firewall-guests block-ra-out
 add chain bridge proxmox-firewall-guests do-reject
+add chain bridge proxmox-firewall-guests pre-vm-out
add chain bridge proxmox-firewall-guests vm-out {type filter hook prerouting priority 0; policy accept;}
+add chain bridge proxmox-firewall-guests pre-vm-in
add chain bridge proxmox-firewall-guests vm-in {type filter hook postrouting priority 0; policy accept;}
 
 flush chain inet proxmox-firewall do-reject
@@ -64,7 +66,9 @@ flush chain bridge proxmox-firewall-guests block-ndp-out
 flush chain bridge proxmox-firewall-guests allow-ra-out
 flush chain bridge proxmox-firewall-guests block-ra-out
 flush chain bridge proxmox-firewall-guests do-reject
+flush chain bridge proxmox-firewall-guests pre-vm-out
 flush chain bridge proxmox-firewall-guests vm-out
+flush chain bridge proxmox-firewall-guests pre-vm-in
 flush chain bridge proxmox-firewall-guests vm-in
 
 table inet proxmox-firewall {
@@ -295,16 +299,22 @@ table bridge proxmox-firewall-guests {
 drop
 }
 
+chain pre-vm-out {
+meta protocol != arp ct state vmap { established : accept, related : accept, invalid : drop }
+}
+
 chain vm-out {
 type filter hook prerouting priority 0; policy accept;
-ether type != arp ct state vmap { established : accept, related : accept, invalid : drop }
 iifname vmap @vm-map-out
 }
 
+chain pre-vm-in {
+meta protocol != arp ct state vmap { established : accept, related : accept, invalid : drop }
+meta protocol arp accept
+}
+
 chain vm-in {
 type filter hook postrouting priority 0; policy accept;
-ether type != arp ct state vmap { established : accept, related : accept, invalid : drop }
-ether type arp accept
 oifname vmap @vm-map-in
 }
 }
diff --git a/proxmox-firewall/src/firewall.rs b/proxmox-firewall/src/firewall.rs
index 4c85ea2..4ea81df 100644
--- a/proxmox-firewall/src/firewall.rs
+++ b/proxmox-firewall/src/firewall.rs
@@ -758,7 +758,16 @@ impl Firewall {
 
 let chain = Self::guest_chain(direction, vmid);
 
-commands.append(&mut vec![Add::chain(chain.clone()), Flush::chain(chain)]);
+let pre_chain = match direction {
+Direction::In => "pre-vm-in",
+Direction::Out => "pre-vm-out",
+};
+
+commands.append(&mut vec![
+Add::chain(chain.clone()),
+Flush::chain(chain.clone()),
+Add::rule(AddRule::from_statement(chain, Statement::jump(pre_chain))),
+]);
 
 Ok(())
 }
diff --git a/proxmox-firewall/tests/snapshots/integration_tests__firewall.snap 
b/proxmox-firewall/tests/snapshots/integration_tests__firewall.snap
index 669bad9..e1953e0 100644
--- a/proxmox-firewall/tests/snapshots/integration_tests__firewall.snap
+++ b/proxmox-firewall/tests/snapshots/integration_tests__firewall.snap
@@ -2543,6 +2543,22 @@ expression: "firewall.full_host_fw().expect(\"firewall can be generated\")"
 }
   }
 },
+{
+  "add": {
+"rule": {
+  "family": "bridge",
+  "table": "proxmox-firewall-guests",
+  "chain": "guest-100-in",
+  "expr": [
+{
+  "jump": {
+"target": "pre-vm-in"
+  }
+}
+  ]
+}
+  }
+},
 {
   "add": {
 "chain": {
@@ -2561,6 +2577,22 @@ expression: "firewall.full_host_fw().expect(\"firewall can be generated\")"
 }
   }
 },
+{
+  "add": {
+"rule": {
+  "family": "bridge",
+  "table": "proxmox-firewall-guests",
+  "chain": "guest-100-out",
+  "expr": [
+{
+  "jump": {
+"target": "pre-vm-out"
+  }
+}
+  

[pve-devel] [PATCH proxmox-firewall 1/3] cargo: bump dependencies of proxmox-ve-config

2024-07-03 Thread Stefan Hanreich
Signed-off-by: Stefan Hanreich 
---
 proxmox-ve-config/Cargo.toml | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/proxmox-ve-config/Cargo.toml b/proxmox-ve-config/Cargo.toml
index cc689c8..b0f3434 100644
--- a/proxmox-ve-config/Cargo.toml
+++ b/proxmox-ve-config/Cargo.toml
@@ -20,6 +20,6 @@ serde_json = "1"
 serde_plain = "1"
 serde_with = "2.3.3"
 
-proxmox-schema = "3.1.0"
-proxmox-sys = "0.5.3"
+proxmox-schema = "3.1.1"
+proxmox-sys = "0.5.8"
 proxmox-sortable-macro = "0.1.3"
-- 
2.39.2





[pve-devel] [PATCH proxmox-firewall 3/3] guest: match arp packets via meta

2024-07-03 Thread Stefan Hanreich
When matching via ether type, VLAN packets are not matched. This can
cause ARP packets encapsulated in VLAN frames to be dropped.

Signed-off-by: Stefan Hanreich 
---
 proxmox-firewall/src/firewall.rs   |  2 +-
 .../tests/snapshots/integration_tests__firewall.snap   | 10 --
 2 files changed, 5 insertions(+), 7 deletions(-)

diff --git a/proxmox-firewall/src/firewall.rs b/proxmox-firewall/src/firewall.rs
index 4ea81df..941aa20 100644
--- a/proxmox-firewall/src/firewall.rs
+++ b/proxmox-firewall/src/firewall.rs
@@ -538,7 +538,7 @@ impl Firewall {
 
 // we allow outgoing ARP, except if blocked by the MAC filter above
 let arp_rule = vec![
-Match::new_eq(Payload::field("ether", "type"), Expression::from("arp")).into(),
+Match::new_eq(Meta::new("protocol"), Expression::from("arp")).into(),
 Statement::make_accept(),
 ];
 
diff --git a/proxmox-firewall/tests/snapshots/integration_tests__firewall.snap 
b/proxmox-firewall/tests/snapshots/integration_tests__firewall.snap
index e1953e0..40d4405 100644
--- a/proxmox-firewall/tests/snapshots/integration_tests__firewall.snap
+++ b/proxmox-firewall/tests/snapshots/integration_tests__firewall.snap
@@ -2961,9 +2977,8 @@ expression: "firewall.full_host_fw().expect(\"firewall can be generated\")"
   "match": {
 "op": "==",
 "left": {
-  "payload": {
-"protocol": "ether",
-"field": "type"
+  "meta": {
+"key": "protocol"
   }
 },
 "right": "arp"
@@ -3623,9 +3638,8 @@ expression: "firewall.full_host_fw().expect(\"firewall can be generated\")"
   "match": {
 "op": "==",
 "left": {
-  "payload": {
-"protocol": "ether",
-"field": "type"
+  "meta": {
+"key": "protocol"
   }
 },
 "right": "arp"
-- 
2.39.2





[pve-devel] applied: [PATCH storage] volume import: assume target API version is at least 9

2024-07-03 Thread Fiona Ebner
Am 10.06.24 um 11:04 schrieb Fiona Ebner:
> The storage API version has been bumped to at least 9 since
> libpve-storage = 7.0-4. If the source node is on Proxmox VE 8, where
> this change will come in, then the target node can be assumed to be
> running either Proxmox VE 8 or, during upgrade, the latest version of
> Proxmox VE 7.4, so it's safe to assume a storage API version of at
> least 9 in all cases.
> 
> As reported by Maximiliano, the fact that the 'apiinfo' call was
> guarded with a quiet eval could lead to strange errors for replication
> on a customer system where an SSH connection could not always be
> established, because the target's API version would fall back to 1.
> Because of that, the '-base' argument would be missing for the import
> call on the target which would in turn lead to an error about the
> target ZFS volume already existing (rather than doing an incremental
> sync).
> 
> Reported-by: Maximiliano Sandoval 
> Signed-off-by: Fiona Ebner 

applied, with Max's R-b, thanks for that!





[pve-devel] applied: [PATCH qemu-server] fix #5572: avoid warning about uninitialized value when cloning cloudinit disk

2024-07-03 Thread Thomas Lamprecht
Am 03/07/2024 um 10:40 schrieb Fiona Ebner:
> Some callers like the move disk API endpoint do not pass an explicit
> completion argument. This is not an issue in general, because
> qemu_drive_mirror_monitor() defaults to 'complete'. However, there was
> a string comparison for the cloudinit case that can trigger a warning
> about the value being uninitialized.
> 
> Signed-off-by: Fiona Ebner 
> ---
>  PVE/QemuServer.pm | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
>

applied, thanks!





[pve-devel] applied: [PATCH qemu-server] autocomplete: backup: also list archives from PBS storages and without compressor extension

2024-07-03 Thread Thomas Lamprecht
Am 14/06/2024 um 13:29 schrieb Fiona Ebner:
> While archives with unknown or undetermined subtype could be shown,
> this is only for autocompletion, so users can still specify those
> manually if required.
> 
> Signed-off-by: Fiona Ebner 
> ---
>  PVE/QemuServer.pm | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
>

applied, thanks!





[pve-devel] [PATCH storage v2] fix-4272: btrfs: add rename feature

2024-07-03 Thread Maximiliano Sandoval
Adds the ability to change the owner of a guest image.

Btrfs does not need special commands to rename a subvolume; this can be
achieved the same way as in Storage/Plugin.pm's rename_volume, taking
special care of the directory structure used by Btrfs.
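The path handling can be sketched in Python. This is illustrative only: it assumes, as the patch's `raw_name_to_dir` helper does, that a raw volume name like `100/vm-100-disk-0.raw` maps onto a per-image subvolume directory without the `.raw` suffix:

```python
import os

def btrfs_rename_paths(basedir, source_volname, target_vmid, target_volname):
    # Hypothetical mirror of the BTRFS plugin's rename path computation.
    def raw_name_to_dir(volname):
        vmid, name = volname.split("/", 1)
        return os.path.join(vmid, name.removesuffix(".raw"))

    old_path = os.path.join(basedir, raw_name_to_dir(source_volname))
    new_path = os.path.join(
        basedir, raw_name_to_dir(f"{target_vmid}/{target_volname}"))
    return old_path, new_path
```

The actual rename is then a plain directory move (no `btrfs subvolume` command needed), failing if `new_path` already exists.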

Signed-off-by: Maximiliano Sandoval 
---
 src/PVE/Storage/BTRFSPlugin.pm | 40 ++
 1 file changed, 40 insertions(+)

diff --git a/src/PVE/Storage/BTRFSPlugin.pm b/src/PVE/Storage/BTRFSPlugin.pm
index 42815cb..9f71d78 100644
--- a/src/PVE/Storage/BTRFSPlugin.pm
+++ b/src/PVE/Storage/BTRFSPlugin.pm
@@ -7,6 +7,7 @@ use base qw(PVE::Storage::Plugin);
 
 use Fcntl qw(S_ISDIR O_WRONLY O_CREAT O_EXCL);
 use File::Basename qw(basename dirname);
+use File::Copy 'move';
 use File::Path qw(mkpath);
 use IO::Dir;
 use POSIX qw(EEXIST);
@@ -618,6 +619,9 @@ sub volume_has_feature {
base => { qcow2 => 1, raw => 1, vmdk => 1 },
current => { qcow2 => 1, raw => 1, vmdk => 1 },
},
+   rename => {
+   current => { raw => 1 },
+   },
 };
 
my ($vtype, $name, $vmid, $basename, $basevmid, $isBase, $format) = $class->parse_volname($volname);
@@ -930,4 +934,40 @@ sub volume_import {
 return "$storeid:$volname";
 }
 
+sub rename_volume {
+my ($class, $scfg, $storeid, $source_volname, $target_vmid, $target_volname) = @_;
+die "no path found\n" if !$scfg->{path};
+
+my (
+   undef,
+   undef,
+   undef,
+   undef,
+   undef,
+   undef,
+   $format
+) = $class->parse_volname($source_volname);
+
+my $ppath = $class->filesystem_path($scfg, $source_volname);
+
+$target_volname = $class->find_free_diskname($storeid, $scfg, $target_vmid, $format, 1)
+   if !$target_volname;
+$target_volname = "$target_vmid/$target_volname";
+
+my $basedir = $class->get_subdir($scfg, 'images');
+
+mkpath "${basedir}/${target_vmid}";
+my $source_dir = raw_name_to_dir($source_volname);
+my $target_dir = raw_name_to_dir($target_volname);
+
+my $old_path = "${basedir}/${source_dir}";
+my $new_path = "${basedir}/${target_dir}";
+
+die "target volume '${target_volname}' already exists\n" if -e $new_path;
+move($old_path, $new_path) or
+   die "rename '$old_path' to '$new_path' failed - $!\n";
+
+return "${storeid}:$target_volname";
+}
+
 1
-- 
2.39.2






[pve-devel] applied: [PATCH qemu-server 1/2] api: add missing use statements

2024-07-03 Thread Thomas Lamprecht
Am 12/06/2024 um 14:15 schrieb Fiona Ebner:
> ---
>  PVE/API2/Qemu.pm | 3 +++
>  1 file changed, 3 insertions(+)
> 
>

applied both patches, thanks!





[pve-devel] applied: [PATCH docs] backup: mention where fleecing can be configured

2024-07-03 Thread Fabian Grünbichler
On April 29, 2024 4:49 pm, Fiona Ebner wrote:
> Reported in the community forum:
> https://forum.proxmox.com/threads/145955/post-658380
> 
> Signed-off-by: Fiona Ebner 
> ---
>  vzdump.adoc | 6 +-
>  1 file changed, 5 insertions(+), 1 deletion(-)
> 
> diff --git a/vzdump.adoc b/vzdump.adoc
> index 79d4bbc..9b1cb61 100644
> --- a/vzdump.adoc
> +++ b/vzdump.adoc
> @@ -148,8 +148,12 @@ sectors will be limited by the speed of the backup 
> target.
>  With backup fleecing, such old data is cached in a fleecing image rather than
>  sent directly to the backup target. This can help guest IO performance and 
> even
>  prevent hangs in certain scenarios, at the cost of requiring more storage 
> space.
> +
>  Use e.g. `vzdump 123 --fleecing enabled=1,storage=local-lvm` to enable backup
> -fleecing, with fleecing images created on the storage `local-lvm`.
> +fleecing, with fleecing images created on the storage `local-lvm`. As always,
> +you can set the option for specific backup jobs, or as a node-wide fallback 
> via
> +the xref:vzdump_configuration[configuration options]. In the UI, fleecing 
> can be
> +configured in the 'Advanced' tab when editing a backup job.
>  
>  The fleecing storage should be a fast local storage, with thin provisioning 
> and
>  discard support. Examples are LVM-thin, RBD, ZFS with `sparse 1` in the 
> storage
> -- 
> 2.39.2
> 
> 
> 





[pve-devel] applied: [PATCH pve-firmware] d/control: add Conflicts and Replaces for ath9k-htc firmware

2024-07-03 Thread Fabian Grünbichler
On May 17, 2024 11:07 am, Fiona Ebner wrote:
> Reported in the community forum [0].
> 
> Both firmware files that are in the package [1]
> 
>> /lib/firmware/ath9k_htc/htc_7010-1.4.0.fw
>> /lib/firmware/ath9k_htc/htc_9271-1.4.0.fw
> 
> seem to have been present since the linux-firmware submodule commit
> 3de1c437 ("Add v1.4.0 firmware for ath9k_htc.") which is present since
> tag 20190717.
> 
> [0]: https://forum.proxmox.com/threads/135758/post-664862
> [1]: https://packages.debian.org/bookworm/all/firmware-ath9k-htc/filelist
> 
> Signed-off-by: Fiona Ebner 
> ---
>  debian/control | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/debian/control b/debian/control
> index 3c21b90..1fc601b 100644
> --- a/debian/control
> +++ b/debian/control
> @@ -11,6 +11,7 @@ Architecture: all
>  Depends: ${misc:Depends},
>  Suggests: linux-image,
>  Conflicts: firmware-amd-graphics,
> +   firmware-ath9k-htc,
> firmware-atheros,
> firmware-bnx2,
> firmware-bnx2x,
> @@ -33,6 +34,7 @@ Conflicts: firmware-amd-graphics,
> firmware-siano,
> firmware-ti-connectivity,
>  Replaces: firmware-amd-graphics,
> +  firmware-ath9k-htc,
>firmware-atheros,
>firmware-bnx2,
>firmware-bnx2x,
> -- 
> 2.39.2
> 
> 
> 





[pve-devel] [PATCH ifupdown2 v2 1/1] fix #5197: do not run scripts ending with .dpkg-{old, new, tmp, dist}

2024-07-03 Thread Stefan Hanreich
This can lead to issues when upgrading from ifupdown to ifupdown2. The
particular issue this fixes occurs in the following scenario:

* Suppose there is a legacy Debian host with ifupdown and ifenslave
  installed that has a bond configured in /etc/network/interfaces.
* ifenslave installs a script /etc/network/if-pre-up.d/ifenslave.
* Now, an upgrade creates a second script
  /etc/network/if-pre-up.d/ifenslave.dpkg-new. As ifupdown executes
  network scripts via run-parts which ignores scripts with . in their
  name, ifenslave.dpkg-new has no effect.
* If the host switches over to ifupdown2 by installing it (removing
  ifupdown, keeping ifenslave) and reboots, the network will not come
  up:
  /etc/network/if-pre-up.d/ifenslave still exists, but is ignored
  by ifupdown2's bond addon [1]
  /etc/network/if-pre-up.d/ifenslave.dpkg-new is executed by ifupdown2
  because it executes all scripts in /etc/network/if-pre-up.d, even if
  their name contains a dot

This leads to ifreload failing on upgrades, which in turn causes
issues with the networking of upgraded hosts.
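
The filtering difference described above can be modelled in a few lines (a hedged sketch: the suffix list matches the patch below, while the function name `runnable_hooks` is made up for illustration):

```python
# run-parts skips any script whose name contains a dot, so dpkg backup
# files were harmless under ifupdown; the fix makes ifupdown2 skip at
# least the dpkg backup suffixes.
DPKG_SUFFIXES = (".dpkg-old", ".dpkg-dist", ".dpkg-new", ".dpkg-tmp")

def is_dpkg_file(name):
    return any(name.endswith(suffix) for suffix in DPKG_SUFFIXES)

def runnable_hooks(scripts):
    return [s for s in scripts if not is_dpkg_file(s)]
```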

Also submitted upstream at [2]

[1] 
https://github.com/CumulusNetworks/ifupdown2/blob/ccdc386cfab70703b657fe7c0ffceb95448a9c2b/ifupdown2/addons/bond.py#L45
[2] https://github.com/CumulusNetworks/ifupdown2/pull/304

Signed-off-by: Stefan Hanreich 
Tested-by: Friedrich Weber 
Reviewed-by: Fabian Grünbichler 
---

Changes from v1 -> v2:
* Improved commit message of patch (thanks @Fabian!)

 ...dpkg-files-when-running-hook-scripts.patch | 55 +++
 debian/patches/series |  1 +
 2 files changed, 56 insertions(+)
 create mode 100644 debian/patches/pve/0010-main-ignore-dpkg-files-when-running-hook-scripts.patch

diff --git a/debian/patches/pve/0010-main-ignore-dpkg-files-when-running-hook-scripts.patch b/debian/patches/pve/0010-main-ignore-dpkg-files-when-running-hook-scripts.patch
new file mode 100644
index 000..4298c76
--- /dev/null
+++ b/debian/patches/pve/0010-main-ignore-dpkg-files-when-running-hook-scripts.patch
@@ -0,0 +1,55 @@
+From 1c4ef3796e18625f8f93aa49f071e759120a72ea Mon Sep 17 00:00:00 2001
+From: Stefan Hanreich 
+Date: Tue, 4 Jun 2024 16:17:54 +0200
+Subject: [PATCH] main: ignore dpkg files when running hook scripts
+
+Currently ifupdown2 executes scripts that are backed up by dpkg (e.g.
+foo.dpkg-old). This can lead to issues with hook scripts getting
+executed after upgrading ifupdown2 or packages that ship hook scripts
+(e.g. ifenslave).
+
+This also brings the behavior of ifupdown2 more in line with the
+behavior of ifupdown. ifupdown used run-parts for executing the hook
+scripts, which ignores dpkg-created files (among others).
+
+Signed-off-by: Stefan Hanreich 
+---
+ ifupdown2/ifupdown/ifupdownmain.py | 4 +++-
+ ifupdown2/ifupdown/utils.py| 6 ++
+ 2 files changed, 9 insertions(+), 1 deletion(-)
+
+diff --git a/ifupdown2/ifupdown/ifupdownmain.py b/ifupdown2/ifupdown/ifupdownmain.py
+index 51f5460..e6622f0 100644
+--- a/ifupdown2/ifupdown/ifupdownmain.py
++++ b/ifupdown2/ifupdown/ifupdownmain.py
+@@ -1540,7 +1540,9 @@ class ifupdownMain:
+ try:
+ module_list = os.listdir(msubdir)
+ for module in module_list:
+-if self.modules.get(module) or module in self.overridden_ifupdown_scripts:
++if (self.modules.get(module)
++or module in self.overridden_ifupdown_scripts
++or utils.is_dpkg_file(module)):
+ continue
+ self.script_ops[op].append(msubdir + '/' + module)
+ except Exception:
+diff --git a/ifupdown2/ifupdown/utils.py b/ifupdown2/ifupdown/utils.py
+index 05c7e48..3085e82 100644
+--- a/ifupdown2/ifupdown/utils.py
++++ b/ifupdown2/ifupdown/utils.py
+@@ -212,6 +212,12 @@ class utils():
+ # what we have in the cache (data retrieved via a netlink dump by
+ # nlmanager). nlmanager return all macs in lower-case
+ 
++_dpkg_suffixes = (".dpkg-old", ".dpkg-dist", ".dpkg-new", ".dpkg-tmp")
++
++@staticmethod
++def is_dpkg_file(name):
++return any(name.endswith(suffix) for suffix in utils._dpkg_suffixes)
++
+ @classmethod
+ def importName(cls, modulename, name):
+ """ Import a named object """
+-- 
+2.39.2
+
diff --git a/debian/patches/series b/debian/patches/series
index 557aa7f..d5772c9 100644
--- a/debian/patches/series
+++ b/debian/patches/series
@@ -7,6 +7,7 @@ pve/0006-openvswitch-ovs-ports-condone-regex-exclude-tap-veth.patch
 pve/0007-allow-vlan-tag-inside-vxlan-tunnel.patch
 pve/0008-lacp-bond-remove-bond-min-links-0-warning.patch
 pve/0009-gvgeb-fix-python-interpreter-shebang.patch
+pve/0010-main-ignore-dpkg-files-when-running-hook-scripts.patch
 upstream/0001-add-ipv6-slaac-support-inet6-auto-accept_ra.patch
 upstream/0001-addons-ethtool-add-rx-vlan-filter.patch
 upstream/0001-scheduler-import-traceback.patch
-- 
2.39.2



Re: [pve-devel] [PATCH ifupdown2 1/1] fix #5197: do not run scripts ending with .dpkg-{old, new, tmp, dist}

2024-07-03 Thread Stefan Hanreich
superseded by
https://lists.proxmox.com/pipermail/pve-devel/2024-July/064404.html

On 6/27/24 17:01, Stefan Hanreich wrote:
> This can lead to issue when upgrading from ifupdown to ifupdown2. The
> particular issue this fixes occurs in the following scenario:
> 
> * Suppose there is a legacy Debian host with ifupdown and ifenslave
>   installed that has a bond configured in /etc/network/interfaces.
> * ifenslave installs a script /etc/network/if-pre-up.d/ifenslave.
> * Now, an upgrade creates a second script
>   /etc/network/if-pre-up.d/ifenslave.dpkg-new. As ifupdown executes
>   network scripts via run-parts which ignores scripts with . in their
>   name, ifenslave.dpkg-new has no effect.
> * If the host switches over to ifupdown2 by installing it (removing
>   ifupdown, keeping ifenslave) and reboots, the network will not come
>   up:
>   /etc/network/if-pre-up.d/ifenslave still exists, but is ignored
>   by ifupdown2's bond addon [1]
>   /etc/network/if-pre-up.d/ifenslave.dpkg-new is executed by ifupdown2
>   because it executes all scripts in /etc/network/if-pre-up.d, even if
>   their name contains a dot
> 
> This leads to ifreload failing on upgrades, which in turn causes
> issues with the networking of upgraded hosts.
> 
> Also submitted upstream at [2]
> 
> [1] 
> https://github.com/CumulusNetworks/ifupdown2/blob/ccdc386cfab70703b657fe7c0ffceb95448a9c2b/ifupdown2/addons/bond.py#L45
> [2] https://github.com/CumulusNetworks/ifupdown2/pull/304
> 
> Signed-off-by: Stefan Hanreich 
> ---
>  ...dpkg-files-when-running-hook-scripts.patch | 54 +++
>  debian/patches/series |  1 +
>  2 files changed, 55 insertions(+)
>  create mode 100644 
> debian/patches/pve/0010-main-ignore-dpkg-files-when-running-hook-scripts.patch
> 
> diff --git 
> a/debian/patches/pve/0010-main-ignore-dpkg-files-when-running-hook-scripts.patch
>  
> b/debian/patches/pve/0010-main-ignore-dpkg-files-when-running-hook-scripts.patch
> new file mode 100644
> index 000..eea615f
> --- /dev/null
> +++ 
> b/debian/patches/pve/0010-main-ignore-dpkg-files-when-running-hook-scripts.patch
> @@ -0,0 +1,54 @@
> +From dbb759a1383cf736a0fa769c5c5827e1e7f8145c Mon Sep 17 00:00:00 2001
> +From: Stefan Hanreich 
> +Date: Tue, 4 Jun 2024 16:17:54 +0200
> +Subject: [PATCH] main: ignore dpkg files when running hook scripts
> +
> +Currently ifupdown2 executes scripts that are backed up by dpkg (e.g.
> +foo.dpkg-old). This can lead to issues with hook scripts getting
> +executed after upgrading ifupdown2 via dpkg, even though they should
> +not be executed.
> +
> +This also brings ifupdown2 closer on par with the behavior of
> +ifupdown, which did not execute hook scripts with dpkg suffixes.
> +
> +Signed-off-by: Stefan Hanreich 
> +---
> + ifupdown2/ifupdown/ifupdownmain.py | 4 +++-
> + ifupdown2/ifupdown/utils.py| 6 ++
> + 2 files changed, 9 insertions(+), 1 deletion(-)
> +
> +diff --git a/ifupdown2/ifupdown/ifupdownmain.py 
> b/ifupdown2/ifupdown/ifupdownmain.py
> +index 51f5460..e6622f0 100644
> +--- a/ifupdown2/ifupdown/ifupdownmain.py
>  b/ifupdown2/ifupdown/ifupdownmain.py
> +@@ -1540,7 +1540,9 @@ class ifupdownMain:
> + try:
> + module_list = os.listdir(msubdir)
> + for module in module_list:
> +-if self.modules.get(module) or module in 
> self.overridden_ifupdown_scripts:
> ++if (self.modules.get(module)
> ++or module in self.overridden_ifupdown_scripts
> ++or utils.is_dpkg_file(module)):
> + continue
> + self.script_ops[op].append(msubdir + '/' + module)
> + except Exception:
> +diff --git a/ifupdown2/ifupdown/utils.py b/ifupdown2/ifupdown/utils.py
> +index 05c7e48..3085e82 100644
> +--- a/ifupdown2/ifupdown/utils.py
>  b/ifupdown2/ifupdown/utils.py
> +@@ -212,6 +212,12 @@ class utils():
> + # what we have in the cache (data retrieved via a netlink dump by
> + # nlmanager). nlmanager return all macs in lower-case
> + 
> ++_dpkg_suffixes = (".dpkg-old", ".dpkg-dist", ".dpkg-new", ".dpkg-tmp")
> ++
> ++@staticmethod
> ++def is_dpkg_file(name):
> ++return any(name.endswith(suffix) for suffix in utils._dpkg_suffixes)
> ++
> + @classmethod
> + def importName(cls, modulename, name):
> + """ Import a named object """
> +-- 
> +2.39.2
> +
> diff --git a/debian/patches/series b/debian/patches/series
> index 557aa7f..d5772c9 100644
> --- a/debian/patches/series
> +++ b/debian/patches/series
> @@ -7,6 +7,7 @@ 
> pve/0006-openvswitch-ovs-ports-condone-regex-exclude-tap-veth.patch
>  pve/0007-allow-vlan-tag-inside-vxlan-tunnel.patch
>  pve/0008-lacp-bond-remove-bond-min-links-0-warning.patch
>  pve/0009-gvgeb-fix-python-interpreter-shebang.patch
> +pve/0010-main-ignore-dpkg-files-when-running-hook-scripts.patch
>  upstream/0001-add-ipv6-slaac-support-inet6-auto-a

[pve-devel] [PATCH v3 storage] fix #5191: api, cli: implement moving a volume between storages

2024-07-03 Thread Filip Schauer
Add the ability to move a backup, ISO, container template or snippet
between storages and nodes via an API method. Moving a VMA backup to a
Proxmox Backup Server requires the proxmox-vma-to-pbs package to be
installed. Currently only VMA backups can be moved to a Proxmox Backup
Server and moving backups from a Proxmox Backup Server is not yet
supported.

The method can be called from the PVE shell with `pvesm move-volume`:

```
pvesm move-volume <volume> <target storage> [--target-node <target node>] [--delete]
```

For example to move a VMA backup to a Proxmox Backup Server:

```
pvesm move-volume \
local:backup/vzdump-qemu-100-2024_06_25-13_08_56.vma.zst pbs
```

Or move a container template to another node and delete the source:

```
pvesm move-volume \
local:vztmpl/devuan-4.0-standard_4.0_amd64.tar.gz local \
--target-node pvenode2 --delete
```

Or use curl to call the API method:

```
curl https://$APINODE:8006/api2/json/nodes/$SOURCENODE/storage/$SOURCESTORAGE/content/$SOURCEVOLUME \
--insecure --cookie "$(
---
This patch depends on
[PATCH backup-qemu/vma-to-pbs 0/2] add support for notes and logs
https://lists.proxmox.com/pipermail/pbs-devel/2024-May/009445.html

As of the time of writing, the version of proxmox-vma-to-pbs
specified in debian/control does not exist.

Changes since v2:
* Specify permissions for move method
* Add success message to move method
* Limit the move method to non-vdisk volumes
* Check that source and target are not the same in the move method
* Remove the unnecessary movevolume method from pvesm and make the
  move-volume command call the move API method directly
* Fail when trying to move a protected volume with the delete option
  enabled, instead of ignoring the protection
* Change "not yet supported" to "not supported" in messages indicating
  unimplemented features
* Process auxiliary files first when moving a volume locally on a node
* Move a volume instead of copying it when trying to move a volume
  locally on a node with the delete option enabled.
* Use the more general `path` function instead of `filesystem_path` to
  get the path of a volume
* Loosen the required privileges to move an ISO or a container template,
  or when the delete option is not set.
* Move the volume_move sub from PVE::Storage to
  PVE::API2::Storage::Content since it is only used there.
* Explicitly check that storages are path-based in volume_move,
  except when moving a vma to a Proxmox Backup Server

Changes since v1:
* Rename pvesm command to move-volume
* Add a delete option to control whether the source volume should be
  kept
* Move the API method to the POST endpoint of
  /nodes/{node}/storage/{storage}/content/{volume}, replacing the
  experimental copy method that has not been used since its introduction
  in October 2011 883eeea6.
* Implement migrating volumes between nodes

 debian/control  |   1 +
 src/PVE/API2/Storage/Content.pm | 245 +++-
 src/PVE/CLI/pvesm.pm|   2 +
 src/PVE/Storage.pm  |  74 ++
 src/PVE/Storage/Plugin.pm   | 106 ++
 5 files changed, 342 insertions(+), 86 deletions(-)

diff --git a/debian/control b/debian/control
index d7afa98..d45b1f6 100644
--- a/debian/control
+++ b/debian/control
@@ -42,6 +42,7 @@ Depends: ceph-common (>= 12.2~),
  nfs-common,
  proxmox-backup-client (>= 2.1.10~),
  proxmox-backup-file-restore,
+ proxmox-vma-to-pbs (>= 0.0.2),
  pve-cluster (>= 5.0-32),
  smartmontools,
  smbclient,
diff --git a/src/PVE/API2/Storage/Content.pm b/src/PVE/API2/Storage/Content.pm
index fe0ad4a..7d789d8 100644
--- a/src/PVE/API2/Storage/Content.pm
+++ b/src/PVE/API2/Storage/Content.pm
@@ -2,7 +2,9 @@ package PVE::API2::Storage::Content;
 
 use strict;
 use warnings;
-use Data::Dumper;
+
+use File::Basename;
+use File::Copy qw(copy move);
 
 use PVE::SafeSyslog;
 use PVE::Cluster;
@@ -483,15 +485,173 @@ __PACKAGE__->register_method ({
return $upid;
 }});
 
+sub volume_move {
+my ($cfg, $source_volid, $target_storeid, $delete) = @_;
+
+my ($source_storeid, $source_volname) = PVE::Storage::parse_volume_id($source_volid, 0);
+
+die "source and target storage cannot be the same\n" if ($source_storeid eq $target_storeid);
+
+PVE::Storage::activate_storage($cfg, $source_storeid);
+my $source_scfg = PVE::Storage::storage_config($cfg, $source_storeid);
+my $source_plugin = PVE::Storage::Plugin->lookup($source_scfg->{type});
+my ($vtype) = $source_plugin->parse_volname($source_volname);
+
+die "source storage '$source_storeid' does not support volumes of type '$vtype'\n"
+   if !$source_scfg->{content}->{$vtype};
+
+PVE::Storage::activate_storage($cfg, $target_storeid);
+my $target_scfg = PVE::Storage::storage_config($cfg, $target_storeid);
+die "target storage '$target_storeid' does not support volumes of type '$vtype'\n"
+   if !$target_scfg->{content}->{$vtype};
+
+   

Re: [pve-devel] [PATCH v2 storage] fix #5191: api, cli: implement moving a volume between storages

2024-07-03 Thread Filip Schauer


On 26/06/2024 11:58, Fabian Grünbichler wrote:

>> @@ -1613,6 +1615,15 @@ sub volume_export {
>>  run_command(['tar', @COMMON_TAR_FLAGS, '-cf', '-', '-C', $file, '.'],
>>      output => '>&'.fileno($fh));
>>  return;
>> +   } elsif ($format eq 'backup+size') {
> 
> nit: vma+size would be more in line with our current naming scheme
> here.. even if it contains the extra attributes, 'backup' is quite
> generic..



It is generic because it is not limited to VMA backups. This covers any
kind of backup.

Superseded by:
https://lists.proxmox.com/pipermail/pve-devel/2024-July/064406.html



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH storage] fix-4272: btrfs: add rename feature

2024-07-03 Thread Aaron Lauterer




On 2024-07-03 14:32, Maximiliano Sandoval wrote:

Aaron Lauterer  writes:


Works overall. I did not test edge cases like working around the BTRFS plugin to
have qcow2 files instead of the raw files in a subvol.


Thanks for testing!


[...]

I am not too familiar with how BTRFS and our plugin works, but looking at other
functions, like 'alloc_image' or 'clone_image' it seems that we do have checks
regarding the format in place. If it is not a 'raw' or 'subvol' we call the
SUPER from the main Plugin.pm instead of continuing in the BTRFS specific
implementations that use subvols.

This seems to be missing here, but it might be that it is fine if the "move" is
working in any way.


While it is true that others methods call the SUPER method, I didn't
find any other implementation using SUPER::rename_volume.


Not surprising, as this behavior seems to be a BTRFS specific one for 
the edge case that we are dealing with a regular file (non .raw) instead 
of how the plugin usually wants it. AFAICT this is a safety mechanism, 
just in case.


For example (IIRC): on BTRFS the layout is usually:
images/{vmid}/vm-{vmid}-disk-X/disk.raw

where the `vm-{vmid}-disk-X` part is a BTRFS subvolume. But it would 
also be possible that someone (manually) builds it like a regular 
directory based storage plugin:


images/{vmid}/vm-{vmid}-disk-X.qcow2

where everything is just a path/file. How well does the rename_volume 
code handle this? Or should it call the SUPER method?
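
For reference, the two layouts being contrasted can be written out concretely (a hypothetical sketch of the naming scheme only; the real plugin derives paths via `parse_volname`):

```python
def volume_disk_path(basedir, vmid, n, fmt):
    """Path for a disk under the two layouts discussed above."""
    if fmt == "raw":
        # BTRFS plugin layout: vm-<vmid>-disk-<n> is a subvolume
        return f"{basedir}/images/{vmid}/vm-{vmid}-disk-{n}/disk.raw"
    # plain directory-storage layout, e.g. a manually created qcow2
    return f"{basedir}/images/{vmid}/vm-{vmid}-disk-{n}.{fmt}"
```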






[pve-devel] applied: [PATCH v2 qemu-server 3/4] backup: prepare: remove outdated QEMU version check

2024-07-03 Thread Fabian Grünbichler
On May 28, 2024 10:50 am, Fiona Ebner wrote:
> In Proxmox VE 8, the oldest supported QEMU version is 8.0, so a check
> for version 4.0.1 is not required anymore.
> 
> Signed-off-by: Fiona Ebner 
> ---
> 
> New in v2.
> 
>  PVE/VZDump/QemuServer.pm | 4 
>  1 file changed, 4 deletions(-)
> 
> diff --git a/PVE/VZDump/QemuServer.pm b/PVE/VZDump/QemuServer.pm
> index 8c97ee62..5248c6eb 100644
> --- a/PVE/VZDump/QemuServer.pm
> +++ b/PVE/VZDump/QemuServer.pm
> @@ -90,10 +90,6 @@ sub prepare {
>   if (!$volume->{included}) {
>   $self->loginfo("exclude disk '$name' '$volid' ($volume->{reason})");
>   next;
> - } elsif ($self->{vm_was_running} && $volume_config->{iothread} &&
> -  !PVE::QemuServer::Machine::runs_at_least_qemu_version($vmid, 
> 4, 0, 1)) {
> - die "disk '$name' '$volid' (iothread=on) can't use backup feature 
> with running QEMU " .
> - "version < 4.0.1! Either set backup=no for this drive or 
> upgrade QEMU and restart VM\n";
>   } else {
>   my $log = "include disk '$name' '$volid'";
>   if (defined(my $size = $volume_config->{size})) {
> -- 
> 2.39.2
> 
> 
> 





[pve-devel] applied: [PATCH v2 qemu-server 2/4] migration: handle replication: remove outdated and inaccurate check for QEMU version

2024-07-03 Thread Fabian Grünbichler
On May 28, 2024 10:50 am, Fiona Ebner wrote:
> In Proxmox VE 8, the oldest supported QEMU version is 8.0, so a
> check for version 4.2 is not required anymore. The check was also
> wrong, because it checked the installed version and not the currently
> running one.
> 
> Signed-off-by: Fiona Ebner 
> ---
> 
> New in v2.
> 
>  PVE/QemuMigrate.pm | 6 --
>  1 file changed, 6 deletions(-)
> 
> diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
> index d7ee4a5b..34fc46ee 100644
> --- a/PVE/QemuMigrate.pm
> +++ b/PVE/QemuMigrate.pm
> @@ -544,12 +544,6 @@ sub handle_replication {
>   if $self->{opts}->{remote};
>  
>  if ($self->{running}) {
> -
> - my $version = PVE::QemuServer::kvm_user_version();
> - if (!min_version($version, 4, 2)) {
> - die "can't live migrate VM with replicated volumes, pve-qemu to old 
> (< 4.2)!\n"
> - }
> -
>   my @live_replicatable_volumes = $self->filter_local_volumes('online', 
> 1);
>   foreach my $volid (@live_replicatable_volumes) {
>   my $drive = $local_volumes->{$volid}->{drivename};
> -- 
> 2.39.2
> 
> 
> 





Re: [pve-devel] [PATCH v2 qemu-server 1/4] migration: avoid crash with heavy IO on local VM disk

2024-07-03 Thread Fabian Grünbichler
On May 28, 2024 10:50 am, Fiona Ebner wrote:
> There is a possibility that the drive-mirror job is not yet done when
> the migration wants to inactivate the source's blockdrives:
> 
>> bdrv_co_write_req_prepare: Assertion `!(bs->open_flags & BDRV_O_INACTIVE)' 
>> failed.
> 
> This can be prevented by using the 'write-blocking' copy mode (also
> called active mode) for the mirror. However, with active mode, the
> guest write speed is limited by the synchronous writes to the mirror
> target. For this reason, a way to start out in the faster 'background'
> mode and later switch to active mode was introduced in QEMU 8.2.
> 
> The switch is done once the mirror job for all drives is ready to be
> completed to reduce the time spent where guest IO is limited.
> 
> Reported rarely, but steadily over the years:
> https://forum.proxmox.com/threads/78954/post-353651
> https://forum.proxmox.com/threads/78954/post-380015
> https://forum.proxmox.com/threads/100020/post-431660
> https://forum.proxmox.com/threads/111831/post-482425
> https://forum.proxmox.com/threads/111831/post-499807
> https://forum.proxmox.com/threads/137849/
> 
> Signed-off-by: Fiona Ebner 
> ---
> 
> Changes in v2:
> * check for running QEMU version instead of installed version
> 
>  PVE/QemuMigrate.pm|  8 ++
>  PVE/QemuServer.pm | 41 +++
>  test/MigrationTest/QemuMigrateMock.pm |  6 
>  3 files changed, 55 insertions(+)
> 
> diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
> index 33d5b2d1..d7ee4a5b 100644
> --- a/PVE/QemuMigrate.pm
> +++ b/PVE/QemuMigrate.pm
> @@ -1145,6 +1145,14 @@ sub phase2 {
>   $self->log('info', "$drive: start migration to $nbd_uri");
>   PVE::QemuServer::qemu_drive_mirror($vmid, $drive, $nbd_uri, $vmid, 
> undef, $self->{storage_migration_jobs}, 'skip', undef, $bwlimit, $bitmap);
>   }
> +
> + if (PVE::QemuServer::Machine::runs_at_least_qemu_version($vmid, 8, 2)) {
> + $self->log('info', "switching mirror jobs to actively synced mode");
> + PVE::QemuServer::qemu_drive_mirror_switch_to_active_mode(
> + $vmid,
> + $self->{storage_migration_jobs},
> + );
> + }
>  }
>  
>  $self->log('info', "starting online/live migration on $migrate_uri");
> diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
> index 5df0c96d..d472e805 100644
> --- a/PVE/QemuServer.pm
> +++ b/PVE/QemuServer.pm
> @@ -8122,6 +8122,47 @@ sub qemu_blockjobs_cancel {
>  }
>  }
>  
> +# Callers should version guard this (only available with a binary >= QEMU 
> 8.2)
> +sub qemu_drive_mirror_switch_to_active_mode {
> +my ($vmid, $jobs) = @_;
> +
> +my $switching = {};
> +
> +for my $job (sort keys $jobs->%*) {
> + print "$job: switching to actively synced mode\n";
> +
> + eval {
> + mon_cmd(
> + $vmid,
> + "block-job-change",
> + id => $job,
> + type => 'mirror',
> + 'copy-mode' => 'write-blocking',
> + );
> + $switching->{$job} = 1;
> + };
> + die "could not switch mirror job $job to active mode - $@\n" if $@;
> +}
> +
> +while (1) {
> + my $stats = mon_cmd($vmid, "query-block-jobs");
> +
> + my $running_jobs = {};
> + $running_jobs->{$_->{device}} = $_ for $stats->@*;
> +
> + for my $job (sort keys $switching->%*) {
> + if ($running_jobs->{$job}->{'actively-synced'}) {
> + print "$job: successfully switched to actively synced mode\n";
> + delete $switching->{$job};
> + }
> + }
> +
> + last if scalar(keys $switching->%*) == 0;
> +
> + sleep 1;
> +}

so what could be the cause here for a job not switching? and do we
really want to loop forever if it happens?

> +}
> +
>  # Check for bug #4525: drive-mirror will open the target drive with the same 
> aio setting as the
>  # source, but some storages have problems with io_uring, sometimes even 
> leading to crashes.
>  my sub clone_disk_check_io_uring {
> diff --git a/test/MigrationTest/QemuMigrateMock.pm 
> b/test/MigrationTest/QemuMigrateMock.pm
> index 1efabe24..f5b44424 100644
> --- a/test/MigrationTest/QemuMigrateMock.pm
> +++ b/test/MigrationTest/QemuMigrateMock.pm
> @@ -152,6 +152,9 @@ $MigrationTest::Shared::qemu_server_module->mock(
>   }
>   return;
>  },
> +qemu_drive_mirror_switch_to_active_mode => sub {
> + return;
> +},
>  set_migration_caps => sub {
>   return;
>  },
> @@ -185,6 +188,9 @@ $qemu_server_machine_module->mock(
>   if !defined($vm_status->{runningmachine});
>   return $vm_status->{runningmachine};
>  },
> +runs_at_least_qemu_version => sub {
> + return 1;
> +},
>  );
>  
>  my $ssh_info_module = Test::MockModule->new("PVE::SSHInfo");
> -- 
> 2.39.2
> 
> 
> 

Re: [pve-devel] [RFC v2 qemu-server 4/4] move helper to check running QEMU version out of the 'Machine' module

2024-07-03 Thread Fabian Grünbichler
On May 28, 2024 10:50 am, Fiona Ebner wrote:
> The version of the running QEMU binary is not related to the machine
> version and so it's a bit confusing to have the helper in the
> 'Machine' module. It cannot live in the 'Helpers' module, because that
> would lead to a cyclic inclusion Helpers <-> Monitor. Thus,
> 'QMPHelpers' is chosen as the new home.
> 
> Signed-off-by: Fiona Ebner 

Acked-by: Fabian Grünbichler 

but needs the first patch to be applied, or a re-order to move this
first ;)

> ---
> 
> New in v2.
> 
>  PVE/QemuMigrate.pm|  3 ++-
>  PVE/QemuServer/Machine.pm | 12 
>  PVE/QemuServer/QMPHelpers.pm  | 13 +
>  test/MigrationTest/QemuMigrateMock.pm |  4 
>  test/run_config2command_tests.pl  |  4 ++--
>  5 files changed, 21 insertions(+), 15 deletions(-)
> 
> diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
> index 34fc46ee..e71face4 100644
> --- a/PVE/QemuMigrate.pm
> +++ b/PVE/QemuMigrate.pm
> @@ -30,6 +30,7 @@ use PVE::QemuServer::Helpers qw(min_version);
>  use PVE::QemuServer::Machine;
>  use PVE::QemuServer::Monitor qw(mon_cmd);
>  use PVE::QemuServer::Memory qw(get_current_memory);
> +use PVE::QemuServer::QMPHelpers;
>  use PVE::QemuServer;
>  
>  use PVE::AbstractMigrate;
> @@ -1140,7 +1141,7 @@ sub phase2 {
>   PVE::QemuServer::qemu_drive_mirror($vmid, $drive, $nbd_uri, $vmid, 
> undef, $self->{storage_migration_jobs}, 'skip', undef, $bwlimit, $bitmap);
>   }
>  
> - if (PVE::QemuServer::Machine::runs_at_least_qemu_version($vmid, 8, 2)) {
> + if (PVE::QemuServer::QMPHelpers::runs_at_least_qemu_version($vmid, 8, 
> 2)) {
>   $self->log('info', "switching mirror jobs to actively synced mode");
>   PVE::QemuServer::qemu_drive_mirror_switch_to_active_mode(
>   $vmid,
> diff --git a/PVE/QemuServer/Machine.pm b/PVE/QemuServer/Machine.pm
> index cc92e7e6..a3917dae 100644
> --- a/PVE/QemuServer/Machine.pm
> +++ b/PVE/QemuServer/Machine.pm
> @@ -161,18 +161,6 @@ sub can_run_pve_machine_version {
>  return 0;
>  }
>  
> -# dies if a) VM not running or not exisiting b) Version query failed
> -# So, any defined return value is valid, any invalid state can be caught by 
> eval
> -sub runs_at_least_qemu_version {
> -my ($vmid, $major, $minor, $extra) = @_;
> -
> -my $v = PVE::QemuServer::Monitor::mon_cmd($vmid, 'query-version');
> -die "could not query currently running version for VM $vmid\n" if 
> !defined($v);
> -$v = $v->{qemu};
> -
> -return PVE::QemuServer::Helpers::version_cmp($v->{major}, $major, 
> $v->{minor}, $minor, $v->{micro}, $extra) >= 0;
> -}
> -
>  sub qemu_machine_pxe {
>  my ($vmid, $conf) = @_;
>  
> diff --git a/PVE/QemuServer/QMPHelpers.pm b/PVE/QemuServer/QMPHelpers.pm
> index d3a52327..0269ea46 100644
> --- a/PVE/QemuServer/QMPHelpers.pm
> +++ b/PVE/QemuServer/QMPHelpers.pm
> @@ -3,6 +3,7 @@ package PVE::QemuServer::QMPHelpers;
>  use warnings;
>  use strict;
>  
> +use PVE::QemuServer::Helpers;
>  use PVE::QemuServer::Monitor qw(mon_cmd);
>  
>  use base 'Exporter';
> @@ -45,4 +46,16 @@ sub qemu_objectdel {
>  return 1;
>  }
>  
> +# dies if a) VM not running or not exisiting b) Version query failed
> +# So, any defined return value is valid, any invalid state can be caught by 
> eval
> +sub runs_at_least_qemu_version {
> +my ($vmid, $major, $minor, $extra) = @_;
> +
> +my $v = PVE::QemuServer::Monitor::mon_cmd($vmid, 'query-version');
> +die "could not query currently running version for VM $vmid\n" if 
> !defined($v);
> +$v = $v->{qemu};
> +
> +return PVE::QemuServer::Helpers::version_cmp($v->{major}, $major, 
> $v->{minor}, $minor, $v->{micro}, $extra) >= 0;
> +}
> +
>  1;
> diff --git a/test/MigrationTest/QemuMigrateMock.pm 
> b/test/MigrationTest/QemuMigrateMock.pm
> index f5b44424..11c58c08 100644
> --- a/test/MigrationTest/QemuMigrateMock.pm
> +++ b/test/MigrationTest/QemuMigrateMock.pm
> @@ -188,6 +188,10 @@ $qemu_server_machine_module->mock(
>   if !defined($vm_status->{runningmachine});
>   return $vm_status->{runningmachine};
>  },
> +);
> +
> +my $qemu_server_qmphelpers_module = 
> Test::MockModule->new("PVE::QemuServer::QMPHelpers");
> +$qemu_server_qmphelpers_module->mock(
>  runs_at_least_qemu_version => sub {
>   return 1;
>  },
> diff --git a/test/run_config2command_tests.pl 
> b/test/run_config2command_tests.pl
> index 7212acc4..d48ef562 100755
> --- a/test/run_config2command_tests.pl
> +++ b/test/run_config2command_tests.pl
> @@ -16,7 +16,7 @@ use PVE::SysFSTools;
>  use PVE::QemuConfig;
>  use PVE::QemuServer;
>  use PVE::QemuServer::Monitor;
> -use PVE::QemuServer::Machine;
> +use PVE::QemuServer::QMPHelpers;
>  use PVE::QemuServer::CPUConfig;
>  
>  my $base_env = {
> @@ -472,7 +472,7 @@ sub do_test($) {
>  # check if QEMU version set correctly and test version_cmp
>  (my $qemu_major = get_test_qemu_version()) =~ s/\..*$//;
>  die "runs_at

Re: [pve-devel] [PATCH v2 qemu-server 1/4] migration: avoid crash with heavy IO on local VM disk

2024-07-03 Thread Fiona Ebner
Am 03.07.24 um 15:15 schrieb Fabian Grünbichler:
> On May 28, 2024 10:50 am, Fiona Ebner wrote:
>> +eval {
>> +mon_cmd(
>> +$vmid,
>> +"block-job-change",
>> +id => $job,
>> +type => 'mirror',
>> +'copy-mode' => 'write-blocking',
>> +);
>> +$switching->{$job} = 1;
>> +};
>> +die "could not switch mirror job $job to active mode - $@\n" if $@;
>> +}
>> +
>> +while (1) {
>> +my $stats = mon_cmd($vmid, "query-block-jobs");
>> +
>> +my $running_jobs = {};
>> +$running_jobs->{$_->{device}} = $_ for $stats->@*;
>> +
>> +for my $job (sort keys $switching->%*) {
>> +if ($running_jobs->{$job}->{'actively-synced'}) {
>> +print "$job: successfully switched to actively synced mode\n";
>> +delete $switching->{$job};
>> +}
>> +}
>> +
>> +last if scalar(keys $switching->%*) == 0;
>> +
>> +sleep 1;
>> +}
> 
> so what could be the cause here for a job not switching? and do we
> really want to loop forever if it happens?
> 

That should never happen. The 'block-job-change' QMP command already
succeeded. That means further writes will be done synchronously to the
target. Once the remaining dirty parts have been mirrored by the
background iteration, the actively-synced flag will be set and we break
out of the loop.

We got to the ready condition already before doing the switch, getting
there again is even easier after the switch:
https://gitlab.com/qemu-project/qemu/-/blob/stable-9.0/block/mirror.c?ref_type=heads#L1078


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH v2 qemu-server 1/4] migration: avoid crash with heavy IO on local VM disk

2024-07-03 Thread Fiona Ebner
Am 03.07.24 um 15:44 schrieb Fiona Ebner:
> Am 03.07.24 um 15:15 schrieb Fabian Grünbichler:
>> On May 28, 2024 10:50 am, Fiona Ebner wrote:
>>> +   eval {
>>> +   mon_cmd(
>>> +   $vmid,
>>> +   "block-job-change",
>>> +   id => $job,
>>> +   type => 'mirror',
>>> +   'copy-mode' => 'write-blocking',
>>> +   );
>>> +   $switching->{$job} = 1;
>>> +   };
>>> +   die "could not switch mirror job $job to active mode - $@\n" if $@;
>>> +}
>>> +
>>> +while (1) {
>>> +   my $stats = mon_cmd($vmid, "query-block-jobs");
>>> +
>>> +   my $running_jobs = {};
>>> +   $running_jobs->{$_->{device}} = $_ for $stats->@*;
>>> +
>>> +   for my $job (sort keys $switching->%*) {
>>> +   if ($running_jobs->{$job}->{'actively-synced'}) {
>>> +   print "$job: successfully switched to actively synced mode\n";
>>> +   delete $switching->{$job};
>>> +   }
>>> +   }
>>> +
>>> +   last if scalar(keys $switching->%*) == 0;
>>> +
>>> +   sleep 1;
>>> +}
>>
>> so what could be the cause here for a job not switching? and do we
>> really want to loop forever if it happens?
>>
> 
> That should never happen. The 'block-job-change' QMP command already
> succeeded. That means further writes will be done synchronously to the
> target. Once the remaining dirty parts have been mirrored by the
> background iteration, the actively-synced flag will be set and we break
> out of the loop.
> 
> We got to the ready condition already before doing the switch, getting
> there again is even easier after the switch:
> https://gitlab.com/qemu-project/qemu/-/blob/stable-9.0/block/mirror.c?ref_type=heads#L1078
> 

Well, "should". If a job fails after switching, then we'd actually be
stuck. Will write a v2 that is robust against that.




[pve-devel] [PATCH pve-storage] qcow2 format: enable subcluster allocation by default

2024-07-03 Thread Alexandre Derumier via pve-devel
--- Begin Message ---
extended_l2 is an optimisation to reduce write amplification.
Currently, without it, when a VM writes 4k, a full 64k cluster
needs to be written.

When enabled, the cluster is split into 32 subclusters.

We use a 128k cluster by default, to get 32 * 4k subclusters.

https://blogs.igalia.com/berto/2020/12/03/subcluster-allocation-for-qcow2-images/
https://static.sched.com/hosted_files/kvmforum2020/d9/qcow2-subcluster-allocation.pdf

some stats for 4k randwrite benchmark

Cluster size   Without subclusters   With subclusters
16 KB          5859 IOPS             8063 IOPS
32 KB          5674 IOPS             11107 IOPS
64 KB          2527 IOPS             12731 IOPS
128 KB         1576 IOPS             11808 IOPS
256 KB         976 IOPS              9195 IOPS
512 KB         510 IOPS              7079 IOPS
1 MB           448 IOPS              3306 IOPS
2 MB           262 IOPS              2269 IOPS
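
A quick sanity check of the numbers behind the default (illustrative Python; the 32-subcluster split comes from the qcow2 extended_l2 design, the helper names are made up): with a 128k cluster and extended_l2, allocation granularity drops to 4k, so a 4k first write no longer has to allocate a full cluster.

```python
SUBCLUSTERS = 32  # qcow2 with extended_l2=on splits each cluster into 32

def subcluster_size(cluster_size):
    return cluster_size // SUBCLUSTERS

def first_write_amplification(cluster_size, write_size, extended_l2):
    # Smallest unit that must be allocated (and thus written) when a
    # guest write lands in a brand-new, unallocated cluster.
    unit = subcluster_size(cluster_size) if extended_l2 else cluster_size
    return max(unit, write_size) / write_size

KiB = 1024
assert subcluster_size(128 * KiB) == 4 * KiB  # 32 * 4k subclusters
# 4k write into a fresh 128k cluster, without vs. with extended_l2:
print(first_write_amplification(128 * KiB, 4 * KiB, extended_l2=False))  # 32.0
print(first_write_amplification(128 * KiB, 4 * KiB, extended_l2=True))   # 1.0
```

This ignores metadata (L2 table) writes, but it illustrates why the larger 128k cluster does not hurt small random writes once subclusters are enabled.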

Signed-off-by: Alexandre Derumier 
---
 src/PVE/Storage/Plugin.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/PVE/Storage/Plugin.pm b/src/PVE/Storage/Plugin.pm
index 6444390..31b20fe 100644
--- a/src/PVE/Storage/Plugin.pm
+++ b/src/PVE/Storage/Plugin.pm
@@ -561,7 +561,7 @@ sub preallocation_cmd_option {
die "preallocation mode '$prealloc' not supported by format '$fmt'\n"
if !$QCOW2_PREALLOCATION->{$prealloc};
 
-   return "preallocation=$prealloc";
+   return "preallocation=$prealloc,extended_l2=on,cluster_size=128k";
 } elsif ($fmt eq 'raw') {
$prealloc = $prealloc // 'off';
$prealloc = 'off' if $prealloc eq 'metadata';
-- 
2.39.2


--- End Message ---