Re: [pve-devel] [PATCH v4 qemu-server 11/16] memory: don't use foreach_reversedimm for unplug

2023-02-23 Thread DERUMIER, Alexandre

On Wednesday, 22 February 2023 at 16:19 +0100, Fiona Ebner wrote:
> > dimm0 (node 0): 536870912
> > dimm34 (node 0): 1073741824
> > $VAR1 = {
> >    'base-memory' => 1073741824,
> >    'plugged-memory' => 1610612736
> >  };
> 
> And this patch fixes that :)

Oh, yes. I was suspicious about that too, but never had time to debug.
For years I've had problems unplugging the last dimm; I thought it was
the memory being set as unmovable by the guest :)


I'll put the 2 patches at the beginning of the patch series.



[pve-devel] [PATCH qemu-server] start: make not being able to set polling interval for ballooning non-critical

2023-02-23 Thread Fiona Ebner
The guest will be running, so it's misleading to fail the start task
here. Also ensures that we clean up the hibernation state upon resume
even if there is an error here, which did not happen previously[0].

[0]: https://forum.proxmox.com/threads/123159/

Signed-off-by: Fiona Ebner 
---
 PVE/QemuServer.pm | 17 ++---
 1 file changed, 10 insertions(+), 7 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index a1a8b937..ff1e3c72 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -6053,13 +6053,16 @@ sub vm_start_nolock {
 }
 
if (!defined($conf->{balloon}) || $conf->{balloon}) {
-   mon_cmd(
-   $vmid,
-   'qom-set',
-   path => "machine/peripheral/balloon0",
-   property => "guest-stats-polling-interval",
-   value => 2
-   );
+   eval {
+   mon_cmd(
+   $vmid,
+   'qom-set',
+   path => "machine/peripheral/balloon0",
+   property => "guest-stats-polling-interval",
+   value => 2
+   );
+   };
+   log_warn("could not set polling interval for ballooning - $@") if $@;
 }
 
 if ($resume) {
-- 
2.30.2
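As an aside for readers less familiar with Perl error handling: the fix wraps
the monitor call in an eval block, so a die() inside it ends up in $@ instead
of aborting the start task. A minimal self-contained sketch of the pattern
(log_warn is replaced by a stand-in here):

    sub log_warn { warn "WARN: $_[0]\n" }    # stand-in; qemu-server has its own

    eval {
        die "QMP command failed\n";    # e.g. mon_cmd() throwing
    };
    log_warn("could not set polling interval for ballooning - $@") if $@;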






[pve-devel] [PATCH proxmox-widget-toolkit] Fix Unnecessary parentheses error

2023-02-23 Thread Noel Ullreich
As is, the code will throw an `ERR : line 162 col 22: no-extra-parens -
Unnecessary parentheses around expression. (*)`. Since `data.normalized`
will be evaluated as false if it is null anyway, the nullish coalescing
operator `??` is not needed.

Signed-off-by: Noel Ullreich 
---
 src/window/DiskSmart.js | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/window/DiskSmart.js b/src/window/DiskSmart.js
index b538ea1..834f065 100644
--- a/src/window/DiskSmart.js
+++ b/src/window/DiskSmart.js
@@ -159,7 +159,7 @@ Ext.define('Proxmox.window.DiskSmart', {
{
name: 'real-value',
// FIXME remove with next major release (PBS 3.0)
-   calculate: data => (data.normalized ?? false) ? data.raw : data.value,
+   calculate: data => data.normalized ? data.raw : data.value,
},
{
name: 'real-normalized',
-- 
2.30.2






Re: [pve-devel] [PATCH proxmox-widget-toolkit] Fix Unnecessary parentheses error

2023-02-23 Thread Noel Ullreich
Sorry, didn't see that a patch was sent for this on the 15th already: 
https://lists.proxmox.com/pipermail/pve-devel/2023-February/055798.html.


This patch can be ignored.

On 23-02-2023 11:14, Noel Ullreich wrote:

As is, the code will throw an `ERR : line 162 col 22: no-extra-parens -
Unnecessary parentheses around expression. (*)`. Since `data.normalized`
will be evaluated as false if it is null anyway, the nullish coalescing
operator `??` is not needed.

Signed-off-by: Noel Ullreich 
---
  src/window/DiskSmart.js | 2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/window/DiskSmart.js b/src/window/DiskSmart.js
index b538ea1..834f065 100644
--- a/src/window/DiskSmart.js
+++ b/src/window/DiskSmart.js
@@ -159,7 +159,7 @@ Ext.define('Proxmox.window.DiskSmart', {
{
name: 'real-value',
// FIXME remove with next major release (PBS 3.0)
-   calculate: data => (data.normalized ?? false) ? data.raw : data.value,
+   calculate: data => data.normalized ? data.raw : data.value,
},
{
name: 'real-normalized',






[pve-devel] applied: [PATCH qemu-server] start: make not being able to set polling interval for ballooning non-critical

2023-02-23 Thread Thomas Lamprecht
On 23/02/2023 at 10:49, Fiona Ebner wrote:
> The guest will be running, so it's misleading to fail the start task
> here. Also ensures that we clean up the hibernation state upon resume
> even if there is an error here, which did not happen previously[0].
> 
> [0]: https://forum.proxmox.com/threads/123159/
> 
> Signed-off-by: Fiona Ebner 
> ---
>  PVE/QemuServer.pm | 17 ++---
>  1 file changed, 10 insertions(+), 7 deletions(-)
> 
>

applied, thanks!





Re: [pve-devel] [PATCH qemu-server 1/3] vzdump: Add VM QGA option to skip `fs-freeze`/`fs-thaw` on backup

2023-02-23 Thread Thomas Lamprecht
On 01/02/2023 at 13:59, Christoph Heiss wrote:
> Signed-off-by: Christoph Heiss 
> ---
>  PVE/QemuServer.pm| 8 +++-
>  PVE/VZDump/QemuServer.pm | 5 +
>  2 files changed, 12 insertions(+), 1 deletion(-)
> 
> diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
> index e4d1a70..e409db1 100644
> --- a/PVE/QemuServer.pm
> +++ b/PVE/QemuServer.pm
> @@ -160,7 +160,13 @@ my $agent_fmt = {
>   description => "Run fstrim after moving a disk or migrating the VM.",
>   type => 'boolean',
>   optional => 1,
> - default => 0
> + default => 0,
> +},
> +fsfreeze_thaw => {

Please rename this to:

'freeze-fs-on-backup'


> + description => "Freeze/thaw filesystems on backup for consistency.",

detail/nit: s/filesystems/guest filesystem/

> + type => 'boolean',
> + optional => 1,
> + default => 1,
>  },
>  type => {
>   description => "Select the agent type",
> diff --git a/PVE/VZDump/QemuServer.pm b/PVE/VZDump/QemuServer.pm
> index 0eb5ec6..5ff46f7 100644
> --- a/PVE/VZDump/QemuServer.pm
> +++ b/PVE/VZDump/QemuServer.pm
> @@ -862,6 +862,11 @@ sub qga_fs_freeze {
>   return;
>  }
> 
> +if (!PVE::QemuServer::get_qga_key($self->{vmlist}->{$vmid}, 'fsfreeze_thaw')) {

this is wrong: get_qga_key uses parse_guest_agent, which in turn uses
parse_property_string, and that does not set schema defaults on parse, so
you'd also skip the freeze if the option isn't defined at all
(please negative-test changes too)

Rather do something like:

my $freeze = PVE::QemuServer::get_qga_key($self->{vmlist}->{$vmid}, 'freeze-fs-on-backup') // 1;
if (!$freeze) {
# ...

> + $self->loginfo("skipping guest-agent 'fs-freeze', disabled in VM options");
> + return;
> +}
> +
>  $self->loginfo("issuing guest-agent 'fs-freeze' command");
>  eval { mon_cmd($vmid, "guest-fsfreeze-freeze") };
>  $self->logerr($@) if $@;
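To make the failure mode concrete: for an absent key, get_qga_key() returns
undef, so the plain boolean test above would wrongly skip the freeze. A tiny
standalone sketch (illustrative values only) of the suggested defined-or
default:

    for my $val (undef, 0, 1) {    # unset, explicitly off, explicitly on
        my $freeze = $val // 1;    # default to freezing when the key is unset
        printf "config=%s -> freeze=%d\n", defined($val) ? $val : 'unset', $freeze;
    }
    # prints: config=unset -> freeze=1, config=0 -> freeze=0, config=1 -> freeze=1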







Re: [pve-devel] [PATCH qemu-server 1/3] vzdump: Add VM QGA option to skip `fs-freeze`/`fs-thaw` on backup

2023-02-23 Thread Christoph Heiss
Thanks for the review!

On Thu, Feb 23, 2023 at 11:55:55AM +0100, Thomas Lamprecht wrote:
> On 01/02/2023 at 13:59, Christoph Heiss wrote:
> > Signed-off-by: Christoph Heiss 
> > ---
> >  PVE/QemuServer.pm| 8 +++-
> >  PVE/VZDump/QemuServer.pm | 5 +
> >  2 files changed, 12 insertions(+), 1 deletion(-)
> >
> > diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
> > index e4d1a70..e409db1 100644
> > --- a/PVE/QemuServer.pm
> > +++ b/PVE/QemuServer.pm
> > @@ -160,7 +160,13 @@ my $agent_fmt = {
> > description => "Run fstrim after moving a disk or migrating the VM.",
> > type => 'boolean',
> > optional => 1,
> > -   default => 0
> > +   default => 0,
> > +},
> > +fsfreeze_thaw => {
>
> Please rename this to:
>
> 'freeze-fs-on-backup'
Will do, I wasn't really all that happy with the option name anyway.

>
>
> > +   description => "Freeze/thaw filesystems on backup for consistency.",
>
> detail/nit: s/filesystems/guest filesystem/
Ack.

>
> > +   type => 'boolean',
> > +   optional => 1,
> > +   default => 1,
> >  },
> >  type => {
> > description => "Select the agent type",
> > diff --git a/PVE/VZDump/QemuServer.pm b/PVE/VZDump/QemuServer.pm
> > index 0eb5ec6..5ff46f7 100644
> > --- a/PVE/VZDump/QemuServer.pm
> > +++ b/PVE/VZDump/QemuServer.pm
> > @@ -862,6 +862,11 @@ sub qga_fs_freeze {
> > return;
> >  }
> >
> > +if (!PVE::QemuServer::get_qga_key($self->{vmlist}->{$vmid}, 'fsfreeze_thaw')) {
>
> this is wrong: get_qga_key uses parse_guest_agent, which in turn uses
> parse_property_string, and that does not set schema defaults on parse, so
> you'd also skip the freeze if the option isn't defined at all
> (please negative-test changes too)
Good to know! I don't remember if I tested it with an unchanged/fresh
VM, but I guess not (and it was also some time ago). I'll change it as
suggested below and do more thorough testing of such cases too for the
next spin.

>
> Rather do something like:
>
> my $freeze = PVE::QemuServer::get_qga_key($self->{vmlist}->{$vmid}, 'freeze-fs-on-backup') // 1;
> if (!$freeze) {
> # ...
>
> > +   $self->loginfo("skipping guest-agent 'fs-freeze', disabled in VM options");
> > +   return;
> > +}
> > +
> >  $self->loginfo("issuing guest-agent 'fs-freeze' command");
> >  eval { mon_cmd($vmid, "guest-fsfreeze-freeze") };
> >  $self->logerr($@) if $@;
>
>





Re: [pve-devel] [PATCH docs 3/3] qm: Add section explaining fs-freeze/thaw QGA option

2023-02-23 Thread Thomas Lamprecht
On 01/02/2023 at 13:59, Christoph Heiss wrote:
> Signed-off-by: Christoph Heiss 
> ---
>  qm.adoc | 20 
>  1 file changed, 20 insertions(+)
> 
> diff --git a/qm.adoc b/qm.adoc
> index 8a49283..f029ceb 100644
> --- a/qm.adoc
> +++ b/qm.adoc
> @@ -1061,6 +1061,26 @@ operations that have the potential to write out zeros to the storage:
>  
>  On a thin provisioned storage, this can help to free up unused space.
>  
> +Filesystem freeze/thaw
> +^^

Would title case this and add "on Backup", i.e.: Filesystem Freeze & Thaw on Backup

> +
> +By default, VM filesystems are synced via the 'fs-freeze' command when a backup

I know we're already in the section, but it might help to remind users:
s/command/QEMU Guest Agent Command/

> +is performed, to provide consistency. On Windows guests, this calls into the VSS
> +(volume shadow copy) subsystem.

I'd either move the last sentence to the next paragraph or also mention how it's
done in Linux, as otherwise this IMO reads as it has been put slightly out of place.

> +
> +There might be cases where this needs to be disabled, e.g. when running

as per our technical writing style guide (see intranet wiki) s/e.g./for example/

> +applications which hook into the Windows VSS subsystem themselves. This has
> +been observed to be necessary when running SQL Server, which otherwise might
> +break its own differential backup mode.

I'd reword this a bit, maybe something along the lines of:

"Some applications handle consistent backups themselves by hooking into the
Windows VSS layer, a fs-freeze might then interfere with that. For example
calling fs-freeze whit some SQL Servers triggers VSS to call the SQL Writer
VSS module in a mode that breaks the SQL backup chain for differential backups.

For such setups you can configure {pve} to not issue a freeze and thaw cycle
on backup by setting the `freeze-fs-on-backup` qga option to `0`, which can
be also done via the web UI."

Below might be obsolete if (something like) the above is adopted, so just
commenting two minor nits for completeness' sake:

> +
> +This behavior can be turned off using the 'Freeze/thaw filesystems on backup
> +for consistency' option. With this option disabled, {pve} will skip the

s/skip/skip issuing/

> +'fs-freeze' and 'fs-thaw' commands when running backup jobs.

s/commands/QGA commands/


> +
> +NOTE: Disabling this option can potentially lead to backups with inconsistent
> +filesystems and should therefore only be changed if you know what you are

s/changed/disabled/

> +doing.
> +
>  Troubleshooting
>  ^^^
>  






Re: [pve-devel] [PATCH docs 3/3] qm: Add section explaining fs-freeze/thaw QGA option

2023-02-23 Thread Christoph Heiss
Thanks again, I'll incorporate your suggestions below as appropriate
for the next spin!

On Thu, Feb 23, 2023 at 12:34:22PM +0100, Thomas Lamprecht wrote:
> On 01/02/2023 at 13:59, Christoph Heiss wrote:
> > Signed-off-by: Christoph Heiss 
> > ---
> >  qm.adoc | 20 
> >  1 file changed, 20 insertions(+)
> >
> > diff --git a/qm.adoc b/qm.adoc
> > index 8a49283..f029ceb 100644
> > --- a/qm.adoc
> > +++ b/qm.adoc
> > @@ -1061,6 +1061,26 @@ operations that have the potential to write out zeros to the storage:
> >
> >  On a thin provisioned storage, this can help to free up unused space.
> >
> > +Filesystem freeze/thaw
> > +^^
>
> Would title case this and add "on Backup", i.e.: Filesystem Freeze & Thaw on Backup
>
> > +
> > +By default, VM filesystems are synced via the 'fs-freeze' command when a backup
>
> I know we're already in the section, but it might help to remind users:
> s/command/QEMU Guest Agent Command/
>
> > +is performed, to provide consistency. On Windows guests, this calls into the VSS
> > +(volume shadow copy) subsystem.
>
> I'd either move the last sentence to the next paragraph or also mention how it's
> done in Linux, as otherwise this IMO reads as it has been put slightly out of place.
>
> > +
> > +There might be cases where this needs to be disabled, e.g. when running
>
> as per our technical writing style guide (see intranet wiki) s/e.g./for example/
>
> > +applications which hook into the Windows VSS subsystem themselves. This has
> > +been observed to be necessary when running SQL Server, which otherwise might
> > +break its own differential backup mode.
>
> I'd reword this a bit, maybe something along the lines of:
>
> "Some applications handle consistent backups themselves by hooking into the
> Windows VSS layer, an fs-freeze might then interfere with that. For example,
> calling fs-freeze with some SQL Servers triggers VSS to call the SQL Writer
> VSS module in a mode that breaks the SQL backup chain for differential backups.
>
> For such setups you can configure {pve} to not issue a freeze and thaw cycle
> on backup by setting the `freeze-fs-on-backup` qga option to `0`, which can
> be also done via the web UI."
>
> Below might be obsolete if (something like) the above is adopted, so just
> commenting two minor nits for completeness' sake:
>
> > +
> > +This behavior can be turned off using the 'Freeze/thaw filesystems on backup
> > +for consistency' option. With this option disabled, {pve} will skip the
>
> s/skip/skip issuing/
>
> > +'fs-freeze' and 'fs-thaw' commands when running backup jobs.
>
> s/commands/QGA commands/
>
>
> > +
> > +NOTE: Disabling this option can potentially lead to backups with inconsistent
> > +filesystems and should therefore only be changed if you know what you are
>
> s/changed/disabled/
>
> > +doing.
> > +
> >  Troubleshooting
> >  ^^^
> >
>





Re: [pve-devel] [PATCH v4 container/manager 0/3] fix #3413: Add `Disconnect` option for LXC networks

2023-02-23 Thread Friedrich Weber
As I also missed that feature, I applied the patches to my PVE instance 
with pre-existing containers -- all interfaces stayed up as expected, 
and disconnecting/reconnecting interfaces for running and stopped 
containers via the Web UI worked nicely.


Tested-by: Friedrich Weber 

On 22/02/2023 13:49, Christoph Heiss wrote:

Add a `Disconnect` option for network interfaces on LXC containers, much
like it already exists for VMs. This has been requested in #3413 [0] and
seems useful, especially considering we already support the same thing
for VMs.

One thing to note is that LXC does not seem to support the notion of
setting an interface down. The `flags` property would suggest that this is
possible [1], but AFAICS it does not work. I tried setting the value as
empty and to something else than "up" (since that is really the only
supported option [2][3]), which both had absolutely no effect.

Thus force the host-side link of the container network down and avoid
adding it to the designated bridge if the new option is set, effectively
disconnecting the container network.
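To illustrate the host-side effect (a minimal sketch only; the interface name
is hypothetical and the real implementation lives in the net_tap_plug() path
touched by the first patch):

    my $iface = 'veth100i0';    # hypothetical host-side veth for net0 of CT 100
    # bring the host-side link down instead of plugging it into the bridge
    system('ip', 'link', 'set', 'dev', $iface, 'down') == 0
        or warn "failed to set link $iface down\n";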

The first patch is cleanup only and does not change anything regarding
functionality.

Testing
---
Testing was done by starting a LXC container (w/ and w/o `link_down`
set), checking if the interface has (or not) LOWERLAYERDOWN set inside
the container (`ip address show eth0`) and if packet transit works (or not)
using a simple `ping`. Same thing after toggling the option on the
interface. Further, the interface(s) should (or should not) be listed
in `brctl show`. Same thing was done for hotplugged interfaces to a
running container.

Also tested with `ifreload -a` (thanks Wolfgang!) thrown in, which did
nothing unexpected: If `link_down` was set, interfaces stayed in
LOWERLAYERDOWN and unplugged from the bridge, and stayed UP and plugged
into the bridge when `link_down` was unset.

[0] https://bugzilla.proxmox.com/show_bug.cgi?id=3413
[1] https://linuxcontainers.org/lxc/manpages/man5/lxc.container.conf.5.html#lbAO
[2] https://github.com/lxc/lxc/blob/08f0e769/src/lxc/confile.c#L453-L467
[3] https://github.com/lxc/lxc/blob/08f0e769/src/lxc/confile.c#L5933-L5952

v1: https://lists.proxmox.com/pipermail/pve-devel/2023-February/055762.html
v2: https://lists.proxmox.com/pipermail/pve-devel/2023-February/055795.html
v3: https://lists.proxmox.com/pipermail/pve-devel/2023-February/055839.html

pve-container:

Christoph Heiss (2):
   net: Pass network config directly to net_tap_plug()
   net: Add `link_down` config to allow setting interfaces as disconnected

  src/PVE/LXC.pm| 37 +++--
  src/PVE/LXC/Config.pm |  6 ++
  src/lxcnetaddbr   |  9 +
  3 files changed, 30 insertions(+), 22 deletions(-)

pve-manager:

Christoph Heiss (1):
   lxc: Add `Disconnect` option for network interfaces

  www/manager6/Parser.js  |  3 +++
  www/manager6/lxc/Network.js | 13 +
  2 files changed, 16 insertions(+)

--
2.39.1






[pve-devel] [PATCH v2 qemu-server 1/3] vzdump: Add VM QGA option to skip fs-freeze/-thaw on backup

2023-02-23 Thread Christoph Heiss
Signed-off-by: Christoph Heiss 
---
Changes v1 -> v2:
 * Rename option from 'fsfreeze_thaw' to 'freeze-fs-on-backup'
 * Adapt option description as suggested
 * Fix option check in qga_fs_freeze()

 PVE/QemuServer.pm| 8 +++-
 PVE/VZDump/QemuServer.pm | 6 ++
 2 files changed, 13 insertions(+), 1 deletion(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index e4d1a70..20bdb36 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -160,7 +160,13 @@ my $agent_fmt = {
description => "Run fstrim after moving a disk or migrating the VM.",
type => 'boolean',
optional => 1,
-   default => 0
+   default => 0,
+},
+'freeze-fs-on-backup' => {
+   description => "Freeze/thaw guest filesystems on backup for consistency.",
+   type => 'boolean',
+   optional => 1,
+   default => 1,
 },
 type => {
description => "Select the agent type",
diff --git a/PVE/VZDump/QemuServer.pm b/PVE/VZDump/QemuServer.pm
index 0eb5ec6..30baa46 100644
--- a/PVE/VZDump/QemuServer.pm
+++ b/PVE/VZDump/QemuServer.pm
@@ -862,6 +862,12 @@ sub qga_fs_freeze {
return;
 }

+my $freeze = PVE::QemuServer::get_qga_key($self->{vmlist}->{$vmid}, 'freeze-fs-on-backup') // 1;
+if (!$freeze) {
+   $self->loginfo("skipping guest-agent 'fs-freeze', disabled in VM options");
+   return;
+}
+
 $self->loginfo("issuing guest-agent 'fs-freeze' command");
 eval { mon_cmd($vmid, "guest-fsfreeze-freeze") };
 $self->logerr($@) if $@;
--
2.39.1






[pve-devel] [PATCH v2 manager 2/3] ui: qga: Add option to turn off QGA fs-freeze/-thaw on backup

2023-02-23 Thread Christoph Heiss
Signed-off-by: Christoph Heiss 
---
Changes v1 -> v2:
 * Rename option from 'fsfreeze_thaw' to 'freeze-fs-on-backup'
 * Adapt option descriptions as suggested

 www/manager6/Utils.js |  2 ++
 www/manager6/form/AgentFeatureSelector.js | 31 ++-
 2 files changed, 32 insertions(+), 1 deletion(-)

diff --git a/www/manager6/Utils.js b/www/manager6/Utils.js
index d8c0bf5a..e6c7861a 100644
--- a/www/manager6/Utils.js
+++ b/www/manager6/Utils.js
@@ -475,6 +475,8 @@ Ext.define('PVE.Utils', {
virtio: "VirtIO",
};
displayText = map[value] || Proxmox.Utils.unknownText;
+   } else if (key === 'freeze-fs-on-backup' && PVE.Parser.parseBoolean(value)) {
+   continue;
} else if (PVE.Parser.parseBoolean(value)) {
displayText = Proxmox.Utils.enabledText;
}
diff --git a/www/manager6/form/AgentFeatureSelector.js b/www/manager6/form/AgentFeatureSelector.js
index 0dcc6ecb..651b7124 100644
--- a/www/manager6/form/AgentFeatureSelector.js
+++ b/www/manager6/form/AgentFeatureSelector.js
@@ -21,6 +21,27 @@ Ext.define('PVE.form.AgentFeatureSelector', {
},
disabled: true,
},
+   {
+   xtype: 'proxmoxcheckbox',
+   boxLabel: gettext('Freeze/thaw guest filesystems on backup for consistency'),
+   name: 'freeze-fs-on-backup',
+   reference: 'freeze_fs_on_backup',
+   bind: {
+   disabled: '{!enabled.checked}',
+   },
+   disabled: true,
+   uncheckedValue: '0',
+   defaultValue: '1',
+   },
+   {
+   xtype: 'displayfield',
+   userCls: 'pmx-hint',
+   value: gettext('Freeze/thaw for guest filesystems disabled. '
+   + 'This can lead to inconsistent disk backups.'),
+   bind: {
+   hidden: '{freeze_fs_on_backup.checked}',
+   },
+   },
{
xtype: 'displayfield',
userCls: 'pmx-hint',
@@ -47,12 +68,20 @@ Ext.define('PVE.form.AgentFeatureSelector', {
 ],

 onGetValues: function(values) {
-   var agentstr = PVE.Parser.printPropertyString(values, 'enabled');
+   if (PVE.Parser.parseBoolean(values['freeze-fs-on-backup'])) {
+   delete values['freeze-fs-on-backup'];
+   }
+
+   let agentstr = PVE.Parser.printPropertyString(values, 'enabled');
return { agent: agentstr };
 },

 setValues: function(values) {
let res = PVE.Parser.parsePropertyString(values.agent, 'enabled');
+   if (!Ext.isDefined(res['freeze-fs-on-backup'])) {
+   res['freeze-fs-on-backup'] = 1;
+   }
+
this.callParent([res]);
 },
 });
--
2.39.1






[pve-devel] [PATCH v2 docs 3/3] qm: Add section explaining fs-freeze/-thaw QGA option

2023-02-23 Thread Christoph Heiss
Signed-off-by: Christoph Heiss 
---
Changes v1 -> v2:
 * Incorporate suggestions made by Thomas

 qm.adoc | 22 ++
 1 file changed, 22 insertions(+)

diff --git a/qm.adoc b/qm.adoc
index 8a49283..93ec29d 100644
--- a/qm.adoc
+++ b/qm.adoc
@@ -1061,6 +1061,28 @@ operations that have the potential to write out zeros to the storage:

 On a thin provisioned storage, this can help to free up unused space.

+Filesystem Freeze & Thaw on Backup
+^^
+
+By default, guest filesystems are synced via the 'fs-freeze' QEMU Guest Agent
+Command when a backup is performed, to provide consistency.
+
+On Windows guests, some applications might handle consistent backups themselves
+by hooking into the Windows VSS (Volume Shadow Copy Service) layer, a
+'fs-freeze' then might interfere with that. For example, it has been observed
+that calling 'fs-freeze' with some SQL Servers triggers VSS to call the SQL
+Writer VSS module in a mode that breaks the SQL Server backup chain for
+differential backups.
+
+For such setups you can configure {pve} to not issue a freeze-and-thaw cycle on
+backup by setting the `freeze-fs-on-backup` QGA option to `0`. This can also be
+done via the GUI with the 'Freeze/thaw guest filesystems on backup for
+consistency' option.
+
+NOTE: Disabling this option can potentially lead to backups with inconsistent
+filesystems and should therefore only be disabled if you know what you are
+doing.
+
 Troubleshooting
 ^^^

--
2.39.1






[pve-devel] [PATCH v2 qemu-server/manager/docs 0/3] fix #4140: Add option to disable QGA fs-freeze/-thaw on backup

2023-02-23 Thread Christoph Heiss
TL;DR: On Windows guests, the QEMU guest agent calls into the VSS
(Volume Shadow Copy Service) subsystem, which has the potential to mess up
applications that hook into VSS themselves, like SQL Server, as
described in the bug report [0].

Building upon the proposal in that report, add a (default-enabled)
option to the QGA configuration to disable the behavior manually if
really needed. Due to its potential to wreak havoc, show the user a
warning when disabled.

[0] https://bugzilla.proxmox.com/show_bug.cgi?id=4140

v1: https://lists.proxmox.com/pipermail/pve-devel/2023-February/055656.html

Christoph Heiss (1):
  vzdump: Add VM QGA option to skip fs-freeze/-thaw on backup

 PVE/QemuServer.pm| 8 +++-
 PVE/VZDump/QemuServer.pm | 6 ++
 2 files changed, 13 insertions(+), 1 deletion(-)

Christoph Heiss (1):
  ui: qga: Add option to turn off QGA fs-freeze/-thaw on backup

 www/manager6/Utils.js |  2 ++
 www/manager6/form/AgentFeatureSelector.js | 31 ++-
 2 files changed, 32 insertions(+), 1 deletion(-)

Christoph Heiss (1):
  qm: Add section explaining fs-freeze/-thaw QGA option

 qm.adoc | 22 ++
 1 file changed, 22 insertions(+)

--
2.39.1






[pve-devel] applied-series: [PATCH-SERIES swtpm/qemu-server] Improve swtpm logging

2023-02-23 Thread Thomas Lamprecht
On 18/01/2023 at 13:21, Fiona Ebner wrote:
> There was a recent failure when migrating a production VM
>> kvm: tpm-emulator: Setting the stateblob (type 1) failed with a TPM error 0x3 a parameter is bad
>> kvm: error while loading state for instance 0x0 of device 'tpm-emulator'
>> kvm: load of migration failed: Input/output error
> and it's not clear what exactly triggered it.
> 
> Start collecting logs in /run/qemu-server/-swtpm.log to have some
> information available for any future swtpm failures.
> 
> Also add a few more logs in swtpm itself, to better see what exactly
> triggers the TPM_BAD_PARAMETER (0x3) error in case such a migration
> failure happens again.
> 
> 
> swtpm:
> 
> Fiona Ebner (1):
>   control channel: add error logs upon receiving short input
> 
>  src/swtpm/ctrlchannel.c | 17 +++--
>  1 file changed, 15 insertions(+), 2 deletions(-)
> 
> 
> qemu-server:
> 
> Fiona Ebner (2):
>   swtpm: enable logging
>   swtpm: use start time as prefix for logging
> 
>  PVE/QemuServer.pm | 5 +
>  1 file changed, 5 insertions(+)
> 


applied all three patches, thanks!
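As a pointer for anyone digging into such logs later: the destination above is
wired up via swtpm's `--log` option, roughly like the following sketch (the
concrete option values are assumptions, the actual qemu-server code differs):

    my ($vmid, $starttime) = (100, time());    # hypothetical values
    my $swtpm_cmd = ['swtpm', 'socket'];       # sketch of the base invocation
    push @$swtpm_cmd, '--log', "file=/run/qemu-server/$vmid-swtpm.log,level=1,prefix=$starttime";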





[pve-devel] [PATCH manager] pvereport: add `date -R` to general system info section

2023-02-23 Thread Lukas Wagner
Sometimes it can be quite useful to know when exactly a system report
was generated. Adds the following output quite prominently in the
general system info section:

 # date -R
 Thu, 23 Feb 2023 16:21:12 +0100

Signed-off-by: Lukas Wagner 
---
 PVE/Report.pm | 1 +
 1 file changed, 1 insertion(+)

diff --git a/PVE/Report.pm b/PVE/Report.pm
index d363c15b..c9109dca 100644
--- a/PVE/Report.pm
+++ b/PVE/Report.pm
@@ -29,6 +29,7 @@ my $init_report_cmds = sub {
order => 10,
cmds => [
'hostname',
+   'date -R',
'pveversion --verbose',
'cat /etc/hosts',
'pvesubscription get',
-- 
2.30.2






[pve-devel] applied: [PATCH v2 qemu-server] api: ignore --efitype parameter when creating efidisk for ARM VM

2023-02-23 Thread Thomas Lamprecht
On 30/01/2023 at 13:57, Matthias Heiserer wrote:
> Required because there's one single efi for ARM, and
> the 2m/4m difference doesn't seem to apply.
> 
> Signed-off-by: Matthias Heiserer 
> ---
> 
> Changes from v1:
> Rather than change the efi type in the GUI, ignore it
> in the API
> 
>  PVE/QemuServer.pm   | 2 +-
>  PVE/QemuServer/Drive.pm | 3 ++-
>  2 files changed, 3 insertions(+), 2 deletions(-)
> 
>

applied, thanks!

But I slightly reworded the subject, as the change wasn't really in the
API-specific part, and including the CLI-style hyphen in the efitype parameter
is not really useful when reading the git log. Also moved the description up to
the actual efitype format string schema, as it fits better there than in the
whole efidisk one - especially as this is only relevant for an unofficial niche
architecture.





[pve-devel] applied: [PATCH v2 qemu-server 1/3] vzdump: Add VM QGA option to skip fs-freeze/-thaw on backup

2023-02-23 Thread Thomas Lamprecht
On 23/02/2023 at 15:18, Christoph Heiss wrote:
> Signed-off-by: Christoph Heiss 
> ---
> Changes v1 -> v2:
>  * Rename option from 'fsfreeze_thaw' to 'freeze-fs-on-backup'
>  * Adapt option description as suggested
>  * Fix option check in qga_fs_freeze()
> 
>  PVE/QemuServer.pm| 8 +++-
>  PVE/VZDump/QemuServer.pm | 6 ++
>  2 files changed, 13 insertions(+), 1 deletion(-)
> 
>

applied, thanks!

FWIW, I forgot to write in my review of v1 that having the disabled-in-config
check done before the is-qga-running one would have felt _slightly_ more
sensible to me, as there's no use in checking for availability of a service if
one isn't going to use it anyway - but, this is only a small nit and could be
easily addressed anytime - so mostly mentioning it to get it off my mind ;-)
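Concretely, that reordering would look roughly like this (a sketch only,
reusing the names from the applied patch):

    sub qga_fs_freeze {
        my ($self, $vmid) = @_;

        # config check first: no point probing the agent if freezing is disabled
        my $freeze = PVE::QemuServer::get_qga_key($self->{vmlist}->{$vmid}, 'freeze-fs-on-backup') // 1;
        if (!$freeze) {
            $self->loginfo("skipping guest-agent 'fs-freeze', disabled in VM options");
            return;
        }

        # ... only now check whether the guest agent is running, then freeze ...
    }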





[pve-devel] applied: [PATCH qemu-server] vmconfig_update_net: mtu is not hotpluggable

2023-02-23 Thread Thomas Lamprecht
On 26/11/2022 at 08:20, Alexandre Derumier wrote:
> Signed-off-by: Alexandre Derumier 
> ---
>  PVE/QemuServer.pm | 1 +
>  1 file changed, 1 insertion(+)
> 
>

applied, thanks!





Re: [pve-devel] [PATCH manager] pvereport: add `date -R` to general system info section

2023-02-23 Thread Stefan Sterz
On 2/23/23 16:25, Lukas Wagner wrote:
> Sometimes it can be quite useful to know when exactly a system report
> was generated. Adds the following output quite prominently in the
> general system info section:
>

While I agree that this can be useful sometimes, just as a heads-up,
iptables-save provides that info already:

# Generated by iptables-save v1.8.7 on Thu Feb 23 16:43:26 2023

However, we may not want to rely on that.

>  # date -R
>  Thu, 23 Feb 2023 16:21:12 +0100
> 
> Signed-off-by: Lukas Wagner 
> ---
>  PVE/Report.pm | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/PVE/Report.pm b/PVE/Report.pm
> index d363c15b..c9109dca 100644
> --- a/PVE/Report.pm
> +++ b/PVE/Report.pm
> @@ -29,6 +29,7 @@ my $init_report_cmds = sub {
>   order => 10,
>   cmds => [
>   'hostname',
> + 'date -R',
>   'pveversion --verbose',
>   'cat /etc/hosts',
>   'pvesubscription get',






Re: [pve-devel] [PATCH manager] pvereport: add `date -R` to general system info section

2023-02-23 Thread Lukas Wagner

On 2/23/23 16:54, Stefan Sterz wrote:

While I agree that this can be useful sometimes, just as a heads-up,
iptables-save provides that info already:

# Generated by iptables-save v1.8.7 on Thu Feb 23 16:43:26 2023

However, we may not want to rely on that.



Thanks for the feedback!
I am aware of the iptables output, I just thought it would be nice to have the
"tuple" (hostname, date) right at the start. Additionally, I have submitted
patches for PBS and PMG as well, to have the same info for all three products.

--
- Lukas





[pve-devel] applied: [PATCH v2 qemu-server] fix #4249: make qemu_img_convert respect bwlimit

2023-02-23 Thread Thomas Lamprecht
On 17/11/2022 at 14:18, Leo Nunner wrote:
> Previously, cloning a stopped VM didn't respect bwlimit. Passing the -r
> (ratelimit) parameter to qemu-img convert fixes this issue.
> 
> Signed-off-by: Leo Nunner 
> ---
> Changes from v1:
> - Remove unneeded "undef"s, as to not unnecessarily touch unrelated
>   lines
> - Add test for bwlimit
> 
>  PVE/QemuServer.pm  |  5 ++--
>  test/run_qemu_img_convert_tests.pl | 43 ++
>  2 files changed, 29 insertions(+), 19 deletions(-)
> 
>

applied, thanks!





[pve-devel] [PATCH container] lxc start: warn in case of conflicting lxc.idmap entries

2023-02-23 Thread Friedrich Weber
Users can customize the mapping between host and container uids/gids
by providing `lxc.idmap` entries in the container config. The syntax
is described in lxc.container.conf(5). One source of errors are
conflicting entries for one or more uid/gids. An example:

...
lxc.idmap: u 0 100000 65536
lxc.idmap: u 1000 1000 10
...

Assuming `root:1000:10` is correctly added to /etc/subuid, starting
the container fails with an error that is hard to interpret:

lxc_map_ids: 3701 newuidmap failed to write mapping
"newuidmap: write to uid_map failed: Invalid argument":
newuidmap 67993 0 100000 65536 1000 1000 10

In order to simplify troubleshooting, validate the mapping before
starting the container and print a warning if conflicting uid/gid
mappings are detected. For the above mapping:

lxc.idmap: invalid map entry 'u 1000 1000 10': container uid 1000
is already mapped by entry 'u 0 100000 65536'

The warning appears in the task log and in the output of `pct start`.

A failed validation check only prints a warning instead of erroring
out, to make sure potentially buggy (or outdated) validation code does
not prevent containers from starting.

The validation algorithm is quite straightforward and has quadratic
runtime in the worst case. The kernel allows at most 340 lines in
uid_map (see user_namespaces(7)), which the algorithm should be able
to handle in well under 0.5s, but to be safe, skip validation for more
than 100 entries.

Note that validation does not take /etc/sub{uid,gid} into account,
which, if misconfigured, could still prevent the container from
starting with an error like

"newuidmap: uid range [1000-1010) -> [1000-1010) not allowed"

If needed, validating /etc/sub{uid,gid} could be added in the future.

Signed-off-by: Friedrich Weber 
---
 The warning could of course be even more detailed, e.g., "container uid range
 [1000...1009] is already mapped to [101000...101009] by entry 'u 0 10
 65536'". But this would require a more sophisticated algorithm, and I'm not
 sure if the added complexity is worth it -- happy to add it if wanted, though.

 src/PVE/LXC.pm  |  97 ++
 src/test/Makefile   |   5 +-
 src/test/idmap-test.pm  | 197 
 src/test/run_idmap_tests.pl |  10 ++
 4 files changed, 308 insertions(+), 1 deletion(-)
 create mode 100644 src/test/idmap-test.pm
 create mode 100755 src/test/run_idmap_tests.pl

diff --git a/src/PVE/LXC.pm b/src/PVE/LXC.pm
index 54ee0d9..141f195 100644
--- a/src/PVE/LXC.pm
+++ b/src/PVE/LXC.pm
@@ -2313,6 +2313,93 @@ sub parse_id_maps {
 return ($id_map, $rootuid, $rootgid);
 }
 
+# parse custom id mapping and throw a human-readable error if any
+# host/container uid/gid is mapped twice. note that the algorithm has quadratic
+# worst-case runtime, which may be problematic with >1000 mapping entries.
+# we're unlikely to exceed this because the kernel only allows 340 entries (see
+# user_namespaces(7)), but if we do, this could be implemented more efficiently
+# (see e.g. "interval trees").
+sub validate_id_maps {
+my ($id_map) = @_;
+
+# keep track of already-mapped uid/gid ranges on the container and host
+# sides. each range is an array [mapped_entry, first_id, last_id], where
+# mapped_entry is the original config entry (stored for generating error
+# messages), and [first_id, last_id] is an (inclusive) interval of mapped
+# ids. Ranges are sorted ascendingly by first_id for more efficient
+# traversal, and they must not overlap (this would be an invalid mapping).
+my $sides_ranges = {
+   host => { u => [], g => [] },
+   container => { u => [], g => [] },
+};
+
+# try to update the mapping with the requested range.
+# return (1, undef, undef) on success and (0, $mapped_entry, $id) on failure,
+# where $mapped_entry is the conflicting map entry that already maps $id.
+my $map_range = sub {
+   my ($requested_entry, $mapped_ranges, $requested_first, $requested_count) = @_;
+   my $requested_last = $requested_first + $requested_count - 1;
+
+   # fallback case: insert the requested range at the end
+   my $mapped_length = scalar(@$mapped_ranges);
+   my $insert_before = $mapped_length;
+
+   foreach my $i (0 .. $mapped_length - 1) {
+   my ($mapped_entry, $mapped_first, $mapped_last) = @$mapped_ranges[$i]->@*;
+
+   if ($requested_last < $mapped_first) {
+   # mapped:  [---]
+   # requested: [---]
+   # as the ranges are sorted, the requested range cannot overlap
+   # with any subsequent mapped range, so we can stop iterating
+   $insert_before = $i;
+   last;
+   }
+
+   # two shapes of overlapping ranges are possible.
+   # mapped:  [---]
+   # requested: [...
+   #^- conflicting id
+   return (0, $mapped_entry, $
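For context, the conflict test at the heart of this validation reduces to a
standard interval-overlap check. A standalone sketch (illustration only, using
the uid ranges from the commit message above):

    # two inclusive ranges [a1,a2] and [b1,b2] overlap iff a1 <= b2 && b1 <= a2
    sub ranges_overlap {
        my ($a1, $a2, $b1, $b2) = @_;
        return $a1 <= $b2 && $b1 <= $a2;
    }

    # 'u 0 100000 65536' maps container uids [0, 65535]
    # 'u 1000 1000 10'   maps container uids [1000, 1009] -> conflict
    print ranges_overlap(0, 65535, 1000, 1009) ? "conflict\n" : "ok\n";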

Re: [pve-devel] [PATCH v4 qemu-server 15/16] memory: virtio-mem : implement redispatch retry.

2023-02-23 Thread Fiona Ebner
On 23.02.23 at 16:01, DERUMIER, Alexandre wrote:
> 
>>> +    die "No more available blocks in virtiomem to balance all requested memory\n"
>>> +   if $target_total < 0;
>>
>> I feel like this message is a bit confusing. This can only happen on
>> unplug, right?
yes, unplug only (when the guest OS sets memory blocks as unmovable).
I don't see a case where it doesn't work on hotplug.
> 
>>  And reading that "no more blocks are available" sounds
>> like a paradox then. It's rather that no more blocks can be
>> unplugged.
>>
>> If we really want to, if the $target_total is negative, we could set it
>> to 0 (best to do it at the call-side already) and try to unplug
>> everything else? We won't reach the goal anymore, but we could still get
>> closer to it in some cases.
> 
> yes, good idea
> 
>>  Would need a bit more adaptation to avoid an
>> endless loop: we also need to stop if all devices reached their current
>> goal this round (and no new errors appeared), e.g. balance_virtiomem()
>> could just have that info as its return value.
>>
> 
> I think we should still die with an error message in this case (maybe
> at the end of balance_virtiomem). We still have some memory not removed
> in all cases where $target_total was < 0;
> 

Sure, we still should die at the end.

> something like:
> 
> 
> my sub balance_virtiomem {
> my ($vmid, $virtiomems, $blocksize, $target_total) = @_;
> 
> my $nb_virtiomem = scalar(grep { !$_->{error} } values $virtiomems->%*);
> 
> print "try to balance memory on $nb_virtiomem virtiomems\n";
> 
> my $target_total_err = undef;
> if($target_total < 0) {
> $target_total = 0;
> $target_total_err = 1;
> }
> 
> ...
> while ($total_finished != $nb_virtiomem) {
>...
>  }
> ...
> 
> die "No more virtiomem devices left to try to balance the remaining memory\n"
> if $target_total_err;
> }
> 
> 

Yes, like this we make one final round, which is good enough IMHO.




[pve-devel] [PATCH qemu-server] clone: remove outdated TODO about bandwidth limit

2023-02-23 Thread Fiona Ebner
Respecting bandwidth limit for offline clone was implemented by commit
56d16f16 ("fix #4249: make image clone or conversion respect bandwidth
limit"). It's still not respected for EFI disks, but those are small,
so just ignore it.

Signed-off-by: Fiona Ebner 
---

Would've applied it directly, but am not 100% sure about the EFI disk
- technically, we'd need to apply the limit there too, but seems not
really worth the effort (qemu-img dd doesn't have the option) - so
sending it for discussion.

 PVE/QemuServer.pm | 1 -
 1 file changed, 1 deletion(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index abb6221e..40be44db 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -8087,7 +8087,6 @@ sub clone_disk {
qemu_drive_mirror($vmid, $src_drivename, $newvolid, $newvmid, $sparseinit, $jobs,
$completion, $qga, $bwlimit);
} else {
-   # TODO: handle bwlimits
if ($dst_drivename eq 'efidisk0') {
# the relevant data on the efidisk may be smaller than the source
# e.g. on RBD/ZFS, so we use dd to copy only the amount
-- 
2.30.2






[pve-devel] applied: [PATCH qemu-server] clone: remove outdated TODO about bandwidth limit

2023-02-23 Thread Thomas Lamprecht
On 24/02/2023 at 08:37, Fiona Ebner wrote:
> Respecting bandwidth limit for offline clone was implemented by commit
> 56d16f16 ("fix #4249: make image clone or conversion respect bandwidth
> limit"). It's still not respected for EFI disks, but those are small,
> so just ignore it.
> 
> Signed-off-by: Fiona Ebner 
> ---
> 
> Would've applied it directly, but am not 100% sure about the EFI disk
> - technically, we'd need to apply the limit there too, but seems not
> really worth the effort (qemu-img dd doesn't have the option) - so
> sending it for discussion.
> 
>  PVE/QemuServer.pm | 1 -
>  1 file changed, 1 deletion(-)
> 
>

applied, thanks!

