[pve-devel] [PATCH qemu-server] cloudinit: fix 'pending' api endpoint

2023-05-11 Thread Leo Nunner
This patch partially reverts commit 1b5706cd168fedc5e494e24300069ee4ff25761f
by reintroducing the old format for return values (key, value, pending,
delete), while dropping the "force-delete" return value. Right now, this
endpoint does not conform to its own format, because the return values
are as follows:

{
    key => {
        old => 'foo',
        new => 'bar',
    },
    […]
}

While the format specified is

[
    {
        key => 'baz',
        old => 'foo',
        new => 'bar',
    },
    […]
]

This leads to the endpoint being broken when used through 'qm' and
'pvesh'. Using the API works fine, because the format doesn't get
verified there. Reverting this change brings the advantage that we can
also use PVE::GuestHelpers::format_pending when calling the endpoint
through qm again.

Signed-off-by: Leo Nunner 
---
I'm not sure whether or not this constitutes a breaking change. We are
returning to the old format for this endpoint, and up until now it was
broken anyway (well, for the CLI that is).
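
For illustration, with this revert the endpoint again returns a list in the
documented format, e.g. (hypothetical values):

    [
        { key => 'ciuser', value => 'root', pending => 'demo' },
        { key => 'ciupdate', value => '1' },
        { key => 'sshkeys', delete => 1 },
    ]

which is also the shape that PVE::GuestHelpers::format_pending expects when
the endpoint is called through qm.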

 PVE/API2/Qemu.pm | 48 ++--
 1 file changed, 34 insertions(+), 14 deletions(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 587bb22..dd52fdc 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -1344,16 +1344,23 @@ __PACKAGE__->register_method({
description => "Configuration option name.",
type => 'string',
},
-   old => {
+   value => {
description => "Value as it was used to generate the 
current cloudinit image.",
type => 'string',
optional => 1,
},
-   new => {
+   pending => {
description => "The new pending value.",
type => 'string',
optional => 1,
},
+   delete => {
+   description => "Indicates a pending delete request if 
present and not 0. ",
+   type => 'integer',
+   minimum => 0,
+   maximum => 1,
+   optional => 1,
+   },
},
},
 },
@@ -1365,26 +1372,39 @@ __PACKAGE__->register_method({
 
my $ci = $conf->{cloudinit};
 
-   my $res = {};
+   $conf->{cipassword} = '**' if exists $conf->{cipassword};
+   $ci->{cipassword} = '**' if exists $ci->{cipassword};
+
+   my $res = [];
+
+   # All the values that got added
my $added = delete($ci->{added}) // '';
for my $key (PVE::Tools::split_list($added)) {
-   $res->{$key} = { new => $conf->{$key} };
+   push @$res, { key => $key, pending => $conf->{$key} };
}
 
-   for my $key (keys %$ci) {
-   if (!exists($conf->{$key})) {
-   $res->{$key} = { old => $ci->{$key} };
+   # All already existing values (+ their new value, if it exists)
+   for my $opt (keys %$cloudinitoptions) {
+   next if !$conf->{$opt};
+   next if $added =~ m/$opt/;
+   my $item = {
+   key => $opt,
+   };
+
+   if (my $pending = $ci->{$opt}) {
+   $item->{value} = $pending;
+   $item->{pending} = $conf->{$opt};
} else {
-   $res->{$key} = {
-   old => $ci->{$key},
-   new => $conf->{$key},
-   };
+   $item->{value} = $conf->{$opt};
}
+
+   push @$res, $item;
}
 
-   if (defined(my $pw = $res->{cipassword})) {
-   $pw->{old} = '**' if exists $pw->{old};
-   $pw->{new} = '**' if exists $pw->{new};
+   # Now, we'll find the deleted ones
+   for my $opt (keys %$ci) {
+   next if $conf->{$opt};
+   push @$res, { key => $opt, delete => 1 };
}
 
return $res;
-- 
2.30.2





Re: [pve-devel] [PATCH pve-container 0/1] Proposal for adding zfs dataset mounting possibility

2023-05-11 Thread Konstantin Filippov via pve-devel
No, this dataset is not added as “dir” - it's mounted as a ZFS filesystem inside
the container, so it's not a file. And about the Proxmox-provided ZFS backend - I
understand that it's better to just use the existing option, but there are use
cases where the existing mechanism isn't usable - for example, when you need to
mount a dataset only inside the container, to make the files on it visible only
inside the container.

Best regards,
Konstantin

> On 10 May 2023, at 12:52, Roland wrote:
> 
> what about adding zfs datasets as a general type of storage?
> 
> currently, you need to create a dataset manually and add that as type
> "dir" to proxmox, to be able to use file backed instead of zvol backed VMs
> 
> that feels ugly.
> 
>> On 10.05.23 at 02:08, Konstantin Filippov wrote:
>> As we know, Proxmox has only three possible "categories" of mount points:
>> Proxmox storage provider supplied, block device and bind mount. I've
>> prepared a little patch for the pve-container package which adds a fourth
>> "category" named "zfs" - so with this patch it's possible to add such a ZFS
>> dataset to the container config in the form "mpN: <dataset>,mp=<mount
>> path>". This new type can be useful in some cases - for instance when we
>> need to mount a ZFS dataset in the container but need to keep this dataset
>> not mounted on the host.
>> 
>> Konstantin Filippov (1):
>>   Adding new mount point type named 'zfs' to let configure a ZFS dataset
>> as mount point for LXC container
>> 
>>  src/PVE/LXC.pm| 4 
>>  src/PVE/LXC/Config.pm | 3 ++-
>>  2 files changed, 6 insertions(+), 1 deletion(-)
>> 


Re: [pve-devel] [PATCH pve-container 1/1] Adding new mount point type named 'zfs' to let configure a ZFS dataset as mount point for LXC container

2023-05-11 Thread Fabian Grünbichler
> As we know, Proxmox has only three possible "categories" of mount points:
> Proxmox storage provider supplied, block device and bind mount. I've prepared
> a little patch for the pve-container package which adds a fourth "category"
> named "zfs" - so with this patch it's possible to add such a ZFS dataset to
> the container config in the form "mpN: <dataset>,mp=<mount path>". This new
> type can be useful in some cases - for instance when we need to mount a ZFS
> dataset in the container but need to keep this dataset not mounted on the host.

nit: for single patches, there is no need to add a cover letter. also, please
include relevant information in the commit message!

introducing a new mountpoint type is definitely not the right approach. could 
you give a reason why you want to hide the container contents from the host?

this could be implemented in pve-container (e.g., by mounting the ZFS dataset 
corresponding to a PVE-managed volume like we mount block devices or raw 
images, instead of relying on the fact that they are already mounted and bind 
mounting them... we already do the same for ZFS snapshots in __mountpoint_mount,
for example) and in pve-storage (e.g., by having a flag there that controls 
mounting, or skipping mounting if mountpoint=none or legacy) without the need 
for any other special handling. careful checks to see whether we rely on 
ZFS-backed mountpoints already being mounted anywhere else would still be 
needed (move volume might be one place, for example).
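
A minimal sketch of that direction in __mountpoint_mount, modeled on the
existing ZFS snapshot handling - note that the 'mount-volumes' storage flag
and the dataset lookup are invented here for illustration, not part of any
patch:

    # hypothetical: mount the dataset of a PVE-managed ZFS volume directly,
    # instead of relying on it already being mounted on the host
    if ($scfg->{type} eq 'zfspool' && !$scfg->{'mount-volumes'}) {
        my (undef, $name) = PVE::Storage::parse_volname($storage_cfg, $volid);
        my $dataset = "$scfg->{pool}/$name";
        push @extra_opts, '-o', 'ro' if $readonly;
        PVE::Tools::run_command(['mount', '-t', 'zfs', @extra_opts, $dataset, $mount_path])
            if $mount_path;
        return wantarray ? ($volid, 0, undef) : $volid;
    }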

> Konstantin Filippov via pve-devel wrote on 10.05.2023 02:08 CEST:
> Signed-off-by: Konstantin Filippov 
> ---
>  src/PVE/LXC.pm| 4 
>  src/PVE/LXC/Config.pm | 3 ++-
>  2 files changed, 6 insertions(+), 1 deletion(-)
> 
> diff --git a/src/PVE/LXC.pm b/src/PVE/LXC.pm
> index d138161..30cf48d 100644
> --- a/src/PVE/LXC.pm
> +++ b/src/PVE/LXC.pm
> @@ -1839,6 +1839,10 @@ sub __mountpoint_mount {
>   my ($devpath) = (Cwd::realpath($volid) =~ /^(.*)$/s); # realpath() taints
>   PVE::Tools::run_command(['mount', @extra_opts, $volid, $mount_path]) if $mount_path;
>   return wantarray ? ($volid, 0, $devpath) : $volid;
> +} elsif ($type eq 'zfs') {
> + push @extra_opts, '-o', 'ro' if $readonly;
> + PVE::Tools::run_command(['mount.zfs', @extra_opts, $volid, $mount_path]) if $mount_path;
> + return wantarray ? ($volid, 0, undef) : $volid
>  } elsif ($type eq 'bind') {
>   die "directory '$volid' does not exist\n" if ! -d $volid;
>   bindmount($volid, $parentfd, $last_dir//$rootdir, $mount_path, $readonly, @extra_opts) if $mount_path;
> diff --git a/src/PVE/LXC/Config.pm b/src/PVE/LXC/Config.pm
> index ac9db94..056ec98 100644
> --- a/src/PVE/LXC/Config.pm
> +++ b/src/PVE/LXC/Config.pm
> @@ -1557,7 +1557,8 @@ sub classify_mountpoint {
>   return 'device' if $vol =~ m!^/dev/!;
>   return 'bind';
>  }
> -return 'volume';
> +return 'volume' if $vol =~ m!:.*(vm|subvol)-[0-9]*-disk-[0-9]*!;
> +return 'zfs';
>  }
>  
>  my $__is_volume_in_use = sub {
> -- 
> 2.30.2





[pve-devel] [PATCH manager 0/2] ui: fw: allow selecting network interface for rules using combogrid

2023-05-11 Thread Christoph Heiss
For nodes, VMs and CTs we can show the user a list of available network
interfaces (as that information is available) when creating a new
firewall rule, much like it is already done in similar places.
This adds a lot of convenience when creating interface-specific firewall
rules, as you get a nice summary of the available interfaces and can
simply select one instead of typing it out each time.

The first patch refactors the `BridgeSelector` component a bit into a
new `NetworkInterfaceSelector`, in essence allowing it to be used for any
type of network interface. No functional changes there.

The second patch contains the actual implementation, using the
`NetworkInterfaceSelector` from above for nodes and introducing a new
component (which is mostly based on the former) for VMs/CTs.
For datacenter rules, the simple textbox is kept.

pve-manager:

Christoph Heiss (2):
  ui: fw: generalize `BridgeSelector` into network interface selector
  ui: fw: allow selecting network interface for rules using combogrid

 www/manager6/Makefile |  3 +-
 www/manager6/form/BridgeSelector.js   | 71 -
 www/manager6/form/NetworkInterfaceSelector.js | 79 +++
 .../form/VMNetworkInterfaceSelector.js| 79 +++
 www/manager6/grid/FirewallRules.js| 37 -
 www/manager6/lxc/Config.js|  1 +
 www/manager6/lxc/Network.js   |  3 +-
 www/manager6/qemu/Config.js   |  1 +
 www/manager6/qemu/NetworkEdit.js  |  3 +-
 9 files changed, 199 insertions(+), 78 deletions(-)

--
2.39.2






[pve-devel] [PATCH manager 1/2] ui: fw: generalize `BridgeSelector` into network interface selector

2023-05-11 Thread Christoph Heiss
This makes it optional to specify a specific type of bridge/network and
renames the component to `NetworkInterfaceSelector`, to better fit its
new role.

Allows reusing the component in other places, where the user should be
presented a choice of e.g. all available network interfaces on a node.

No functional changes.

Signed-off-by: Christoph Heiss 
---
 www/manager6/Makefile |  2 +-
 www/manager6/form/BridgeSelector.js   | 71 -
 www/manager6/form/NetworkInterfaceSelector.js | 79 +++
 www/manager6/lxc/Network.js   |  3 +-
 www/manager6/qemu/NetworkEdit.js  |  3 +-
 5 files changed, 84 insertions(+), 74 deletions(-)
 delete mode 100644 www/manager6/form/BridgeSelector.js
 create mode 100644 www/manager6/form/NetworkInterfaceSelector.js

diff --git a/www/manager6/Makefile b/www/manager6/Makefile
index 2b577c8e..a2f5116c 100644
--- a/www/manager6/Makefile
+++ b/www/manager6/Makefile
@@ -20,7 +20,6 @@ JSSRC=
\
form/AgentFeatureSelector.js\
form/BackupModeSelector.js  \
form/BandwidthSelector.js   \
-   form/BridgeSelector.js  \
form/BusTypeSelector.js \
form/CPUModelSelector.js\
form/CacheTypeSelector.js   \
@@ -47,6 +46,7 @@ JSSRC=
\
form/MDevSelector.js\
form/MemoryField.js \
form/NetworkCardSelector.js \
+   form/NetworkInterfaceSelector.js\
form/NodeSelector.js\
form/PCISelector.js \
form/PermPathSelector.js\
diff --git a/www/manager6/form/BridgeSelector.js b/www/manager6/form/BridgeSelector.js
deleted file mode 100644
index 350588cd..
--- a/www/manager6/form/BridgeSelector.js
+++ /dev/null
@@ -1,71 +0,0 @@
-Ext.define('PVE.form.BridgeSelector', {
-extend: 'Proxmox.form.ComboGrid',
-alias: ['widget.PVE.form.BridgeSelector'],
-
-bridgeType: 'any_bridge', // bridge, OVSBridge or any_bridge
-
-store: {
-   fields: ['iface', 'active', 'type'],
-   filterOnLoad: true,
-   sorters: [
-   {
-   property: 'iface',
-   direction: 'ASC',
-   },
-   ],
-},
-valueField: 'iface',
-displayField: 'iface',
-listConfig: {
-   columns: [
-   {
-   header: gettext('Bridge'),
-   dataIndex: 'iface',
-   hideable: false,
-   width: 100,
-   },
-   {
-   header: gettext('Active'),
-   width: 60,
-   dataIndex: 'active',
-   renderer: Proxmox.Utils.format_boolean,
-   },
-   {
-   header: gettext('Comment'),
-   dataIndex: 'comments',
-   renderer: Ext.String.htmlEncode,
-   flex: 1,
-   },
-   ],
-},
-
-setNodename: function(nodename) {
-   var me = this;
-
-   if (!nodename || me.nodename === nodename) {
-   return;
-   }
-
-   me.nodename = nodename;
-
-   me.store.setProxy({
-   type: 'proxmox',
-   url: '/api2/json/nodes/' + me.nodename + '/network?type=' +
-   me.bridgeType,
-   });
-
-   me.store.load();
-},
-
-initComponent: function() {
-   var me = this;
-
-   var nodename = me.nodename;
-   me.nodename = undefined;
-
-me.callParent();
-
-   me.setNodename(nodename);
-},
-});
-
diff --git a/www/manager6/form/NetworkInterfaceSelector.js b/www/manager6/form/NetworkInterfaceSelector.js
new file mode 100644
index ..4c59b73e
--- /dev/null
+++ b/www/manager6/form/NetworkInterfaceSelector.js
@@ -0,0 +1,79 @@
+Ext.define('PVE.form.NetworkInterfaceSelector', {
+extend: 'Proxmox.form.ComboGrid',
+alias: ['widget.PVE.form.NetworkInterfaceSelector'],
+
+// Any of 'bridge, bond, eth, alias, vlan, OVSBridge, OVSBond, OVSPort, OVSIntPort, any_bridge'
+// By default, all network interfaces are shown
+networkType: undefined,
+
+store: {
+   fields: ['iface', 'active', 'type'],
+   filterOnLoad: true,
+   sorters: [
+   {
+   property: 'iface',
+   direction: 'ASC',
+   },
+   ],
+},
+valueField: 'iface',
+displayField: 'iface',
+
+setNodename: function(nodename) {
+   var me = this;
+
+   if (!nodename || me.nodename === nodename) {
+   return;
+   }
+
+   me.nodename = nodename;
+
+   const type = me.networkType ? `?type=${me.networkType}` : '';
+
+   me.store.setProxy({
+

[pve-devel] [PATCH manager 2/2] ui: fw: allow selecting network interface for rules using combogrid

2023-05-11 Thread Christoph Heiss
For nodes, VMs and CTs we can show the user a list of available network
interfaces (as that information is available) when creating a new
firewall rule, much like it is already done in similar places.
This adds a lot of convenience when creating interface-specific firewall
rules, as you get a nice summary of the available interfaces and can
simply select one instead of typing it out each time.

Nodes can use the new `NetworkInterfaceSelector`; for VMs and CTs a new
component is needed, as the VM/CT config needs to be parsed
appropriately. It's mostly modeled after the `NetworkInterfaceSelector`
component and pretty straightforward.
For datacenter rules, the simple textbox is kept.

Signed-off-by: Christoph Heiss 
---
Note: iptables(8) allows two wildcards for the interface, `!` and `+`.
For VMs and CTs this cannot be specified currently anyway, as the API
only allows /^net\d+$/. For nodes, since they accept any arbitrary
string as interface name, this possibility to specify a wildcard for the
interface gets essentially lost.

I guess we could still allow users to input any strings if they want -
is that something that should be possible (using the GUI)? IOW, do we
want to allow that?
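
For reference, the two interface matchers in question, in plain iptables
syntax:

    # '+' matches any interface name starting with the given prefix
    iptables -A INPUT -i net+ -j ACCEPT
    # '!' negates the interface match
    iptables -A INPUT ! -i lo -j DROP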

 www/manager6/Makefile |  1 +
 .../form/VMNetworkInterfaceSelector.js| 79 +++
 www/manager6/grid/FirewallRules.js| 37 -
 www/manager6/lxc/Config.js|  1 +
 www/manager6/qemu/Config.js   |  1 +
 5 files changed, 115 insertions(+), 4 deletions(-)
 create mode 100644 www/manager6/form/VMNetworkInterfaceSelector.js

diff --git a/www/manager6/Makefile b/www/manager6/Makefile
index a2f5116c..57ba331b 100644
--- a/www/manager6/Makefile
+++ b/www/manager6/Makefile
@@ -71,6 +71,7 @@ JSSRC=
\
form/UserSelector.js\
form/VLanField.js   \
form/VMCPUFlagSelector.js   \
+   form/VMNetworkInterfaceSelector.js  \
form/VMSelector.js  \
form/VNCKeyboardSelector.js \
form/ViewSelector.js\
diff --git a/www/manager6/form/VMNetworkInterfaceSelector.js b/www/manager6/form/VMNetworkInterfaceSelector.js
new file mode 100644
index ..fbe631ba
--- /dev/null
+++ b/www/manager6/form/VMNetworkInterfaceSelector.js
@@ -0,0 +1,79 @@
+Ext.define('PVE.form.VMNetworkInterfaceSelector', {
+extend: 'Proxmox.form.ComboGrid',
+alias: 'widget.PVE.form.VMNetworkInterfaceSelector',
+mixins: ['Proxmox.Mixin.CBind'],
+
+cbindData: (initialConfig) => ({
+   isQemu: initialConfig.pveSelNode.data.type === 'qemu',
+}),
+
+displayField: 'id',
+
+store: {
+   fields: ['id', 'name', 'bridge', 'ip'],
+   filterOnLoad: true,
+   sorters: {
+   property: 'id',
+   direction: 'ASC',
+   },
+},
+
+listConfig: {
+   cbind: {},
+   columns: [
+   {
+   header: 'ID',
+   dataIndex: 'id',
+   hideable: false,
+   width: 80,
+   },
+   {
+   header: gettext('Name'),
+   dataIndex: 'name',
+   flex: 1,
+   cbind: {
+   hidden: '{isQemu}',
+   },
+   },
+   {
+   header: gettext('Bridge'),
+   dataIndex: 'bridge',
+   flex: 1,
+   },
+   {
+   header: gettext('IP address'),
+   dataIndex: 'ip',
+   flex: 1,
+   cbind: {
+   hidden: '{isQemu}',
+   },
+   },
+   ],
+},
+
+initComponent: function() {
+   const { node: nodename, type, vmid } = this.pveSelNode.data;
+
+   Proxmox.Utils.API2Request({
+   url: `/nodes/${nodename}/${type}/${vmid}/config`,
+   method: 'GET',
+   success: ({ result: { data } }) => {
+   let networks = [];
+   for (const [id, value] of Object.entries(data)) {
+   if (id.match(/^net\d+/)) {
+   const parsed = type === 'lxc'
+   ? PVE.Parser.parseLxcNetwork(value)
+   : PVE.Parser.parseQemuNetwork(id, value);
+
+   networks.push({ ...parsed, id });
+   }
+   }
+
+   this.store.loadData(networks);
+   },
+   });
+
+   this.callParent();
+},
+});
+
diff --git a/www/manager6/grid/FirewallRules.js b/www/manager6/grid/FirewallRules.js
index 5777c7f4..9085bd64 100644
--- a/www/manager6/grid/FirewallRules.js
+++ b/www/manager6/grid/FirewallRules.js
@@ -153,6 +153,7 @@ Ext.define('PVE.FirewallRulePanel', {
 allow_iface: false,

 list_refs_url: undefined,
+pveSelNode: undefined,

  

[pve-devel] [PATCH RFC container manager] Introduce cloud-init support for LXC

2023-05-11 Thread Leo Nunner
This series introduces basic cloudinit support for containers. All in
all, it works quite similarly to VMs, with the caveat that we only allow
network configuration through the already existing systems, and not via
cloud-init.

These patches should still be seen as WIP, but they are in a workable
state and I'd like some feedback on how I currently handle things. Are
there any other parameters/features that are needed here? Is the current
mechanism for providing the configuration to the container optimal, or
is there a better way?

container:

Leo Nunner (3):
  cloudinit: introduce config parameters
  cloudinit: basic implementation
  cloudinit: add dump command to pct

 src/PVE/API2/LXC.pm|  36 +++
 src/PVE/API2/LXC/Config.pm |   7 ++-
 src/PVE/CLI/pct.pm |   4 ++
 src/PVE/LXC.pm |   1 +
 src/PVE/LXC/Cloudinit.pm   | 125 +
 src/PVE/LXC/Config.pm  |  61 ++
 src/PVE/LXC/Makefile   |   1 +
 src/lxc-pve-prestart-hook  |   5 ++
 8 files changed, 239 insertions(+), 1 deletion(-)
 create mode 100644 src/PVE/LXC/Cloudinit.pm

manager:

Leo Nunner (2):
  cloudinit: rename qemu cloudinit panel
  cloudinit: introduce panel for LXCs

 www/manager6/Makefile  |   1 +
 www/manager6/lxc/CloudInit.js  | 219 +
 www/manager6/lxc/Config.js |   6 +
 www/manager6/qemu/CloudInit.js |   4 +-
 www/manager6/qemu/Config.js|   2 +-
 5 files changed, 229 insertions(+), 3 deletions(-)
 create mode 100644 www/manager6/lxc/CloudInit.js

-- 
2.30.2






[pve-devel] [PATCH RFC manager 1/2] cloudinit: rename qemu cloudinit panel

2023-05-11 Thread Leo Nunner
Signed-off-by: Leo Nunner 
---
 www/manager6/qemu/CloudInit.js | 4 ++--
 www/manager6/qemu/Config.js| 2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/www/manager6/qemu/CloudInit.js b/www/manager6/qemu/CloudInit.js
index 77ff93d4..14117ff6 100644
--- a/www/manager6/qemu/CloudInit.js
+++ b/www/manager6/qemu/CloudInit.js
@@ -1,6 +1,6 @@
 Ext.define('PVE.qemu.CloudInit', {
 extend: 'Proxmox.grid.PendingObjectGrid',
-xtype: 'pveCiPanel',
+xtype: 'pveQemuCiPanel',
 
 onlineHelp: 'qm_cloud_init',
 
@@ -66,7 +66,7 @@ Ext.define('PVE.qemu.CloudInit', {
xtype: 'proxmoxButton',
disabled: true,
enableFn: function(rec) {
-   let view = this.up('pveCiPanel');
+   let view = this.up('pveQemuCiPanel');
return !!view.rows[rec.data.key].editor;
},
handler: function() {
diff --git a/www/manager6/qemu/Config.js b/www/manager6/qemu/Config.js
index 94c540c5..03e1e6d8 100644
--- a/www/manager6/qemu/Config.js
+++ b/www/manager6/qemu/Config.js
@@ -284,7 +284,7 @@ Ext.define('PVE.qemu.Config', {
title: 'Cloud-Init',
itemId: 'cloudinit',
iconCls: 'fa fa-cloud',
-   xtype: 'pveCiPanel',
+   xtype: 'pveQemuCiPanel',
},
{
title: gettext('Options'),
-- 
2.30.2






[pve-devel] [PATCH RFC container 2/3] cloudinit: basic implementation

2023-05-11 Thread Leo Nunner
The code to generate the actual configuration works pretty much the same
as with the VM system. We generate an instance ID by hashing the user
configuration, causing cloud-init to run every time said configuration
changes.

Instead of creating a config drive, we write files directly into the
volume of the container. We create a folder at
'/var/lib/cloud/seed/nocloud-net' and write the files 'user-data',
'vendor-data' and 'meta-data'. Cloud-init looks at the instance ID
inside 'meta-data' to decide whether it should run (again) or not.

Custom scripts need to be located inside the snippets directory, and
overwrite the default generated configuration file.
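
To illustrate, the generated seed files would look roughly like this
(illustrative values only):

    /var/lib/cloud/seed/nocloud-net/meta-data:
        instance-id: 4e1243bd22c66e76c2ba9eddc1f91394e57f9f83

    /var/lib/cloud/seed/nocloud-net/user-data:
        #cloud-config
        user: demo
        ssh_authorized_keys:
          - ssh-ed25519 AAAA... demo@example
        chpasswd:
          expire: False
        users:
          - default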

Signed-off-by: Leo Nunner 
---
 src/PVE/LXC.pm|   1 +
 src/PVE/LXC/Cloudinit.pm  | 114 ++
 src/PVE/LXC/Makefile  |   1 +
 src/lxc-pve-prestart-hook |   5 ++
 4 files changed, 121 insertions(+)
 create mode 100644 src/PVE/LXC/Cloudinit.pm

diff --git a/src/PVE/LXC.pm b/src/PVE/LXC.pm
index d138161..ea01fbb 100644
--- a/src/PVE/LXC.pm
+++ b/src/PVE/LXC.pm
@@ -39,6 +39,7 @@ use PVE::Tools qw(
 use PVE::Syscall qw(:fsmount);
 
 use PVE::LXC::CGroup;
+use PVE::LXC::Cloudinit;
 use PVE::LXC::Config;
 use PVE::LXC::Monitor;
 use PVE::LXC::Tools;
diff --git a/src/PVE/LXC/Cloudinit.pm b/src/PVE/LXC/Cloudinit.pm
new file mode 100644
index 000..e4bc67d
--- /dev/null
+++ b/src/PVE/LXC/Cloudinit.pm
@@ -0,0 +1,114 @@
+package PVE::LXC::Cloudinit;
+
+use strict;
+use warnings;
+
+use Digest::SHA;
+use File::Path;
+use URI::Escape;
+
+use PVE::JSONSchema;
+use PVE::Storage;
+use PVE::Tools;
+
+use PVE::LXC;
+
+sub gen_cloudinit_metadata {
+my ($user) = @_;
+
+my $uuid_str = Digest::SHA::sha1_hex($user);
+return cloudinit_metadata($uuid_str);
+}
+
+sub cloudinit_metadata {
+my ($uuid) = @_;
+my $raw = "";
+
+$raw .= "instance-id: $uuid\n";
+
+return $raw;
+}
+
+sub cloudinit_userdata {
+my ($conf) = @_;
+
+my $content = "#cloud-config\n";
+
+my $username = $conf->{ciuser};
+my $password = $conf->{cipassword};
+
+$content .= "user: $username\n" if defined($username);
+$content .= "password: $password\n" if defined($password);
+
+if (defined(my $keys = $conf->{sshkeys})) {
+   $keys = URI::Escape::uri_unescape($keys);
+   $keys = [map { my $key = $_; chomp $key; $key } split(/\n/, $keys)];
+   $keys = [grep { /\S/ } @$keys];
+   $content .= "ssh_authorized_keys:\n";
+   foreach my $k (@$keys) {
+   $content .= "  - $k\n";
+   }
+}
+$content .= "chpasswd:\n";
+$content .= "  expire: False\n";
+
+if (!defined($username) || $username ne 'root') {
+   $content .= "users:\n";
+   $content .= "  - default\n";
+}
+
+$content .= "package_upgrade: true\n" if $conf->{ciupdate};
+
+return $content;
+}
+
+sub read_cloudinit_snippets_file {
+my ($storage_conf, $volid) = @_;
+
+my ($full_path, undef, $type) = PVE::Storage::path($storage_conf, $volid);
+die "$volid is not in the snippets directory\n" if $type ne 'snippets';
+return PVE::Tools::file_get_contents($full_path, 1 * 1024 * 1024);
+}
+
+sub read_custom_cloudinit_files {
+my ($conf) = @_;
+
+my $cloudinit_conf = $conf->{cicustom};
+my $files = $cloudinit_conf ? PVE::JSONSchema::parse_property_string('pve-pct-cicustom', $cloudinit_conf) : {};
+
+my $user_volid = $files->{user};
+my $vendor_volid = $files->{vendor};
+
+my $storage_conf = PVE::Storage::config();
+
+my $user_data;
+if ($user_volid) {
+   $user_data = read_cloudinit_snippets_file($storage_conf, $user_volid);
+}
+
+my $vendor_data;
+if ($vendor_volid) {
+   $vendor_data = read_cloudinit_snippets_file($storage_conf, $vendor_volid);
+}
+
+return ($user_data, $vendor_data);
+}
+
+sub create_cloudinit_files {
+my ($conf, $setup) = @_;
+
+my $cloudinit_dir = "/var/lib/cloud/seed/nocloud-net";
+
+my ($user_data, $vendor_data) = read_custom_cloudinit_files($conf);
+$user_data = cloudinit_userdata($conf) if !defined($user_data);
+$vendor_data = '' if !defined($vendor_data);
+
+my $meta_data = gen_cloudinit_metadata($user_data);
+
+$setup->protected_call(sub {
+   my $plugin = $setup->{plugin};
+
+   $plugin->ct_make_path($cloudinit_dir);
+
+   $plugin->ct_file_set_contents("$cloudinit_dir/user-data", $user_data);
+   $plugin->ct_file_set_contents("$cloudinit_dir/vendor-data", 
$vendor_data);
+   $plugin->ct_file_set_contents("$cloudinit_dir/meta-data", $meta_data);
+});
+}
+
+1;
diff --git a/src/PVE/LXC/Makefile b/src/PVE/LXC/Makefile
index a190260..5d595ba 100644
--- a/src/PVE/LXC/Makefile
+++ b/src/PVE/LXC/Makefile
@@ -1,5 +1,6 @@
 SOURCES= \
CGroup.pm \
+   Cloudinit.pm \
Command.pm \
Config.pm \
Create.pm \
diff --git a/src/lxc-pve-prestart-hook b/src/lxc-pve-prestart-hook
index 3bdf7e4..e5932d2 100755
--- a/src/lxc-pve-prestart-hook
+++ b/src/lxc-pve-prestart-hook
@@ -12,6 +12,7 @@ use POSIX;
 use PVE::CGroup;

[pve-devel] [PATCH RFC container 3/3] cloudinit: add dump command to pct

2023-05-11 Thread Leo Nunner
Introduce a 'pct cloudinit dump <vmid> <type>' command to dump the
generated cloudinit configuration for a given section (user or meta).
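
Usage then looks like this (hypothetical output, matching what the patch
generates):

    $ pct cloudinit dump 100 meta
    instance-id: 4e1243bd22c66e76c2ba9eddc1f91394e57f9f83

    $ pct cloudinit dump 100 user
    #cloud-config
    chpasswd:
      expire: False
    users:
      - default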

Signed-off-by: Leo Nunner 
---
 src/PVE/API2/LXC.pm  | 33 +
 src/PVE/CLI/pct.pm   |  4 
 src/PVE/LXC/Cloudinit.pm | 11 +++
 3 files changed, 48 insertions(+)

diff --git a/src/PVE/API2/LXC.pm b/src/PVE/API2/LXC.pm
index e585509..2cae727 100644
--- a/src/PVE/API2/LXC.pm
+++ b/src/PVE/API2/LXC.pm
@@ -2963,4 +2963,37 @@ __PACKAGE__->register_method({
 
return { socket => $socket };
 }});
+
+__PACKAGE__->register_method({
+name => 'cloudinit_generated_config_dump',
+path => '{vmid}/cloudinit/dump',
+method => 'GET',
+proxyto => 'node',
+description => "Get automatically generated cloudinit config.",
+permissions => {
+   check => ['perm', '/vms/{vmid}', [ 'VM.Audit' ]],
+},
+parameters => {
+   additionalProperties => 0,
+   properties => {
+   node => get_standard_option('pve-node'),
vmid => get_standard_option('pve-vmid', { completion => \&PVE::LXC::complete_ctid }),
+   type => {
+   description => 'Config type.',
+   type => 'string',
+   enum => ['user', 'meta'],
+   },
+   },
+},
+returns => {
+   type => 'string',
+},
+code => sub {
+   my ($param) = @_;
+
+   my $conf = PVE::LXC::Config->load_config($param->{vmid});
+
+   return PVE::LXC::Cloudinit::dump_cloudinit_config($conf, $param->{type});
+}});
+
 1;
diff --git a/src/PVE/CLI/pct.pm b/src/PVE/CLI/pct.pm
index ff75d33..69f3560 100755
--- a/src/PVE/CLI/pct.pm
+++ b/src/PVE/CLI/pct.pm
@@ -1000,6 +1000,10 @@ our $cmddef = {
 rescan  => [ __PACKAGE__, 'rescan', []],
 cpusets => [ __PACKAGE__, 'cpusets', []],
 fstrim => [ __PACKAGE__, 'fstrim', ['vmid']],
+
+cloudinit => {
+   dump => [ "PVE::API2::LXC", 'cloudinit_generated_config_dump', ['vmid', 
'type'], { node => $nodename }, sub { print "$_[0]\n"; }],
+},
 };
 
 1;
diff --git a/src/PVE/LXC/Cloudinit.pm b/src/PVE/LXC/Cloudinit.pm
index e4bc67d..c977a08 100644
--- a/src/PVE/LXC/Cloudinit.pm
+++ b/src/PVE/LXC/Cloudinit.pm
@@ -111,4 +111,15 @@ sub create_cloudinit_files {
 });
 }
 
+sub dump_cloudinit_config {
+my ($conf, $type) = @_;
+
+if ($type eq 'user') {
+   return cloudinit_userdata($conf);
+} else { # metadata config
+   my $user = cloudinit_userdata($conf);
+   return gen_cloudinit_metadata($user);
+}
+}
+
 1;
-- 
2.30.2






[pve-devel] [PATCH RFC container 1/3] cloudinit: introduce config parameters

2023-05-11 Thread Leo Nunner
Introduce configuration parameters for cloud-init. Like with VMs, it's
possible to specify:
- user
- password
- ssh keys
- enable/disable updates on first boot

It's also possible to pass through custom config files for the user and
vendor settings. We don't allow configuring the network through
cloud-init, since it would clash with whatever configuration we have
already done for the container.
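
With these parameters, a container could then be configured along these
lines (illustrative values; option names as defined in this patch):

    pct set 100 -cienable 1 -ciuser demo -ciupdate 1
    pct set 100 -cicustom user=local:snippets/ci-user.yaml,vendor=local:snippets/ci-vendor.yaml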

Signed-off-by: Leo Nunner 
---
 src/PVE/API2/LXC.pm|  3 ++
 src/PVE/API2/LXC/Config.pm |  7 -
 src/PVE/LXC/Config.pm  | 61 ++
 3 files changed, 70 insertions(+), 1 deletion(-)

diff --git a/src/PVE/API2/LXC.pm b/src/PVE/API2/LXC.pm
index 50c9eaf..e585509 100644
--- a/src/PVE/API2/LXC.pm
+++ b/src/PVE/API2/LXC.pm
@@ -2492,6 +2492,9 @@ __PACKAGE__->register_method({
 
my $pending_delete_hash = PVE::LXC::Config->parse_pending_delete($conf->{pending}->{delete});
 
+   $conf->{cipassword} = '**' if defined($conf->{cipassword});
+   $conf->{pending}->{cipassword} = '**' if defined($conf->{pending}->{cipassword});
+
return PVE::GuestHelpers::config_with_pending_array($conf, 
$pending_delete_hash);
 }});
 
diff --git a/src/PVE/API2/LXC/Config.pm b/src/PVE/API2/LXC/Config.pm
index e6c0980..0ff4115 100644
--- a/src/PVE/API2/LXC/Config.pm
+++ b/src/PVE/API2/LXC/Config.pm
@@ -79,7 +79,7 @@ __PACKAGE__->register_method({
} else {
$conf = PVE::LXC::Config->load_current_config($param->{vmid}, $param->{current});
}
-
+   $conf->{cipassword} = '**' if $conf->{cipassword};
return $conf;
 }});
 
@@ -148,6 +148,11 @@ __PACKAGE__->register_method({
$param->{cpuunits} = PVE::CGroup::clamp_cpu_shares($param->{cpuunits})
if defined($param->{cpuunits}); # clamp value depending on cgroup version
 
+   if (defined(my $cipassword = $param->{cipassword})) {
+   $param->{cipassword} = PVE::Tools::encrypt_pw($cipassword)
+   if $cipassword !~ /^\$(?:[156]|2[ay])(\$.+){2}/;
+   }
+
my $code = sub {
 
my $conf = PVE::LXC::Config->load_config($vmid);
diff --git a/src/PVE/LXC/Config.pm b/src/PVE/LXC/Config.pm
index ac9db94..8aeb03b 100644
--- a/src/PVE/LXC/Config.pm
+++ b/src/PVE/LXC/Config.pm
@@ -442,6 +442,63 @@ my $features_desc = {
 },
 };
 
+my $cicustom_fmt = {
+user => {
+   type => 'string',
+   optional => 1,
+   description => 'To pass a custom file containing all user data to the container via cloud-init.',
+   format => 'pve-volume-id',
+   format_description => 'volume',
+},
+vendor => {
+   type => 'string',
+   optional => 1,
+   description => 'To pass a custom file containing all vendor data to the container via cloud-init.',
+   format => 'pve-volume-id',
+   format_description => 'volume',
+},
+};
+PVE::JSONSchema::register_format('pve-pct-cicustom', $cicustom_fmt);
+
+my $confdesc_cloudinit = {
+cienable => {
+   optional => 1,
+   type => 'boolean',
+   description => "cloud-init: provide cloud-init configuration to 
container.",
+},
+ciuser => {
+   optional => 1,
+   type => 'string',
+   description => "cloud-init: User name to change ssh keys and password 
for instead of the"
+   ." image's configured default user.",
+},
+cipassword => {
+   optional => 1,
+   type => 'string',
+   description => 'cloud-init: Password to assign the user. Using this is generally not'
+   .' recommended. Use ssh keys instead. Also note that older cloud-init versions do not'
+   .' support hashed passwords.',
+},
+ciupdate => {
+   optional => 1,
+   type => 'boolean',
+   description => 'cloud-init: do an automatic package update on boot.'
+},
+cicustom => {
+   optional => 1,
+   type => 'string',
+   description => 'cloud-init: Specify custom files to replace the automatically generated'
+   .' ones at start.',
+   format => 'pve-pct-cicustom',
+},
+sshkeys => {
+   optional => 1,
+   type => 'string',
+   format => 'urlencoded',
+   description => "cloud-init: Setup public SSH keys (one key per line, 
OpenSSH format).",
+},
+};
+
 my $confdesc = {
 lock => {
optional => 1,
@@ -614,6 +671,10 @@ my $confdesc = {
 },
 };
 
+foreach my $key (keys %$confdesc_cloudinit) {
+$confdesc->{$key} = $confdesc_cloudinit->{$key};
+}
+
 my $valid_lxc_conf_keys = {
 'lxc.apparmor.profile' => 1,
 'lxc.apparmor.allow_incomplete' => 1,
-- 
2.30.2






[pve-devel] [PATCH RFC manager 2/2] cloudinit: introduce panel for LXCs

2023-05-11 Thread Leo Nunner
based on the already existing panel for VMs. Some things have been
changed, there is no network configuration, and a separate "enable"
options toggles cloud-init (simillar to adding/removing a cloud-init
drive for VMs).

Signed-off-by: Leo Nunner 
---
 www/manager6/Makefile |   1 +
 www/manager6/lxc/CloudInit.js | 219 ++
 www/manager6/lxc/Config.js|   6 +
 3 files changed, 226 insertions(+)
 create mode 100644 www/manager6/lxc/CloudInit.js

diff --git a/www/manager6/Makefile b/www/manager6/Makefile
index 2b577c8e..27ac9068 100644
--- a/www/manager6/Makefile
+++ b/www/manager6/Makefile
@@ -168,6 +168,7 @@ JSSRC=  
\
dc/UserTagAccessEdit.js \
dc/RegisteredTagsEdit.js\
lxc/CmdMenu.js  \
+   lxc/CloudInit.js\
lxc/Config.js   \
lxc/CreateWizard.js \
lxc/DNS.js  \
diff --git a/www/manager6/lxc/CloudInit.js b/www/manager6/lxc/CloudInit.js
new file mode 100644
index ..2e4e26ba
--- /dev/null
+++ b/www/manager6/lxc/CloudInit.js
@@ -0,0 +1,219 @@
+Ext.define('PVE.lxc.CloudInit', {
+extend: 'Proxmox.grid.PendingObjectGrid',
+xtype: 'pveLxcCiPanel',
+
+tbar: [
+   {
+   xtype: 'proxmoxButton',
+   disabled: true,
+   dangerous: true,
+   confirmMsg: function(rec) {
+   let view = this.up('grid');
+   var warn = gettext('Are you sure you want to remove entry {0}');
+
+   var entry = rec.data.key;
+   var msg = Ext.String.format(warn, "'"
+   + view.renderKey(entry, {}, rec) + "'");
+
+   return msg;
+   },
+   enableFn: function(record) {
+   let view = this.up('grid');
+   var caps = Ext.state.Manager.get('GuiCap');
+   if (view.rows[record.data.key].never_delete ||
+   !caps.vms['VM.Config.Network']) {
+   return false;
+   }
+
+   if (record.data.key === 'cipassword' && !record.data.value) {
+   return false;
+   }
+   return true;
+   },
+   handler: function() {
+   let view = this.up('grid');
+   let records = view.getSelection();
+   if (!records || !records.length) {
+   return;
+   }
+
+   var id = records[0].data.key;
+
+   var params = {};
+   params.delete = id;
+   Proxmox.Utils.API2Request({
+   url: view.baseurl + '/config',
+   waitMsgTarget: view,
+   method: 'PUT',
+   params: params,
+   failure: function(response, opts) {
+   Ext.Msg.alert('Error', response.htmlStatus);
+   },
+   callback: function() {
+   view.reload();
+   },
+   });
+   },
+   text: gettext('Remove'),
+   },
+   {
+   xtype: 'proxmoxButton',
+   disabled: true,
+   enableFn: function(rec) {
+   let view = this.up('pveLxcCiPanel');
+   return !!view.rows[rec.data.key].editor;
+   },
+   handler: function() {
+   let view = this.up('grid');
+   view.run_editor();
+   },
+   text: gettext('Edit'),
+   },
+],
+
+border: false,
+
+renderKey: function(key, metaData, rec, rowIndex, colIndex, store) {
+   var me = this;
+   var rows = me.rows;
+   var rowdef = rows[key] || {};
+
+   var icon = "";
+   if (rowdef.iconCls) {
+   icon = ' ';
+   }
+   return icon + (rowdef.header || key);
+},
+
+listeners: {
+   activate: function() {
+   var me = this;
+   me.rstore.startUpdate();
+   },
+   itemdblclick: function() {
+   var me = this;
+   me.run_editor();
+   },
+},
+
+initComponent: function() {
+   var me = this;
+
+   var nodename = me.pveSelNode.data.node;
+   if (!nodename) {
+   throw "no node name specified";
+   }
+
+   var vmid = me.pveSelNode.data.vmid;
+   if (!vmid) {
+   throw "no VM ID specified";
+   }
+   var caps = Ext.state.Manager.get('GuiCap');
+   me.baseurl = '/api2/extjs/nodes/' + nodename + '/lxc/' + vmid;
+   me.url = me.baseurl + '/pending';
+   me.editorConfig.url = me.baseurl + '/config';
+   me.editorConfig.pveSelNode = me.pveSelNode;
+
+   let caps_ci = caps.vms['VM.Config.Cloudinit'] || caps.vms['VM.Config.Network'];
+   /* editor is string and object */
+   me.rows 

Re: [pve-devel] [RFC PATCH common] section config: implement array support

2023-05-11 Thread Dominik Csapak

thanks for your feedback @fabian, @wolfgang!

so the consensus seems to be to simply expose the array in the API schema and
always have the client send the whole array over, like in the PBS updater
(not a problem for my series, since in the GUI we have the whole info anyway;
also, if one wants a custom API, it can always be created instead of using the
create/updateSchema methods)

I'd adapt my patch and enable arrays in the pve-http-server instead of our
'-alist' format (which we only ever use in two places AFAICS) and replace those
with an array type.

(there are a few things that must change in JSONSchema/CLIHandler to fully
support arrays, but those are only minor things, such as doing the untainting
correctly)

I'd then remove support for the '-alist' format completely, since it won't
work anymore (at least in the API). FWICT this isn't even a real API
change, since the client would send the data in exactly the same way as
before, but we'll pass the parameters along as arrays instead of \0-separated
strings.
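
As a sketch, a property that currently uses an '-alist' format would then be
declared roughly like this in the schema (hypothetical option, not from the
patch):

    exclude => {
        type => 'array',
        items => { type => 'string', format => 'dns-name' },
        optional => 1,
        description => "Exclude specified hosts.",
    },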

Any other opinions @Thomas?

does that work for everybody?





[pve-devel] [PATCH qemu-server] disable SMM check: always return false for virt machine type

2023-05-11 Thread Fiona Ebner
There is no 'smm' flag for the 'virt' machine type.

Signed-off-by: Fiona Ebner 
---
 PVE/QemuServer.pm | 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index c1d0fd2d..ab33aa37 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -3543,7 +3543,9 @@ sub query_understood_cpu_flags {
# Since commit 277d33454f77ec1d1e0bc04e37621e4dd2424b67 in pve-qemu, smm is not off by default
# anymore. But smm=off seems to be required when using SeaBIOS and serial display.
 my sub should_disable_smm {
-my ($conf, $vga) = @_;
+my ($conf, $vga, $machine) = @_;
+
+return if $machine =~ m/^virt/; # there is no smm flag that could be disabled
 
 return (!defined($conf->{bios}) || $conf->{bios} eq 'seabios') &&
$vga->{type} && $vga->{type} =~ m/^(serial\d+|none)$/;
@@ -4155,7 +4157,7 @@ sub config_to_command {
push @$machineFlags, 'accel=tcg';
 }
 
-push @$machineFlags, 'smm=off' if should_disable_smm($conf, $vga);
+push @$machineFlags, 'smm=off' if should_disable_smm($conf, $vga, $machine_type);
 
 my $machine_type_min = $machine_type;
 if ($add_pve_version) {
-- 
2.30.2






Re: [pve-devel] [PATCH pve-container 1/1] Adding new mount point type named 'zfs' to let configure a ZFS dataset as mount point for LXC container

2023-05-11 Thread Fabian Grünbichler
> Konstantin wrote on 11.05.2023 13:56 CEST:
> 
> 
> Hello,
> > nit: for single patches, there is no need to add a cover letter. also,
> > please include relevant information in the commit message!
> I'm new here, so sorry - I will follow the rules in the future.

no worries! check out https://pve.proxmox.com/wiki/Developer_Documentation if 
you haven't already :)

> > could you give a reason why you want to hide the container contents from
> > the host?
> I'll try to explain my points. I'm using Proxmox as the base for my home NAS,
> with the additional possibility to set up some test environments to play
> around with. So one LXC container plays the NAS role - it has all the required
> software installed (samba/ftp/etc.) and has a big volume mounted for data
> storage (8-10TB). If it were created and configured as a Proxmox built-in
> storage volume (using the ZFS storage provider), I'd have at least 3 points
> I'm not comfortable with:
> - this big dataset will be mounted on the PVE host and will be visible and
> accessible from the host, so every (for example) file search operation will be
> affected by this dataset. I would like to narrow any such file operation to
> host-related stuff only, not my NAS data;

most tools have ways to exclude certain paths ;)

> - in addition, while operating on the host there is a chance of accidentally
> affecting or destroying my NAS data, which I'd like to avoid;

IMHO that's what backups are for, but I get the point.

> - simple "pct destroy" command will destroy all proxmox storage provided 
> mount points as well. I'd like to avoid such possibilty anyway.

you could "protect" the guest:

$ pct set <vmid> -protection 1
$ pct destroy <vmid>
can't remove CT <vmid> - protection mode enabled


another alternative would be to use a (protected) VM - no regular ZFS dataset, 
no visibility on the host.

> As I see in the pve-container code, only bind mounts and block device mounts
> can be used as non-Proxmox volumes. But a bind mount isn't acceptable for me
> according to the points above. A ZFS dataset isn't a block device - so it
> cannot be mounted using the standard notation in the LXC config. That's why
> I'm proposing this patch - it adds the capability to use a ZFS filesystem as a
> mount point for an LXC container. With this functionality I can just add the
> following line (or configure it with pct) to the LXC container config:
> mp1: tank/nas-data,mp=/data
> And after that the ZFS dataset "tank/nas-data" will be mounted inside the
> container and will not be exposed to the host (of course mountpoint=legacy
> should be set for this dataset). Maybe other, more elegant ways to implement
> this are possible, but this is the only way I've found.

the two existing special cases besides PVE-managed volumes have a rather big 
use case - passing through existing hard disks or partitions (shared with VMs, 
which have the same feature), and passing in host directories. this would be a 
very niche feature only applying to one specific storage type, so changing the 
syntax (and adding checks all over the place) would not be worth it.

but like I said, it can be implemented more properly as well - currently we say 
"a non-snapshot ZFS volume managed by PVE is always mounted on the host and can 
be bind-mounted", but we could just as well say "a non-snapshot ZFS volume is 
mounted directly via mount", either in general, or opt-in via flag in 
storage.cfg, or just for mountpoint=legacy/none mountpoints (e.g., where 
PVE::Storage::path returns a special value? or ..). nothing with regards to the 
container config syntax would change, just the mountpoint handling in 
pve-container (and the part in pve-storage that currently does the host-side 
mounting in activate_volume).
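
For the storage.cfg opt-in variant, that could look something like this (the
'mount-volumes' flag is invented here purely for illustration):

    zfspool: tank
        pool tank
        content rootdir,images
        mount-volumes 0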





[pve-devel] [PATCH qemu-server 2/2] fast plug options: add migrate_downtime and migrate_speed

2023-05-11 Thread Fiona Ebner
for convenience. These options do not influence the QEMU instance
directly, but are only used for migration, so there is no need to keep
them in pending.

Signed-off-by: Fiona Ebner 
---
 PVE/QemuServer.pm | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 4f3418ae..7ba0b4b8 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -4909,6 +4909,8 @@ my $fast_plug_option = {
 'description' => 1,
 'hookscript' => 1,
 'lock' => 1,
+'migrate_downtime' => 1,
+'migrate_speed' => 1,
 'name' => 1,
 'onboot' => 1,
 'protection' => 1,
-- 
2.30.2






[pve-devel] [PATCH qemu-server 1/2] fast plug options: order alphabetically

2023-05-11 Thread Fiona Ebner
Signed-off-by: Fiona Ebner 
---
 PVE/QemuServer.pm | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index c1d0fd2d..4f3418ae 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -4906,16 +4906,16 @@ sub foreach_volid {
 }
 
 my $fast_plug_option = {
+'description' => 1,
+'hookscript' => 1,
 'lock' => 1,
 'name' => 1,
 'onboot' => 1,
+'protection' => 1,
 'shares' => 1,
 'startup' => 1,
-'description' => 1,
-'protection' => 1,
-'vmstatestorage' => 1,
-'hookscript' => 1,
 'tags' => 1,
+'vmstatestorage' => 1,
 };
 
 for my $opt (keys %$confdesc_cloudinit) {
-- 
2.30.2


