[pve-devel] [PATCH v2 container 1/2] setup: fix creating unmanaged containers

2021-10-05 Thread Oguz Bektas
ssh_host_key_types_to_generate did not explicitly return in the unmanaged
plugin, causing the post_create_hook to fail because of an invalid hash
reference (cannot use "1" as a HASH ref; "1" was likely being returned
implicitly as the scalar value of 'my ($self) = @_;').
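
A minimal sketch (not part of the patch) of the underlying Perl behaviour,
assuming a caller that dereferences the result as a hash:

    use strict;
    use warnings;

    sub types_without_return {
        my ($self) = @_;    # list assignment; evaluates to 1 in scalar context
    }

    sub types_with_return {
        my ($self) = @_;
        return;             # empty list / undef, as the fixed plugin now does
    }

    my $broken = types_without_return({});
    eval { my @k = keys %$broken };   # dies: Can't use string ("1") as a HASH ref ...
    print $@;

    my $fixed = types_with_return({});
    print "no key types to generate\n" if !defined($fixed);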

Signed-off-by: Oguz Bektas 
---
 src/PVE/LXC/Setup/Unmanaged.pm | 1 +
 1 file changed, 1 insertion(+)

diff --git a/src/PVE/LXC/Setup/Unmanaged.pm b/src/PVE/LXC/Setup/Unmanaged.pm
index 38e245f..3b9febf 100644
--- a/src/PVE/LXC/Setup/Unmanaged.pm
+++ b/src/PVE/LXC/Setup/Unmanaged.pm
@@ -51,6 +51,7 @@ sub unified_cgroupv2_support {
 
 sub ssh_host_key_types_to_generate {
 my ($self) = @_;
+return;
 }
 
 # hooks
-- 
2.30.2






[pve-devel] [PATCH v2 container 0/2] unmanaged containers

2021-10-05 Thread Oguz Bektas
minor fix for creating unmanaged containers

v1->v2:
* return nothing instead of an empty hash ref
* separate patch for dropping early unmanaged return

Oguz Bektas (2):
  setup: fix creating unmanaged containers
  setup: drop remaining unmanaged return

 src/PVE/LXC/Setup.pm   | 2 --
 src/PVE/LXC/Setup/Unmanaged.pm | 1 +
 2 files changed, 1 insertion(+), 2 deletions(-)

-- 
2.30.2






[pve-devel] [PATCH v2 container 2/2] setup: drop remaining unmanaged return

2021-10-05 Thread Oguz Bektas
not needed anymore since we have the 'unmanaged' plugin: $self->{plugin}
is now always set, so the unmanaged case is handled by that plugin
instead of an early return.
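
A condensed sketch (not the actual Setup.pm code, the mapping below is assumed)
of why the guard became redundant once 'unmanaged' is a plugin of its own:

    my $plugins = {
        unmanaged => 'PVE::LXC::Setup::Unmanaged',
        debian    => 'PVE::LXC::Setup::Debian',
        # ...
    };

    # new() always resolves $conf->{ostype} to one of the plugins above, so
    # $self->{plugin} can no longer be unset when rewrite_ssh_host_keys runs.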

Signed-off-by: Oguz Bektas 
---
 src/PVE/LXC/Setup.pm | 2 --
 1 file changed, 2 deletions(-)

diff --git a/src/PVE/LXC/Setup.pm b/src/PVE/LXC/Setup.pm
index cfbe02c..4e211ef 100644
--- a/src/PVE/LXC/Setup.pm
+++ b/src/PVE/LXC/Setup.pm
@@ -228,8 +228,6 @@ my sub generate_ssh_key { # create temporary key in hosts' /run, then read and u
 sub rewrite_ssh_host_keys {
 my ($self) = @_;
 
-return if !$self->{plugin}; # unmanaged
-
 my $plugin = $self->{plugin};
 
 my $keynames = $plugin->ssh_host_key_types_to_generate();
-- 
2.30.2






[pve-devel] applied-series: [PATCH v2 container 0/2] unmanaged containers

2021-10-05 Thread Thomas Lamprecht
On 05.10.21 10:09, Oguz Bektas wrote:
> minor fix for creating unmanaged containers
> 
> v1->v2:
> * return nothing instead of an empty hash ref
> * separate patch for dropping early unmanaged return
> 
> Oguz Bektas (2):
>   setup: fix creating unmanaged containers
>   setup: drop remaining unmanaged return
> 
>  src/PVE/LXC/Setup.pm   | 2 --
>  src/PVE/LXC/Setup/Unmanaged.pm | 1 +
>  2 files changed, 1 insertion(+), 2 deletions(-)
> 



applied both patches, thanks!





[pve-devel] [PATCH v2 qemu-server] qemu-agent: allow hotplug of fstrim_cloned_disk option.

2021-10-05 Thread Alexandre Derumier
This option doesn't have any impact on the device itself.
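
With this applied, toggling the option on a running VM gets picked up by the
hotplug path instead of staying pending, e.g. (hypothetical VMID; assuming the
property is spelled fstrim_cloned_disks as in the existing agent format):

    qm set 100 --agent enabled=1,fstrim_cloned_disks=1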

Signed-off-by: Alexandre Derumier 
---
 PVE/QemuServer.pm | 19 +++
 1 file changed, 19 insertions(+)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 076ce59..907d522 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -4829,6 +4829,8 @@ sub vmconfig_hotplug_pending {
} elsif ($opt eq 'cpulimit') {
my $cpulimit = $conf->{pending}->{$opt} == 0 ? -1 : int($conf->{pending}->{$opt} * 10);
$cgroup->change_cpu_quota($cpulimit, 10);
+   } elsif ($opt eq 'agent') {
+   vmconfig_update_agent($conf, $opt, $value);
} else {
die "skip\n";  # skip non-hot-pluggable options
}
@@ -4988,6 +4990,23 @@ sub vmconfig_update_net {
 }
 }
 
+sub vmconfig_update_agent {
+my ($conf, $opt, $value) = @_;
+
+if ($conf->{$opt} && (my $old_agent = parse_guest_agent($conf))) {
+
+   my $agent = parse_guest_agent({$opt => $value});
+
+   # skip non hotpluggable value
+   if (safe_string_ne($agent->{enabled}, $old_agent->{enabled}) ||
+   safe_string_ne($agent->{type}, $old_agent->{type})) {
+   die "skip\n";
+   }
+} else {
+   die "skip\n";
+}
+}
+
 sub vmconfig_update_disk {
 my ($storecfg, $conf, $hotplug, $vmid, $opt, $value, $arch, $machine_type) = @_;
 
-- 
2.30.2





Re: [pve-devel] [PATCH qemu-server] qemu-agent: allow hotplug of fstrim_cloned_disk option.

2021-10-05 Thread DERUMIER, Alexandre




+sub vmconfig_update_agent {
+my ($conf, $opt, $value) = @_;
+
+if ($conf->{$opt} && (my $old_agent = parse_guest_agent($conf))) {
+
+   my $agent = parse_guest_agent({$opt => $value});
+
+   # skip non hotpluggable value
+   if (safe_string_ne($agent->{enabled}, $old_agent->{enabled}) ||
+   safe_string_ne($agent->{type}, $old_agent->{type})) {
+   die "skip\n";
+   }
+}
+die "skip\n";

but this method always skips no matter what?


oh, sorry, it should be in an else (agent is disabled)

I just sent a fixed v2




Re: [pve-devel] [PATCH v2 qemu-server] qemu-agent: allow hotplug of fstrim_cloned_disk option.

2021-10-05 Thread Fabian Grünbichler
On October 5, 2021 11:46 am, Alexandre Derumier wrote:
> This option doesn't have any impact on the device itself.
> 
> Signed-off-by: Alexandre Derumier 
> ---
>  PVE/QemuServer.pm | 19 +++
>  1 file changed, 19 insertions(+)
> 
> diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
> index 076ce59..907d522 100644
> --- a/PVE/QemuServer.pm
> +++ b/PVE/QemuServer.pm
> @@ -4829,6 +4829,8 @@ sub vmconfig_hotplug_pending {
>   } elsif ($opt eq 'cpulimit') {
>   my $cpulimit = $conf->{pending}->{$opt} == 0 ? -1 : int($conf->{pending}->{$opt} * 10);
>   $cgroup->change_cpu_quota($cpulimit, 10);
> + } elsif ($opt eq 'agent') {
> + vmconfig_update_agent($conf, $opt, $value);
>   } else {
>   die "skip\n";  # skip non-hot-pluggable options
>   }
> @@ -4988,6 +4990,23 @@ sub vmconfig_update_net {
>  }
>  }
>  
> +sub vmconfig_update_agent {
> +my ($conf, $opt, $value) = @_;
> +
> +if ($conf->{$opt} && (my $old_agent = parse_guest_agent($conf))) {
> +
> + my $agent = parse_guest_agent({$opt => $value});
> +
> + # skip non hotpluggable value

shouldn't this be the other way round? check keys which are different, 
and have a list of hotpluggable ones, skip if any others are different?

that way if we add another property to the agent it's fail-safe 
(defaults to not being hotpluggable) until it is added to the explicit 
list.
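
A rough sketch (not from any patch; helper and property names are hypothetical)
of the allow-list direction described above:

    use strict;
    use warnings;

    # only properties listed here may differ between old and new agent config
    my $hotpluggable_agent_opts = { fstrim_cloned_disks => 1 };

    sub agent_change_is_hotpluggable {
        my ($old_agent, $new_agent) = @_;

        my %all_keys = map { $_ => 1 } (keys %$old_agent, keys %$new_agent);
        for my $key (sort keys %all_keys) {
            next if ($old_agent->{$key} // '') eq ($new_agent->{$key} // '');
            # any changed key not on the allow-list means: not hotpluggable
            return 0 if !$hotpluggable_agent_opts->{$key};
        }
        return 1;
    }

    # the caller would then do: die "skip\n" if !agent_change_is_hotpluggable(...);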

> + if (safe_string_ne($agent->{enabled}, $old_agent->{enabled}) ||
> + safe_string_ne($agent->{type}, $old_agent->{type})) {
> + die "skip\n";
> + }
> +} else {
> + die "skip\n";
> +}
> +}
> +
>  sub vmconfig_update_disk {
>  my ($storecfg, $conf, $hotplug, $vmid, $opt, $value, $arch, $machine_type) = @_;
>  
> -- 
> 2.30.2
> 
> 





[pve-devel] [PATCH manager v3 2/7] ui: lxc/MPEdit: fire diskidchange event

2021-10-05 Thread Dominik Csapak
when the diskid changes

Signed-off-by: Dominik Csapak 
---
 www/manager6/lxc/MPEdit.js | 5 +
 1 file changed, 5 insertions(+)

diff --git a/www/manager6/lxc/MPEdit.js b/www/manager6/lxc/MPEdit.js
index 64e57229..2b4f8ebe 100644
--- a/www/manager6/lxc/MPEdit.js
+++ b/www/manager6/lxc/MPEdit.js
@@ -110,6 +110,11 @@ Ext.define('PVE.lxc.MountPointInputPanel', {
control: {
'field[name=mpid]': {
change: function(field, value) {
+   let me = this;
+   let view = this.getView();
+   if (view.confid !== 'rootfs') {
+   view.fireEvent('diskidchange', view, `mp${value}`);
+   }
field.validate();
},
},
-- 
2.30.2






[pve-devel] [PATCH manager v3 3/7] ui: lxc/MPEdit: add selectFree toggle

2021-10-05 Thread Dominik Csapak
that sets the given vmconfig at the start and selects the first
free mpid

Signed-off-by: Dominik Csapak 
---
 www/manager6/lxc/MPEdit.js | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/www/manager6/lxc/MPEdit.js b/www/manager6/lxc/MPEdit.js
index 2b4f8ebe..dba69cb4 100644
--- a/www/manager6/lxc/MPEdit.js
+++ b/www/manager6/lxc/MPEdit.js
@@ -149,6 +149,9 @@ Ext.define('PVE.lxc.MountPointInputPanel', {
view.filterMountOptions();
}
}
+   if (view.selectFree) {
+   view.setVMConfig(view.vmconfig);
+   }
},
 },
 
-- 
2.30.2






[pve-devel] [PATCH manager v3 1/7] ui: lxc/MPEdit: add updateVMConfig

2021-10-05 Thread Dominik Csapak
helper for the upcoming MultiMPEdit

Signed-off-by: Dominik Csapak 
---
 www/manager6/lxc/MPEdit.js | 8 +++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/www/manager6/lxc/MPEdit.js b/www/manager6/lxc/MPEdit.js
index 0e772017..64e57229 100644
--- a/www/manager6/lxc/MPEdit.js
+++ b/www/manager6/lxc/MPEdit.js
@@ -75,12 +75,18 @@ Ext.define('PVE.lxc.MountPointInputPanel', {
}
 },
 
-setVMConfig: function(vmconfig) {
+updateVMConfig: function(vmconfig) {
let me = this;
let vm = me.getViewModel();
me.vmconfig = vmconfig;
vm.set('unpriv', vmconfig.unprivileged);
+   me.down('field[name=mpid]').validate();
+},
+
+setVMConfig: function(vmconfig) {
+   let me = this;
 
+   me.updateVMConfig(vmconfig);
PVE.Utils.forEachMP((bus, i) => {
let name = "mp" + i.toString();
if (!Ext.isDefined(vmconfig[name])) {
-- 
2.30.2






[pve-devel] [PATCH manager v3 6/7] ui: add qemu/MultiHDEdit and use it in the wizard

2021-10-05 Thread Dominik Csapak
uses the MultiDiskPanel as base and implements the necessary
functions/variables

this now also allows creating a VM without any disk

Signed-off-by: Dominik Csapak 
---
 www/manager6/Makefile |  1 +
 www/manager6/qemu/CreateWizard.js |  7 +---
 www/manager6/qemu/HDEdit.js   |  9 -
 www/manager6/qemu/MultiHDEdit.js  | 62 +++
 4 files changed, 73 insertions(+), 6 deletions(-)
 create mode 100644 www/manager6/qemu/MultiHDEdit.js

diff --git a/www/manager6/Makefile b/www/manager6/Makefile
index 3b9b057a..04c634f0 100644
--- a/www/manager6/Makefile
+++ b/www/manager6/Makefile
@@ -215,6 +215,7 @@ JSSRC= \
qemu/MachineEdit.js \
qemu/MemoryEdit.js  \
qemu/Monitor.js \
+   qemu/MultiHDEdit.js \
qemu/NetworkEdit.js \
qemu/OSDefaults.js  \
qemu/OSTypeEdit.js  \
diff --git a/www/manager6/qemu/CreateWizard.js b/www/manager6/qemu/CreateWizard.js
index 015a099d..a785a882 100644
--- a/www/manager6/qemu/CreateWizard.js
+++ b/www/manager6/qemu/CreateWizard.js
@@ -154,14 +154,11 @@ Ext.define('PVE.qemu.CreateWizard', {
insideWizard: true,
},
{
-   xtype: 'pveQemuHDInputPanel',
-   padding: 0,
+   xtype: 'pveMultiHDPanel',
bind: {
nodename: '{nodename}',
},
-   title: gettext('Hard Disk'),
-   isCreate: true,
-   insideWizard: true,
+   title: gettext('Disks'),
},
{
xtype: 'pveQemuProcessorPanel',
diff --git a/www/manager6/qemu/HDEdit.js b/www/manager6/qemu/HDEdit.js
index 2142c746..9c453b2a 100644
--- a/www/manager6/qemu/HDEdit.js
+++ b/www/manager6/qemu/HDEdit.js
@@ -107,6 +107,12 @@ Ext.define('PVE.qemu.HDInputPanel', {
return params;
 },
 
+updateVMConfig: function(vmconfig) {
+   var me = this;
+   me.vmconfig = vmconfig;
+   me.bussel?.updateVMConfig(vmconfig);
+},
+
 setVMConfig: function(vmconfig) {
var me = this;
 
@@ -183,7 +189,8 @@ Ext.define('PVE.qemu.HDInputPanel', {
 
if (!me.confid || me.unused) {
me.bussel = Ext.create('PVE.form.ControllerSelector', {
-   vmconfig: me.insideWizard ? { ide2: 'cdrom' } : {},
+   vmconfig: me.vmconfig,
+   selectFree: true,
});
column1.push(me.bussel);
 
diff --git a/www/manager6/qemu/MultiHDEdit.js b/www/manager6/qemu/MultiHDEdit.js
new file mode 100644
index ..caf74fad
--- /dev/null
+++ b/www/manager6/qemu/MultiHDEdit.js
@@ -0,0 +1,62 @@
+Ext.define('PVE.qemu.MultiHDPanel', {
+extend: 'PVE.panel.MultiDiskPanel',
+alias: 'widget.pveMultiHDPanel',
+
+onlineHelp: 'qm_hard_disk',
+
+controller: {
+   xclass: 'Ext.app.ViewController',
+
+   // maxCount is the sum of all controller ids - 1 (ide2 is fixed in the wizard)
+   maxCount: Object.values(PVE.Utils.diskControllerMaxIDs)
+   .reduce((previous, current) => previous+current, 0) - 1,
+
+   getNextFreeDisk: function(vmconfig) {
+   let clist = PVE.Utils.sortByPreviousUsage(vmconfig);
+   return PVE.Utils.nextFreeDisk(clist, vmconfig);
+   },
+
+   addPanel: function(itemId, vmconfig, nextFreeDisk) {
+   let me = this;
+   return me.getView().add({
+   vmconfig,
+   border: false,
+   showAdvanced: Ext.state.Manager.getProvider().get('proxmox-advanced-cb'),
+   xtype: 'pveQemuHDInputPanel',
+   bind: {
+   nodename: '{nodename}',
+   },
+   padding: '0 0 0 5',
+   itemId,
+   isCreate: true,
+   insideWizard: true,
+   });
+   },
+
+   getBaseVMConfig: function() {
+   let me = this;
+   let vm = me.getViewModel();
+
+   return {
+   ide2: 'media=cdrom',
+   scsihw: vm.get('current.scsihw'),
+   ostype: vm.get('current.ostype'),
+   };
+   },
+
+   diskSorter: {
+   sorterFn: function(rec1, rec2) {
+   let [, name1, id1] = PVE.Utils.bus_match.exec(rec1.data.name);
+   let [, name2, id2] = PVE.Utils.bus_match.exec(rec2.data.name);
+
+   if (name1 === name2) {
+   return parseInt(id1, 10) - parseInt(id2, 10);
+   }
+
+   return name1 < name2 ? -1 : 1;
+   },
+   },
+
+   deleteDisabled: () => false,
+},
+});
-- 
2.30.2






[pve-devel] [PATCH manager v3 4/7] ui: add MultiDiskPanel

2021-10-05 Thread Dominik Csapak
this adds a new panel where a user can add multiple disks, intended
for use in the wizard.

it has a simple grid for displaying the already added disks and shows
a warning triangle if a disk is not valid.

this is a base panel for adding multiple disks/mount points for VMs/CTs
respectively.

it combines the shared behavior and layout and defines the functions
that subclasses must implement.

Signed-off-by: Dominik Csapak 
---
 www/manager6/Makefile   |   1 +
 www/manager6/panel/MultiDiskEdit.js | 272 
 2 files changed, 273 insertions(+)
 create mode 100644 www/manager6/panel/MultiDiskEdit.js

diff --git a/www/manager6/Makefile b/www/manager6/Makefile
index 7d491f57..dc045e73 100644
--- a/www/manager6/Makefile
+++ b/www/manager6/Makefile
@@ -88,6 +88,7 @@ JSSRC= \
panel/GuestStatusView.js\
panel/GuestSummary.js   \
panel/TemplateStatusView.js \
+   panel/MultiDiskEdit.js  \
tree/ResourceTree.js\
tree/SnapshotTree.js\
window/Backup.js\
diff --git a/www/manager6/panel/MultiDiskEdit.js b/www/manager6/panel/MultiDiskEdit.js
new file mode 100644
index ..ea1f974d
--- /dev/null
+++ b/www/manager6/panel/MultiDiskEdit.js
@@ -0,0 +1,272 @@
+Ext.define('PVE.panel.MultiDiskPanel', {
+extend: 'Ext.panel.Panel',
+
+setNodename: function(nodename) {
+   this.items.each((panel) => panel.setNodename(nodename));
+},
+
+border: false,
+bodyBorder: false,
+
+layout: 'card',
+
+controller: {
+   xclass: 'Ext.app.ViewController',
+
+   vmconfig: {},
+
+   onAdd: function() {
+   let me = this;
+   me.lookup('addButton').setDisabled(true);
+   me.addDisk();
+   let count = me.lookup('grid').getStore().getCount() + 1; // +1 is from ide2
+   me.lookup('addButton').setDisabled(count >= me.maxCount);
+   },
+
+   getNextFreeDisk: function(vmconfig) {
+   throw "implement in subclass";
+   },
+
+   addPanel: function(itemId, vmconfig, nextFreeDisk) {
+   throw "implement in subclass";
+   },
+
+   // define in subclass
+   diskSorter: undefined,
+
+   addDisk: function() {
+   let me = this;
+   let grid = me.lookup('grid');
+   let store = grid.getStore();
+
+   // get free disk id
+   let vmconfig = me.getVMConfig(true);
+   let nextFreeDisk = me.getNextFreeDisk(vmconfig);
+   if (!nextFreeDisk) {
+   return;
+   }
+
+   // add store entry + panel
+   let itemId = 'disk-card-' + ++Ext.idSeed;
+   let rec = store.add({
+   name: nextFreeDisk.confid,
+   itemId,
+   })[0];
+
+   let panel = me.addPanel(itemId, vmconfig, nextFreeDisk);
+   panel.updateVMConfig(vmconfig);
+
+   // we need to setup a validitychange handler, so that we can show
+   // that a disk has invalid fields
+   let fields = panel.query('field');
+   fields.forEach((el) => el.on('validitychange', () => {
+   let valid = fields.every((field) => field.isValid());
+   rec.set('valid', valid);
+   me.checkValidity();
+   }));
+
+   store.sort(me.diskSorter);
+
+   // select if the panel added is the only one
+   if (store.getCount() === 1) {
+   grid.getSelectionModel().select(0, false);
+   }
+   },
+
+   getBaseVMConfig: function() {
+   throw "implement in subclass";
+   },
+
+   getVMConfig: function(all) {
+   let me = this;
+
+   let vmconfig = me.getBaseVMConfig();
+
+   me.lookup('grid').getStore().each((rec) => {
+   if (all || rec.get('valid')) {
+   vmconfig[rec.get('name')] = rec.get('itemId');
+   }
+   });
+
+   return vmconfig;
+   },
+
+   checkValidity: function() {
+   let me = this;
+   let valid = me.lookup('grid').getStore().findExact('valid', false) === -1;
+   me.lookup('validationfield').setValue(valid);
+   },
+
+   updateVMConfig: function() {
+   let me = this;
+   let view = me.getView();
+   let grid = me.lookup('grid');
+   let store = grid.getStore();
+
+   let vmconfig = me.getVMConfig();
+
+   let valid = true;
+
+   store.each((rec) => {
+   let itemId = rec.get('itemId');
+   let name = rec.get('name');
+   let panel = view.getComponent(itemId);
+   if (!panel) {
+   throw "unexpected missing panel";
+   }
+
+   // copy config for each p

[pve-devel] [PATCH manager v3 7/7] ui: window/Wizard: make it a little wider

2021-10-05 Thread Dominik Csapak
for the multi disk panel, we want it to be just a little wider, so
that all form fields are still readable

Signed-off-by: Dominik Csapak 
---
 www/manager6/window/Wizard.js | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/www/manager6/window/Wizard.js b/www/manager6/window/Wizard.js
index d12f4d90..98e46d44 100644
--- a/www/manager6/window/Wizard.js
+++ b/www/manager6/window/Wizard.js
@@ -3,7 +3,7 @@ Ext.define('PVE.window.Wizard', {
 
 activeTitle: '', // used for automated testing
 
-width: 700,
+width: 720,
 height: 510,
 
 modal: true,
-- 
2.30.2






[pve-devel] [PATCH manager v3 5/7] ui: add lxc/MultiMPEdit and use in lxc/CreateWizard

2021-10-05 Thread Dominik Csapak
uses the MultiDiskPanel as a base and implements the necessary
functions/values

Signed-off-by: Dominik Csapak 
---
 www/manager6/Makefile|  1 +
 www/manager6/lxc/CreateWizard.js |  8 +---
 www/manager6/lxc/MultiMPEdit.js  | 79 
 3 files changed, 82 insertions(+), 6 deletions(-)
 create mode 100644 www/manager6/lxc/MultiMPEdit.js

diff --git a/www/manager6/Makefile b/www/manager6/Makefile
index dc045e73..3b9b057a 100644
--- a/www/manager6/Makefile
+++ b/www/manager6/Makefile
@@ -166,6 +166,7 @@ JSSRC= \
lxc/Options.js  \
lxc/ResourceEdit.js \
lxc/Resources.js\
+   lxc/MultiMPEdit.js  \
menu/MenuItem.js\
menu/TemplateMenu.js\
ceph/CephInstallWizard.js   \
diff --git a/www/manager6/lxc/CreateWizard.js b/www/manager6/lxc/CreateWizard.js
index aead515f..1f902c2c 100644
--- a/www/manager6/lxc/CreateWizard.js
+++ b/www/manager6/lxc/CreateWizard.js
@@ -197,15 +197,11 @@ Ext.define('PVE.lxc.CreateWizard', {
],
},
{
-   xtype: 'pveLxcMountPointInputPanel',
-   title: gettext('Root Disk'),
+   xtype: 'pveMultiMPPanel',
+   title: gettext('Disks'),
insideWizard: true,
isCreate: true,
unused: false,
-   bind: {
-   nodename: '{nodename}',
-   unprivileged: '{unprivileged}',
-   },
confid: 'rootfs',
},
{
diff --git a/www/manager6/lxc/MultiMPEdit.js b/www/manager6/lxc/MultiMPEdit.js
new file mode 100644
index ..709dacb1
--- /dev/null
+++ b/www/manager6/lxc/MultiMPEdit.js
@@ -0,0 +1,79 @@
+Ext.define('PVE.lxc.MultiMPPanel', {
+extend: 'PVE.panel.MultiDiskPanel',
+alias: 'widget.pveMultiMPPanel',
+
+onlineHelp: 'pct_container_storage',
+
+controller: {
+   xclass: 'Ext.app.ViewController',
+
+   // count of mps + rootfs
+   maxCount: PVE.Utils.mp_counts.mps + 1,
+
+   getNextFreeDisk: function(vmconfig) {
+   let nextFreeDisk;
+   if (!vmconfig.rootfs) {
+   return {
+   confid: 'rootfs',
+   };
+   } else {
+   for (let i = 0; i < PVE.Utils.mp_counts.mps; i++) {
+   let confid = `mp${i}`;
+   if (!vmconfig[confid]) {
+   nextFreeDisk = {
+   confid,
+   };
+   break;
+   }
+   }
+   }
+   return nextFreeDisk;
+   },
+
+   addPanel: function(itemId, vmconfig, nextFreeDisk) {
+   let me = this;
+   return me.getView().add({
+   vmconfig,
+   border: false,
+   showAdvanced: Ext.state.Manager.getProvider().get('proxmox-advanced-cb'),
+   xtype: 'pveLxcMountPointInputPanel',
+   confid: nextFreeDisk.confid === 'rootfs' ? 'rootfs' : null,
+   bind: {
+   nodename: '{nodename}',
+   unprivileged: '{unprivileged}',
+   },
+   padding: '0 5 0 10',
+   itemId,
+   selectFree: true,
+   isCreate: true,
+   insideWizard: true,
+   });
+   },
+
+   getBaseVMConfig: function() {
+   let me = this;
+
+   return {
+   unprivileged: me.getViewModel().get('unprivileged'),
+   };
+   },
+
+   diskSorter: {
+   sorterFn: function(rec1, rec2) {
+   if (rec1.data.name === 'rootfs') {
+   return -1;
+   } else if (rec2.data.name === 'rootfs') {
+   return 1;
+   }
+
+   let mp_match = /^mp(\d+)$/;
+   let [, id1] = mp_match.exec(rec1.data.name);
+   let [, id2] = mp_match.exec(rec2.data.name);
+
+   return parseInt(id1, 10) - parseInt(id2, 10);
+   },
+   },
+
+   deleteDisabled: (view, rI, cI, item, rec) => rec.data.name === 'rootfs',
+},
+});
-- 
2.30.2






[pve-devel] [PATCH manager v3 0/7] multi disk/mp in wizard

2021-10-05 Thread Dominik Csapak
this series is a continuation of my previous multi tab / disk series[0]

Introduces multi disk panels for VMs and containers in the wizard.

The UX is modeled after Dominic's first attempt, but with a very different
approach code-wise: instead of having a separate 'data' panel that
contains the vm config, the multi disk panel handles that
and passes it through to the panels below. This way the HDEdit does
not need a big code change to get/set the config.

changes from v2:
* rebase on master (multi tab disk panel already applied)
* refactor multi disk panel so that we can reuse it for containers
* implement multi mp panel for container

changes from v1:
* fixed a bug which prevented the wizard from finishing
* made the wizard a little wider so that the form field labels are
  readable
* added logic to use the ostype to determine the first disk if all
  disks were deleted before

0: https://lists.proxmox.com/pipermail/pve-devel/2021-October/050215.html

Dominik Csapak (7):
  ui: lxc/MPEdit: add updateVMConfig
  ui: lxc/MPEdit: fire diskidchange event
  ui: lxc/MPEdit: add selectFree toggle
  ui: add MultiDiskPanel
  ui: add lxc/MultiMPEdit and use in lxc/CreateWizard
  ui: add qemu/MultiHDEdit and use it in the wizard
  ui: window/Wizard: make it a little wider

 www/manager6/Makefile   |   3 +
 www/manager6/lxc/CreateWizard.js|   8 +-
 www/manager6/lxc/MPEdit.js  |  16 +-
 www/manager6/lxc/MultiMPEdit.js |  79 
 www/manager6/panel/MultiDiskEdit.js | 272 
 www/manager6/qemu/CreateWizard.js   |   7 +-
 www/manager6/qemu/HDEdit.js |   9 +-
 www/manager6/qemu/MultiHDEdit.js|  62 +++
 www/manager6/window/Wizard.js   |   2 +-
 9 files changed, 444 insertions(+), 14 deletions(-)
 create mode 100644 www/manager6/lxc/MultiMPEdit.js
 create mode 100644 www/manager6/panel/MultiDiskEdit.js
 create mode 100644 www/manager6/qemu/MultiHDEdit.js

-- 
2.30.2






Re: [pve-devel] [PATCH v2 qemu-server] qemu-agent: allow hotplug of fstrim_cloned_disk option.

2021-10-05 Thread DERUMIER, Alexandre
Le mardi 05 octobre 2021 à 13:12 +0200, Fabian Grünbichler a écrit :
> shouldn't this be the other way round? check keys which are
> different, 
> and have a list of hotpluggable ones, skip if any others are
> different?
> 
> that way if we add another property to the agent it's fail-safe 
> (defaults to not being hotpluggable) until it is added to the
> explicit 
> list

yes, sure. I'll rework the patch and send a v3 tomorrow.
thanks for the review !



[pve-devel] [PATCH qemu-server 0/3] fix #3258: check for in-use pci devices on vm start

2021-10-05 Thread Dominik Csapak
by having a vmid <-> pciid mapping in /var/run.
I did not check whether the VM really has the PCI device in its config,
but we should not need that, since we remove the reservation again
in the cleanup step.

If wanted, we can of course parse the target VM's config and check if
the PCI device is still configured, or alternatively ask QMP and/or
parse /proc/PID/cmdline for the PCI device, but both options seem
too expensive?

Dominik Csapak (3):
  pci: to not capture first group in PCIRE
  pci: add helpers to (un)reserve pciids for a vm
  fix #3258: block vm start when pci device is already in use

 PVE/QemuServer.pm |  8 
 PVE/QemuServer/PCI.pm | 91 ++-
 2 files changed, 98 insertions(+), 1 deletion(-)

-- 
2.30.2






[pve-devel] [PATCH qemu-server 1/3] pci: to not capture first group in PCIRE

2021-10-05 Thread Dominik Csapak
we do not need this group, but want to use the regex where we have
multiple groups, so make it a non-capture group

Signed-off-by: Dominik Csapak 
---
 PVE/QemuServer/PCI.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/PVE/QemuServer/PCI.pm b/PVE/QemuServer/PCI.pm
index 2ee142f..5608207 100644
--- a/PVE/QemuServer/PCI.pm
+++ b/PVE/QemuServer/PCI.pm
@@ -17,7 +17,7 @@ parse_hostpci
 
 our $MAX_HOSTPCI_DEVICES = 16;
 
-my $PCIRE = qr/([a-f0-9]{4}:)?[a-f0-9]{2}:[a-f0-9]{2}(?:\.[a-f0-9])?/;
+my $PCIRE = qr/(?:[a-f0-9]{4}:)?[a-f0-9]{2}:[a-f0-9]{2}(?:\.[a-f0-9])?/;
 my $hostpci_fmt = {
 host => {
default_key => 1,
-- 
2.30.2






[pve-devel] [PATCH qemu-server 3/3] fix #3258: block vm start when pci device is already in use

2021-10-05 Thread Dominik Csapak
on vm start, we reserve all pciids that we use, and
remove the reservation again in vm_stop_cleanup.

this way, a vm that is started with a pci device which is already
configured for a different running vm will not be started, and the user
gets an error that the device is already in use.

Signed-off-by: Dominik Csapak 
---
 PVE/QemuServer.pm | 8 
 1 file changed, 8 insertions(+)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 076ce59..1e8cd53 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -5365,6 +5365,13 @@ sub vm_start_nolock {
   my $d = parse_hostpci($conf->{"hostpci$i"});
   next if !$d;
   my $pcidevices = $d->{pciid};
+
+  # reserve all pciids
+  foreach my $pcidevice (@$pcidevices) {
+ my $pciid = $pcidevice->{id};
+ PVE::QemuServer::PCI::reserve_pci_usage($pciid, $vmid);
+  }
+
   foreach my $pcidevice (@$pcidevices) {
my $pciid = $pcidevice->{id};
 
@@ -5676,6 +5683,7 @@ sub vm_stop_cleanup {
foreach my $pci (@{$d->{pciid}}) {
my $pciid = $pci->{id};
PVE::SysFSTools::pci_cleanup_mdev_device($pciid, $uuid);
+   PVE::QemuServer::PCI::remove_pci_reservation($pciid);
}
}
 
-- 
2.30.2






[pve-devel] [PATCH qemu-server 2/3] pci: add helpers to (un)reserve pciids for a vm

2021-10-05 Thread Dominik Csapak
saves a list of pciid <-> vmid mappings in /var/run
that we can check when we start a vm
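
As an illustration (made-up addresses, timestamps and VMIDs), the file would
then contain one "<pciid> <timestamp> <vmid>" line per reserved device, in the
format the parser below expects:

    0000:01:00.0 1633426920 101
    0000:01:00.1 1633426920 101
    02:00.0 1633427011 150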

Signed-off-by: Dominik Csapak 
---
 PVE/QemuServer/PCI.pm | 89 +++
 1 file changed, 89 insertions(+)

diff --git a/PVE/QemuServer/PCI.pm b/PVE/QemuServer/PCI.pm
index 5608207..0626b76 100644
--- a/PVE/QemuServer/PCI.pm
+++ b/PVE/QemuServer/PCI.pm
@@ -5,6 +5,7 @@ use strict;
 
 use PVE::JSONSchema;
 use PVE::SysFSTools;
+use PVE::Tools;
 
 use base 'Exporter';
 
@@ -461,4 +462,92 @@ sub print_hostpci_devices {
 return ($kvm_off, $gpu_passthrough, $legacy_igd);
 }
 
+my $PCIID_RESERVATION_FILE = "/var/run/pve-reserved-pciids";
+my $PCIID_RESERVATION_LCK = "/var/lock/pve-reserved-pciids.lck";
+
+my $parse_pci_reservation = sub {
+my $pciids = {};
+
+if (my $fh = IO::File->new ($PCIID_RESERVATION_FILE, "r")) {
+   while (my $line = <$fh>) {
+   if ($line =~ m/^($PCIRE)\s(\d+)\s(\d+)$/) {
+   $pciids->{$1} = {
+   timestamp => $2,
+   vmid => $3,
+   };
+   }
+   }
+}
+
+return $pciids;
+};
+
+my $write_pci_reservation = sub {
+my ($pciids) = @_;
+
+my $data = "";
+foreach my $p (keys %$pciids) {
+   $data .= "$p $pciids->{$p}->{timestamp} $pciids->{$p}->{vmid}\n";
+}
+
+PVE::Tools::file_set_contents($PCIID_RESERVATION_FILE, $data);
+};
+
+sub remove_pci_reservation {
+my ($id) = @_;
+
+my $code = sub {
+   my $pciids = $parse_pci_reservation->();
+
+   delete $pciids->{$id};
+
+   $write_pci_reservation->($pciids);
+};
+
+PVE::Tools::lock_file($PCIID_RESERVATION_LCK, 10, $code);
+die $@ if $@;
+
+return;
+}
+
+sub reserve_pci_usage {
+my ($id, $vmid) = @_;
+
+my $code = sub {
+
+   # have a 60 second grace period, since we reserve before
+   # we actually start the vm
+   my $graceperiod = 60;
+   my $ctime = time();
+
+   my $pciids = $parse_pci_reservation->();
+
+   if (my $pciid = $pciids->{$id}) {
+   if ($pciid->{vmid} == $vmid) {
+   return; # already reserved
+   }
+
+   if (($pciid->{timestamp} + $graceperiod > $ctime) ||
+   PVE::QemuServer::Helpers::vm_running_locally($vmid))
+   {
+   die "PCI device '$id' already in use\n";
+   }
+   }
+
+   $pciids->{$id} = {
+   timestamp => $ctime,
+   vmid => $vmid,
+   };
+
+   $write_pci_reservation->($pciids);
+
+   return;
+};
+
+PVE::Tools::lock_file($PCIID_RESERVATION_LCK, 10, $code);
+die $@ if $@;
+
+return;
+}
+
 1;
-- 
2.30.2






Re: [pve-devel] [PATCH qemu-server 2/3] pci: add helpers to (un)reserve pciids for a vm

2021-10-05 Thread Thomas Lamprecht
On 05.10.21 15:11, Dominik Csapak wrote:
> saves a list of pciid <-> vmid mappings in /var/run
> that we can check when we start a vm

a few style nits but also one serious note inline

> 
> Signed-off-by: Dominik Csapak 
> ---
>  PVE/QemuServer/PCI.pm | 89 +++
>  1 file changed, 89 insertions(+)
> 
> diff --git a/PVE/QemuServer/PCI.pm b/PVE/QemuServer/PCI.pm
> index 5608207..0626b76 100644
> --- a/PVE/QemuServer/PCI.pm
> +++ b/PVE/QemuServer/PCI.pm
> @@ -5,6 +5,7 @@ use strict;
>  
>  use PVE::JSONSchema;
>  use PVE::SysFSTools;
> +use PVE::Tools;
>  
>  use base 'Exporter';
>  
> @@ -461,4 +462,92 @@ sub print_hostpci_devices {
>  return ($kvm_off, $gpu_passthrough, $legacy_igd);
>  }
>  
> +my $PCIID_RESERVATION_FILE = "/var/run/pve-reserved-pciids";
> +my $PCIID_RESERVATION_LCK = "/var/lock/pve-reserved-pciids.lck";
> +
> +my $parse_pci_reservation = sub {
> +my $pciids = {};
> +
> +if (my $fh = IO::File->new ($PCIID_RESERVATION_FILE, "r")) {
> + while (my $line = <$fh>) {
> + if ($line =~ m/^($PCIRE)\s(\d+)\s(\d+)$/) {
> + $pciids->{$1} = {
> + timestamp => $2,
> + vmid => $3,
> + };
> + }
> + }
> +}
> +
> +return $pciids;
> +};
> +
> +my $write_pci_reservation = sub {
> +my ($pciids) = @_;
> +
> +my $data = "";
> +foreach my $p (keys %$pciids) {

prefer for over foreach

> + $data .= "$p $pciids->{$p}->{timestamp} $pciids->{$p}->{vmid}\n";
> +}

my $data = join("\n", map { "$_ $pciids->{$_}->{timestamp} $pciids->{$_}->{vmid}" } keys $pciids->%*);

> +
> +PVE::Tools::file_set_contents($PCIID_RESERVATION_FILE, $data);
> +};
> +
> +sub remove_pci_reservation {
> +my ($id) = @_;
> +
> +my $code = sub {
> + my $pciids = $parse_pci_reservation->();
> +
> + delete $pciids->{$id};
> +
> + $write_pci_reservation->($pciids);
> +};
> +
> +PVE::Tools::lock_file($PCIID_RESERVATION_LCK, 10, $code);

IMO it has some benefit to pass the closure directly: fewer lines and slightly
more obvious locking (as at least I read methods from top to bottom):

PVE::Tools::lock_file($PCIID_RESERVATION_LCK, 10, sub {
my $pciids = $parse_pci_reservation->();
...
});

but we have no clear style guide regarding this and def. use both variants, so 
no
hard feelings here.

> +die $@ if $@;
> +
> +return;
> +}
> +
> +sub reserve_pci_usage {
> +my ($id, $vmid) = @_;
> +
> +my $code = sub {
> +
> + # have a 60 second grace period, since we reserve before
> + # we actually start the vm

huh, what's the use of that? so I can "steal" PCI devices the first 60s, feels
weird...

Why not either:
* catch any start error somewhere centrally and clear the reservation in that
  case again; a kill/crash could still result in false-positives though
* save the timestamp now, and then, once we know it, the PID of the VM as a
  third param; VMID + PID are quite good at being resistant against PID-reuse,
  and a future start could check if the process still lives to decide
  if the reservation is still valid (see the rough sketch below)
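
A rough sketch (not part of this series; the cmdline match is an assumption) of
that second alternative:

    # treat a reservation as stale once the recorded process is gone; also
    # matching the VMID on the cmdline guards against the PID being reused
    sub reservation_still_valid {
        my ($vmid, $pid) = @_;
        open(my $fh, '<', "/proc/$pid/cmdline") or return 0;  # process is gone
        my $cmdline = do { local $/; <$fh> } // '';
        close($fh);
        return $cmdline =~ /\Q$vmid\E/ ? 1 : 0;
    }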

> + my $graceperiod = 60;
> + my $ctime = time();
> +
> + my $pciids = $parse_pci_reservation->();
> +
> + if (my $pciid = $pciids->{$id}) {
> + if ($pciid->{vmid} == $vmid) {
> + return; # already reserved
> + }

I'd prefer a one-liner for the above, fewer lines/noise while not yet being
code-golfy, so easier to read IMO, i.e.:

return if $pciid->{vmid} == $vmid; # already reserved

> +
> + if (($pciid->{timestamp} + $graceperiod > $ctime) ||
> + PVE::QemuServer::Helpers::vm_running_locally($vmid))
> + {

style nit², we (nowadays) normally place the if's closing ) also on the new
line:

if (($pciid->{timestamp} + $graceperiod > $ctime) ||
PVE::QemuServer::Helpers::vm_running_locally($vmid)
) {

}

honestly I'd like it 1000% more the rust way, but well..




[pve-devel] applied: [PATCH qemu-server 1/3] pci: to not capture first group in PCIRE

2021-10-05 Thread Thomas Lamprecht
On 05.10.21 15:11, Dominik Csapak wrote:
> we do not need this group, but want to use the regex where we have
> multiple groups, so make it a non-capture group
> 
> Signed-off-by: Dominik Csapak 
> ---
>  PVE/QemuServer/PCI.pm | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
>

applied, thanks!





[pve-devel] [PATCH qemu-server] ovmf: support secure boot with 4m and 4m-ms efidisk types

2021-10-05 Thread Stefan Reiter
Provide support for secure boot by using the new "4m" and "4m-ms"
variants of the OVMF code/vars templates. This is specified on the
efidisk via the 'efitype' and 'ms-keys' parameters.
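
As a usage illustration (hypothetical storage and volume name), a VM configured
for the Microsoft-keys variant would then carry an efidisk entry along the
lines of:

    efidisk0: local-lvm:vm-100-disk-1,efitype=4m,ms-keys=1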

Signed-off-by: Stefan Reiter 
---

Should depend on updated pve-edk2-firmware.

 PVE/API2/Qemu.pm|  3 ++-
 PVE/QemuServer.pm   | 60 -
 PVE/QemuServer/Drive.pm | 22 +++
 3 files changed, 65 insertions(+), 20 deletions(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 367d6ca..cc2a543 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -183,7 +183,8 @@ my $create_disks = sub {
 
my $volid;
if ($ds eq 'efidisk0') {
-   ($volid, $size) = PVE::QemuServer::create_efidisk($storecfg, $storeid, $vmid, $fmt, $arch);
+   ($volid, $size) = PVE::QemuServer::create_efidisk(
+   $storecfg, $storeid, $vmid, $fmt, $arch, $disk);
} elsif ($ds eq 'tpmstate0') {
# swtpm can only use raw volumes, and uses a fixed size
$size = PVE::Tools::convert_size(PVE::QemuServer::Drive::TPMSTATE_DISK_SIZE, 'b' => 'kb');
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 076ce59..3c0ecf5 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -63,14 +63,26 @@ eval {
 
 my $EDK2_FW_BASE = '/usr/share/pve-edk2-firmware/';
 my $OVMF = {
-x86_64 => [
-   "$EDK2_FW_BASE/OVMF_CODE.fd",
-   "$EDK2_FW_BASE/OVMF_VARS.fd"
-],
-aarch64 => [
-   "$EDK2_FW_BASE/AAVMF_CODE.fd",
-   "$EDK2_FW_BASE/AAVMF_VARS.fd"
-],
+x86_64 => {
+   '4m' => [
+   "$EDK2_FW_BASE/OVMF_CODE_4M.secboot.fd",
+   "$EDK2_FW_BASE/OVMF_VARS_4M.fd",
+   ],
+   '4m-ms' => [
+   "$EDK2_FW_BASE/OVMF_CODE_4M.secboot.fd",
+   "$EDK2_FW_BASE/OVMF_VARS_4M.ms.fd",
+   ],
+   default => [
+   "$EDK2_FW_BASE/OVMF_CODE.fd",
+   "$EDK2_FW_BASE/OVMF_VARS.fd",
+   ],
+},
+aarch64 => {
+   default => [
+   "$EDK2_FW_BASE/AAVMF_CODE.fd",
+   "$EDK2_FW_BASE/AAVMF_VARS.fd",
+   ],
+},
 };
 
 my $cpuinfo = PVE::ProcFSTools::read_cpuinfo();
@@ -3140,13 +3152,18 @@ sub get_vm_machine {
 return $machine;
 }
 
-sub get_ovmf_files($) {
-my ($arch) = @_;
+sub get_ovmf_files($$) {
+my ($arch, $efidisk) = @_;
 
-my $ovmf = $OVMF->{$arch}
+my $types = $OVMF->{$arch}
or die "no OVMF images known for architecture '$arch'\n";
 
-return @$ovmf;
+my $type = 'default';
+if (defined($efidisk->{efitype}) && $efidisk->{efitype} eq '4m') {
+   $type = $efidisk->{'ms-keys'} ? "4m-ms" : "4m";
+}
+
+return $types->{$type}->@*;
 }
 
 my $Arch2Qemu = {
@@ -3405,13 +3422,17 @@ sub config_to_command {
 }
 
 if ($conf->{bios} && $conf->{bios} eq 'ovmf') {
-   my ($ovmf_code, $ovmf_vars) = get_ovmf_files($arch);
+   my $d;
+   if (my $efidisk = $conf->{efidisk0}) {
+   $d = parse_drive('efidisk0', $efidisk);
+   }
+
+   my ($ovmf_code, $ovmf_vars) = get_ovmf_files($arch, $d);
die "uefi base image '$ovmf_code' not found\n" if ! -f $ovmf_code;
 
my ($path, $format);
my $read_only_str = '';
-   if (my $efidisk = $conf->{efidisk0}) {
-   my $d = parse_drive('efidisk0', $efidisk);
+   if ($d) {
my ($storeid, $volname) = PVE::Storage::parse_volume_id($d->{file}, 1);
$format = $d->{format};
if ($storeid) {
@@ -7516,7 +7537,8 @@ sub qemu_use_old_bios_files {
 sub get_efivars_size {
 my ($conf) = @_;
 my $arch = get_vm_arch($conf);
-my (undef, $ovmf_vars) = get_ovmf_files($arch);
+my $efidisk = $conf->{efidisk0} ? parse_drive('efidisk0', $conf->{efidisk0}) : undef;
+my (undef, $ovmf_vars) = get_ovmf_files($arch, $efidisk);
 die "uefi vars image '$ovmf_vars' not found\n" if ! -f $ovmf_vars;
 return -s $ovmf_vars;
 }
@@ -7541,10 +7563,10 @@ sub update_tpmstate_size {
 $conf->{tpmstate0} = print_drive($disk);
 }
 
-sub create_efidisk($) {
-my ($storecfg, $storeid, $vmid, $fmt, $arch) = @_;
+sub create_efidisk($$) {
+my ($storecfg, $storeid, $vmid, $fmt, $arch, $efidisk) = @_;
 
-my (undef, $ovmf_vars) = get_ovmf_files($arch);
+my (undef, $ovmf_vars) = get_ovmf_files($arch, $efidisk);
 die "EFI vars default image not found\n" if ! -f $ovmf_vars;
 
 my $vars_size_b = -s $ovmf_vars;
diff --git a/PVE/QemuServer/Drive.pm b/PVE/QemuServer/Drive.pm
index 6389dbb..57d26f5 100644
--- a/PVE/QemuServer/Drive.pm
+++ b/PVE/QemuServer/Drive.pm
@@ -306,6 +306,26 @@ my $virtiodesc = {
 };
 PVE::JSONSchema::register_standard_option("pve-qm-virtio", $virtiodesc);
 
+my %efitype_fmt = (
+efitype => {
+   type => 'string',
+   enum => [qw(2m 4m)],
+   description => "Size and type of the OVMF EFI vars. '4m' is newer and recommended,"
+   . " and required for Secure Boot. For backwards compatib

[pve-devel] applied: [PATCH qemu-server] ovmf: support secure boot with 4m and 4m-ms efidisk types

2021-10-05 Thread Thomas Lamprecht
On 05.10.21 18:02, Stefan Reiter wrote:
> Provide support for secure boot by using the new "4m" and "4m-ms"
> variants of the OVMF code/vars templates. This is specified on the
> efidisk via the 'efitype' and 'ms-keys' parameters.
> 
> Signed-off-by: Stefan Reiter 
> ---
> 
> Should depend on updated pve-edk2-firmware.
> 
>  PVE/API2/Qemu.pm|  3 ++-
>  PVE/QemuServer.pm   | 60 -
>  PVE/QemuServer/Drive.pm | 22 +++
>  3 files changed, 65 insertions(+), 20 deletions(-)
> 
>

applied, thanks!





[pve-devel] trousers error in latest test repo?

2021-10-05 Thread Victor Hooi
Hi,

I just installed a new box (Intel NUC7i5BNK) with Proxmox 7.0-2, and then
ran an apt update/dist-upgrade, using the PVE test repos:

There seems to be an error while upgrading the "trousers" package:

Setting up trousers (0.3.14+fixed1-1.2) ...
> Job for trousers.service failed because the control process exited with
> error code.
> See "systemctl status trousers.service" and "journalctl -xe" for details.
> invoke-rc.d: initscript trousers, action "start" failed.
> ● trousers.service - LSB: starts tcsd
>  Loaded: loaded (/etc/init.d/trousers; generated)
>  Active: failed (Result: exit-code) since Tue 2021-10-05 15:43:15 PDT;
> 4ms ago
>Docs: man:systemd-sysv-generator(8)
> Process: 32424 ExecStart=/etc/init.d/trousers start (code=exited,
> status=30)
> CPU: 10ms


Oct 05 15:43:15 mgmt1 systemd[1]: Starting LSB: starts tcsd...
> Oct 05 15:43:15 mgmt1 trousers[32424]: Starting Trusted Computing daemon:
> tcsd/etc/init.d/trousers: 32: [: /dev/tpm0: unexpected operator
> Oct 05 15:43:15 mgmt1 TCSD[32429]: TrouSerS resetting mode of /var/lib/tpm
> from 40755 to: 700
> Oct 05 15:43:15 mgmt1 tcsd[32429]: TCSD TDDL[32429]: TrouSerS ioctl: (25)
> Inappropriate ioctl for device
> Oct 05 15:43:15 mgmt1 tcsd[32429]: TCSD TDDL[32429]: TrouSerS Falling back
> to Read/Write device support.
> Oct 05 15:43:15 mgmt1 tcsd[32429]: TCSD TCS[32429]: TrouSerS ERROR: TCS
> GetCapability failed with result = 0x1e
> Oct 05 15:43:15 mgmt1 trousers[32430]:  failed!
> Oct 05 15:43:15 mgmt1 systemd[1]: trousers.service: Control process
> exited, code=exited, status=30/n/a
> Oct 05 15:43:15 mgmt1 systemd[1]: trousers.service: Failed with result
> 'exit-code'.
> Oct 05 15:43:15 mgmt1 systemd[1]: Failed to start LSB: starts tcsd.
> dpkg: error processing package trousers (--configure):
>  installed trousers package post-installation script subprocess returned
> error exit status 1


However, this then causes issues when attempting to configure other
packages as well:

dpkg: dependency problems prevent configuration of swtpm-tools:
>  swtpm-tools depends on trousers (>= 0.3.9); however:
>   Package trousers is not configured yet.


dpkg: error processing package swtpm-tools (--configure):
>  dependency problems - leaving unconfigured
> Setting up swtpm (0.6.99+1) ...
> Setting up pve-firewall (4.2-3) ...
> Setting up libzpool5linux (2.1.1-pve1) ...
> Setting up gnutls-bin (3.7.1-5) ...
> Setting up corosync (3.1.5-pve1) ...
> dpkg: dependency problems prevent configuration of qemu-server:
>  qemu-server depends on swtpm-tools; however:
>   Package swtpm-tools is not configured yet.


dpkg: error processing package qemu-server (--configure):
>  dependency problems - leaving unconfigured
> Setting up zfsutils-linux (2.1.1-pve1) ...
> Installing new version of config file /etc/zfs/zfs-functions ...
> Setting up zfs-initramfs (2.1.1-pve1) ...
> dpkg: dependency problems prevent configuration of pve-manager:
>  pve-manager depends on qemu-server (>= 6.2-17); however:
>   Package qemu-server is not configured yet.


dpkg: error processing package pve-manager (--configure):
>  dependency problems - leaving unconfigured
> Setting up zfs-zed (2.1.1-pve1) ...
> Installing new version of config file /etc/zfs/zed.d/zed-functions.sh ...
> dpkg: dependency problems prevent processing triggers for pve-ha-manager:
>  pve-ha-manager depends on qemu-server (>= 6.0-15); however:
>   Package qemu-server is not configured yet.


dpkg: error processing package pve-ha-manager (--configure):
>  dependency problems - leaving triggers unprocessed
> Processing triggers for initramfs-tools (0.140) ...
> update-initramfs: Generating /boot/initrd.img-5.11.22-5-pve
> Running hook script 'zz-proxmox-boot'..
> Re-executing '/etc/kernel/postinst.d/zz-proxmox-boot' in new private mount
> namespace..
> Copying and configuring kernels on /dev/disk/by-uuid/5994-C1A0
> Copying kernel and creating boot-entry for 5.11.22-4-pve
> Copying kernel and creating boot-entry for 5.11.22-5-pve
> Processing triggers for libc-bin (2.31-13) ...
> Processing triggers for man-db (2.9.4-2) ...
> Processing triggers for mailcap (3.69) ...
> Errors were encountered while processing:
>  trousers
>  swtpm-tools
>  qemu-server
>  pve-manager
>  pve-ha-manager
> E: Sub-process /usr/bin/dpkg returned an error code (1)


Does anybody know what this is?

Is it related to this? (Apparently an error in the init.d script - although
I'm not sure where this script comes from)

https://unix.stackexchange.com/questions/633563/can-not-start-trousers-service-giving-error-trousers-ioctl-25-inappropriat

Thanks,
Vic