[pve-devel] Vmbr bridge permissions and SDN improvements?

2022-03-07 Thread Neil Hawker
Hi,

We're currently using version 7.1-10 and have the use case where we need to 
hide the vmbr bridges from normal users to prevent them circumventing network 
security that is applied through SDN vNets.

For context, our setup is a Proxmox cluster that is used as a learning 
environment for students where they can create and manage their own VMs to 
practice their Cybersecurity skills in an isolated environment. Being able to 
hide the vmbr bridges from users would achieve this.

I have found on the community forum 
(https://forum.proxmox.com/threads/sdn-group-pool-permissions.93872) that 
Spirit had contributed changes, not yet accepted/merged, that would achieve 
this as well as some SDN GUI improvements.

I appreciate developers are very busy, but is it possible for Spirit's changes 
to be included in an upcoming version, and if so, is there any rough idea of 
when they might get released?

Thanks
Neil
___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



Re: [pve-devel] Vmbr bridge permissions and SDN improvements?

2022-03-07 Thread Eneko Lacunza via pve-devel
--- Begin Message ---

Hi Neil,

Have you considered using nested Proxmox servers, so that you only have 
the desired networks in students' nested Proxmoxes?


Cheers

On 4/3/22 at 12:08, Neil Hawker wrote:

Hi,

We're currently using version 7.1-10 and have the use case where we need to 
hide the vmbr bridges from normal users to prevent them circumventing network 
security that is applied through SDN vNets.

For context, our setup is a Proxmox cluster that is used as a learning 
environment for students where they can create and manage their own VMs to 
practice their Cybersecurity skills in an isolated environment. Being able to 
hide the vmbr bridges from users would achieve this.

I have found on the community forum 
(https://forum.proxmox.com/threads/sdn-group-pool-permissions.93872) that 
Spirit had contributed changes, not yet accepted/merged, that would achieve 
this as well as some SDN GUI improvements.

I appreciate developers are very busy, but is it possible for Spirit's changes 
to be included in an upcoming version, and if so, is there any rough idea of 
when they might get released?

Thanks
Neil
___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



Eneko Lacunza
Technical Director
Binovo IT Human Project

Tel. +34 943 569 206 | https://www.binovo.es
Astigarragako Bidea, 2 - 2º izda. Oficina 10-11, 20180 Oiartzun

https://www.youtube.com/user/CANALBINOVO
https://www.linkedin.com/company/37269706/
--- End Message ---
___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH pve-container 1/2] pct: set worker user for pull_file/push_file calls

2022-03-07 Thread Oguz Bektas
was previously unset, causing a 'root@pve' to show up in the task logs
instead of the regular 'root@pam'.

Signed-off-by: Oguz Bektas 
---
 src/PVE/CLI/pct.pm | 8 ++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/src/PVE/CLI/pct.pm b/src/PVE/CLI/pct.pm
index 462917b..99c160c 100755
--- a/src/PVE/CLI/pct.pm
+++ b/src/PVE/CLI/pct.pm
@@ -544,6 +544,8 @@ __PACKAGE__->register_method({
 
my $rpcenv = PVE::RPCEnvironment::get();
 
+   my $authuser = $rpcenv->get_user();
+
my $vmid = extract_param($param, 'vmid');
my $path = extract_param($param, 'path');
my $dest = extract_param($param, 'destination');
@@ -578,7 +580,7 @@ __PACKAGE__->register_method({
};
 
# This avoids having to setns() back to our namespace.
-   return $rpcenv->fork_worker('pull_file', $vmid, undef, $realcmd);
+   return $rpcenv->fork_worker('pull_file', $vmid, $authuser, $realcmd);
};
 
return PVE::LXC::Config->lock_config($vmid, $code);
@@ -627,6 +629,8 @@ __PACKAGE__->register_method({
 
my $rpcenv = PVE::RPCEnvironment::get();
 
+   my $authuser = $rpcenv->get_user();
+
my $vmid = extract_param($param, 'vmid');
my $file = extract_param($param, 'file');
my $dest = extract_param($param, 'destination');
@@ -682,7 +686,7 @@ __PACKAGE__->register_method({
};
 
# This avoids having to setns() back to our namespace.
-   return $rpcenv->fork_worker('push_file', $vmid, undef, $realcmd);
+   return $rpcenv->fork_worker('push_file', $vmid, $authuser, $realcmd);
};
 
return PVE::LXC::Config->lock_config($vmid, $code);
-- 
2.30.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH common 2/2] REST environment: default to 'root@pam' for forked workers in case no user was specified

2022-03-07 Thread Oguz Bektas
previously we had a default of 'root@pve', which doesn't exist.
since the username is only relevant for the task logs, we can change it
to 'root@pam' without ill effects.

also add a warning in case there are other call sites that we missed
where fork_worker is called without a user variable (found call sites
only in pve-container where this was unset, namely in 'push_file' and
'pull_file').

Signed-off-by: Oguz Bektas 
---
 src/PVE/RESTEnvironment.pm | 5 -
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/src/PVE/RESTEnvironment.pm b/src/PVE/RESTEnvironment.pm
index 1b2af08..5352aad 100644
--- a/src/PVE/RESTEnvironment.pm
+++ b/src/PVE/RESTEnvironment.pm
@@ -492,7 +492,10 @@ sub fork_worker {
 $dtype = 'unknown' if !defined ($dtype);
 $id = '' if !defined ($id);
 
-$user = 'root@pve' if !defined ($user);
+if (!defined($user)) {
+   warn 'Worker user was not specified, defaulting to "root@pam"!';
+   $user = 'root@pam';
+}
 
 my $sync = ($self->{type} eq 'cli' && !$background) ? 1 : 0;
 
-- 
2.30.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH v3 manager 0/4] ui: lxc/qemu: add reassign for disks and volumes

2022-03-07 Thread Aaron Lauterer
This series adds the UI to reassign a disk / volume from one guest to another.

To avoid button clutter, the Move, Reassign and Resize buttons are moved
into a new submenu called "Disk/Volume Action".

Patches 2 to 4 are optional. Patch 2 changes the labels for Move, Reassign
and Resize to remove "Volume" & "Disk", as we already have this in the
context of the submenu.

Patch 3 only changes a double-negated option, and patch 4 happened in the
process of working on an interface for the reassign functionality. Since
the work of modernizing this component is done, why not use it?

v3:
* change to Edit window, removing quite some boilerplate code
* create new submenu for disk/volume actions
* incorporate smaller style nits
* simplify other labels as well, removing 'Volume' and 'Disk' as the
  context gives that away already

v2: incorporated feedback I got off list, mainly
* using more modern approaches
* more arrow functions
* reducing use of predefined cbind values and using inline functions
  when possible

Aaron Lauterer (4):
  ui: lxc/qemu: add disk reassign and action submenu
  ui: lxc/qemu: disk/volume action simplify menu items
  ui: BusTypeSelector: change noVirtIO to withVirtIO
  ui: hdmove: modernize/refactor

 www/manager6/Makefile   |   1 +
 www/manager6/form/BusTypeSelector.js|   4 +-
 www/manager6/form/ControllerSelector.js |   4 +-
 www/manager6/lxc/Resources.js   |  66 --
 www/manager6/qemu/CDEdit.js |   2 +-
 www/manager6/qemu/CIDriveEdit.js|   2 +-
 www/manager6/qemu/HDMove.js | 192 -
 www/manager6/qemu/HDReassign.js | 274 
 www/manager6/qemu/HardwareView.js   |  65 +-
 9 files changed, 476 insertions(+), 134 deletions(-)
 create mode 100644 www/manager6/qemu/HDReassign.js

-- 
2.30.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH v3 manager 2/4] ui: lxc/qemu: disk/volume action simplify menu items

2022-03-07 Thread Aaron Lauterer
We already know that we are acting upon a disk / volume due to the
submenu we are in.

Signed-off-by: Aaron Lauterer 
---
 www/manager6/lxc/Resources.js | 6 +++---
 www/manager6/qemu/HardwareView.js | 6 +++---
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/www/manager6/lxc/Resources.js b/www/manager6/lxc/Resources.js
index 306a4988..a5df9182 100644
--- a/www/manager6/lxc/Resources.js
+++ b/www/manager6/lxc/Resources.js
@@ -202,7 +202,7 @@ Ext.define('PVE.lxc.RessourceView', {
});
 
var resize_menuitem = new Ext.menu.Item({
-   text: gettext('Resize disk'),
+   text: gettext('Resize'),
selModel: me.selModel,
disabled: true,
handler: run_resize,
@@ -247,7 +247,7 @@ Ext.define('PVE.lxc.RessourceView', {
});
 
let reassign_menuitem = new Ext.menu.Item({
-   text: gettext('Reassign Volume'),
+   text: gettext('Reassign'),
tooltip: gettext('Reassign volume to another CT'),
handler: run_reassign,
reference: 'reassing_item',
@@ -255,7 +255,7 @@ Ext.define('PVE.lxc.RessourceView', {
});
 
let move_menuitem = new Ext.menu.Item({
-   text: gettext('Move Volume'),
+   text: gettext('Move'),
selModel: me.selModel,
disabled: true,
handler: run_move,
diff --git a/www/manager6/qemu/HardwareView.js b/www/manager6/qemu/HardwareView.js
index 1f19269c..21f67cd0 100644
--- a/www/manager6/qemu/HardwareView.js
+++ b/www/manager6/qemu/HardwareView.js
@@ -443,20 +443,20 @@ Ext.define('PVE.qemu.HardwareView', {
 });
 
var resize_menuitem = new Ext.menu.Item({
-   text: gettext('Resize disk'),
+   text: gettext('Resize'),
selModel: sm,
disabled: true,
handler: run_resize,
});
 
let move_menuitem = new Ext.menu.Item({
-   text: gettext('Move disk'),
+   text: gettext('Move'),
selModel: sm,
handler: run_move,
});
 
let reassign_menuitem = new Ext.menu.Item({
-   text: gettext('Reassign Disk'),
+   text: gettext('Reassign'),
tooltip: gettext('Reassign disk to another VM'),
handler: run_reassign,
selModel: sm,
-- 
2.30.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH v3 manager 3/4] ui: BusTypeSelector: change noVirtIO to withVirtIO

2022-03-07 Thread Aaron Lauterer
Double negated properties make it harder than necessary to parse
conditions.

Signed-off-by: Aaron Lauterer 
---
 www/manager6/form/BusTypeSelector.js| 4 ++--
 www/manager6/form/ControllerSelector.js | 4 ++--
 www/manager6/qemu/CDEdit.js | 2 +-
 www/manager6/qemu/CIDriveEdit.js| 2 +-
 4 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/www/manager6/form/BusTypeSelector.js b/www/manager6/form/BusTypeSelector.js
index a420e56f..0f040229 100644
--- a/www/manager6/form/BusTypeSelector.js
+++ b/www/manager6/form/BusTypeSelector.js
@@ -2,7 +2,7 @@ Ext.define('PVE.form.BusTypeSelector', {
 extend: 'Proxmox.form.KVComboBox',
 alias: 'widget.pveBusSelector',
 
-noVirtIO: false,
+withVirtIO: true,
 withUnused: false,
 
 initComponent: function() {
@@ -10,7 +10,7 @@ Ext.define('PVE.form.BusTypeSelector', {
 
me.comboItems = [['ide', 'IDE'], ['sata', 'SATA']];
 
-   if (!me.noVirtIO) {
+   if (me.withVirtIO) {
me.comboItems.push(['virtio', 'VirtIO Block']);
}
 
diff --git a/www/manager6/form/ControllerSelector.js b/www/manager6/form/ControllerSelector.js
index 798dc4b2..d84c49d6 100644
--- a/www/manager6/form/ControllerSelector.js
+++ b/www/manager6/form/ControllerSelector.js
@@ -2,7 +2,7 @@ Ext.define('PVE.form.ControllerSelector', {
 extend: 'Ext.form.FieldContainer',
 alias: 'widget.pveControllerSelector',
 
-noVirtIO: false,
+withVirtIO: true,
 withUnused: false,
 
 vmconfig: {}, // used to check for existing devices
@@ -73,7 +73,7 @@ Ext.define('PVE.form.ControllerSelector', {
name: 'controller',
itemId: 'controller',
value: PVE.qemu.OSDefaults.generic.busType,
-   noVirtIO: me.noVirtIO,
+   withVirtIO: me.withVirtIO,
withUnused: me.withUnused,
allowBlank: false,
flex: 2,
diff --git a/www/manager6/qemu/CDEdit.js b/www/manager6/qemu/CDEdit.js
index 72c01037..fc7a59cc 100644
--- a/www/manager6/qemu/CDEdit.js
+++ b/www/manager6/qemu/CDEdit.js
@@ -71,7 +71,7 @@ Ext.define('PVE.qemu.CDInputPanel', {
 
if (!me.confid) {
me.bussel = Ext.create('PVE.form.ControllerSelector', {
-   noVirtIO: true,
+   withVirtIO: false,
});
items.push(me.bussel);
}
diff --git a/www/manager6/qemu/CIDriveEdit.js b/www/manager6/qemu/CIDriveEdit.js
index 754b8353..a9ca8bf1 100644
--- a/www/manager6/qemu/CIDriveEdit.js
+++ b/www/manager6/qemu/CIDriveEdit.js
@@ -36,7 +36,7 @@ Ext.define('PVE.qemu.CIDriveInputPanel', {
me.items = [
{
xtype: 'pveControllerSelector',
-   noVirtIO: true,
+   withVirtIO: false,
itemId: 'drive',
fieldLabel: gettext('CloudInit Drive'),
name: 'drive',
-- 
2.30.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH v3 manager 4/4] ui: hdmove: modernize/refactor

2022-03-07 Thread Aaron Lauterer
Signed-off-by: Aaron Lauterer 
---
changes since

v2:
* switch from generic window to proxmox edit

v1: much of the feedback to the HDReassign.js from the
first patch has been incorporated here as well.

* reducing predefined cbind values for more arrow functions
* using more arrow functions in general
* template strings

 www/manager6/qemu/HDMove.js   | 192 ++
 www/manager6/qemu/HardwareView.js |   1 +
 2 files changed, 88 insertions(+), 105 deletions(-)

diff --git a/www/manager6/qemu/HDMove.js b/www/manager6/qemu/HDMove.js
index 181b7bdc..3e94479c 100644
--- a/www/manager6/qemu/HDMove.js
+++ b/www/manager6/qemu/HDMove.js
@@ -1,48 +1,102 @@
 Ext.define('PVE.window.HDMove', {
-extend: 'Ext.window.Window',
+extend: 'Proxmox.window.Edit',
+mixins: ['Proxmox.Mixin.CBind'],
 
 resizable: false,
+modal: true,
+width: 350,
+border: false,
+layout: 'fit',
+showReset: false,
+showProgress: true,
+method: 'POST',
+
+cbindData: function() {
+   let me = this;
+   return {
+   disk: me.disk,
+   isQemu: me.type === 'qemu',
+   nodename: me.nodename,
+   url: `/nodes/${me.nodename}/${me.type}/${me.vmid}/`,
+   };
+},
+
+cbind: {
+   title: get => get('isQemu') ? gettext("Move disk") : gettext('Move Volume'),
+   submitText: get => get('title'),
+   qemu: '{isQemu}',
+   url: '{url}',
+},
+
+submitUrl: function(url, values) {
+   url += this.qemu ? 'move_disk' : 'move_volume';
+   return url;
+},
 
+getValues: function() {
+   let me = this;
+   let values = me.formPanel.getForm().getValues();
 
-move_disk: function(disk, storage, format, delete_disk) {
-   var me = this;
-   var qemu = me.type === 'qemu';
-   var params = {};
-   params.storage = storage;
-   params[qemu ? 'disk':'volume'] = disk;
+   let params = {
+   storage: values.hdstorage,
+   };
+   params[me.qemu ? 'disk':'volume'] = me.disk;
 
-   if (format && qemu) {
-   params.format = format;
+   if (values.diskformat && me.qemu) {
+   params.format = values.diskformat;
}
 
-   if (delete_disk) {
+   if (values.deleteDisk) {
params.delete = 1;
}
-
-   var url = '/nodes/' + me.nodename + '/' + me.type + '/' + me.vmid + '/';
-   url += qemu ? 'move_disk' : 'move_volume';
-
-   Proxmox.Utils.API2Request({
-   params: params,
-   url: url,
-   waitMsgTarget: me,
-   method: 'POST',
-   failure: function(response, opts) {
-   Ext.Msg.alert('Error', response.htmlStatus);
-   },
-   success: function(response, options) {
-   var upid = response.result.data;
-   var win = Ext.create('Proxmox.window.TaskViewer', {
-   upid: upid,
-   });
-   win.show();
-   win.on('destroy', function() { me.close(); });
-   },
-   });
+   return params;
 },
+items: [
+   {
+   xtype: 'form',
+   reference: 'moveFormPanel',
+   bodyPadding: 10,
+   border: false,
+   fieldDefaults: {
+   labelWidth: 100,
+   anchor: '100%',
+   },
+   items: [
+   {
+   xtype: 'displayfield',
+   cbind: {
+   name: get => get('isQemu') ? 'disk' : 'volume',
+   fieldLabel: get => get('isQemu')
+   ? gettext('Disk')
+   : gettext('Mount Point'),
+   value: '{disk}',
+   },
+   vtype: 'StorageId',
+   allowBlank: false,
+   },
+   {
+   xtype: 'pveDiskStorageSelector',
+   storageLabel: gettext('Target Storage'),
+   cbind: {
+   nodename: '{nodename}',
+   storageContent: get => get('isQemu') ? 'images' : 'rootdir',
+   hideFormat: get => get('disk') === 'tpmstate0',
+   },
+   hideSize: true,
+   },
+   {
+   xtype: 'proxmoxcheckbox',
+   fieldLabel: gettext('Delete source'),
+   name: 'deleteDisk',
+   uncheckedValue: 0,
+   checked: false,
+   },
+   ],
+   },
+],
 
 initComponent: function() {
-   var me = this;
+   let me = this;
 
if (!me.nodename) {
throw "no node name specified";
@@ -53,81 +107,9 @@ Ext.define('PVE.window.HDMove', {
}
 
if (!me.type) {
-   me.type = 'qemu';
+   throw "no type specified";
}
 
-   var qemu = me.type === 'qemu';
-
-var items = [
-{
-

[pve-devel] [PATCH v3 manager 1/4] ui: lxc/qemu: add disk reassign and action submenu

2022-03-07 Thread Aaron Lauterer
For the new HDReassign component, we follow the approach of HDMove to
have one component for qemu and lxc.

To avoid button clutter, a new "Disk/Volume action" button is
introduced. It holds the Move, Reassign and Resize buttons in a submenu.

Signed-off-by: Aaron Lauterer 
---
changes since

v2:
* switch from generic to Proxmox Edit window
* add new submenu for disk/volume specific actions
* code style improvements
* simplify some labels, removing "disk" and "volume" as the context
  already gives this away


v1: incorporated feedback I got off list

* use more modern approaches
* arrow functions
* autoShow
* template strings
* reduce predefined cbind values and use arrow functions in the cbind
  directly in many cases
* some code style issues and cleanup

 www/manager6/Makefile |   1 +
 www/manager6/lxc/Resources.js |  62 +--
 www/manager6/qemu/HDReassign.js   | 274 ++
 www/manager6/qemu/HardwareView.js |  60 ++-
 4 files changed, 378 insertions(+), 19 deletions(-)
 create mode 100644 www/manager6/qemu/HDReassign.js

diff --git a/www/manager6/Makefile b/www/manager6/Makefile
index e6e01bd1..94a78d89 100644
--- a/www/manager6/Makefile
+++ b/www/manager6/Makefile
@@ -214,6 +214,7 @@ JSSRC= \
qemu/HDTPM.js   \
qemu/HDMove.js  \
qemu/HDResize.js\
+   qemu/HDReassign.js  \
qemu/HardwareView.js\
qemu/IPConfigEdit.js\
qemu/KeyboardEdit.js\
diff --git a/www/manager6/lxc/Resources.js b/www/manager6/lxc/Resources.js
index 15ee3c67..306a4988 100644
--- a/www/manager6/lxc/Resources.js
+++ b/www/manager6/lxc/Resources.js
@@ -151,7 +151,8 @@ Ext.define('PVE.lxc.RessourceView', {
});
};
 
-   var run_move = function(b, e, rec) {
+   var run_move = function() {
+   let rec = me.selModel.getSelection()[0];
if (!rec) {
return;
}
@@ -168,6 +169,24 @@ Ext.define('PVE.lxc.RessourceView', {
win.on('destroy', me.reload, me);
};
 
+   let run_reassign = function() {
+   let rec = me.selModel.getSelection()[0];
+   if (!rec) {
+   return;
+   }
+
+   Ext.create('PVE.window.HDReassign', {
+   disk: rec.data.key,
+   nodename: nodename,
+   autoShow: true,
+   vmid: vmid,
+   type: 'lxc',
+   listeners: {
+   destroy: () => me.reload(),
+   },
+   });
+   };
+
var edit_btn = new Proxmox.button.Button({
text: gettext('Edit'),
selModel: me.selModel,
@@ -182,7 +201,7 @@ Ext.define('PVE.lxc.RessourceView', {
handler: function() { me.run_editor(); },
});
 
-   var resize_btn = new Proxmox.button.Button({
+   var resize_menuitem = new Ext.menu.Item({
text: gettext('Resize disk'),
selModel: me.selModel,
disabled: true,
@@ -227,14 +246,34 @@ Ext.define('PVE.lxc.RessourceView', {
},
});
 
-   var move_btn = new Proxmox.button.Button({
+   let reassign_menuitem = new Ext.menu.Item({
+   text: gettext('Reassign Volume'),
+   tooltip: gettext('Reassign volume to another CT'),
+   handler: run_reassign,
+   reference: 'reassing_item',
+   disabled: true,
+   });
+
+   let move_menuitem = new Ext.menu.Item({
text: gettext('Move Volume'),
selModel: me.selModel,
disabled: true,
-   dangerous: true,
handler: run_move,
});
 
+   let volumeaction_btn = new Proxmox.button.Button({
+   text: gettext('Volume Action'),
+   disabled: true,
+   menu: {
+   plain: true,
+   items: [
+   move_menuitem,
+   reassign_menuitem,
+   resize_menuitem,
+   ],
+   },
+   });
+
var revert_btn = new PVE.button.PendingRevert();
 
var set_button_status = function() {
@@ -243,7 +282,7 @@ Ext.define('PVE.lxc.RessourceView', {
if (!rec) {
edit_btn.disable();
remove_btn.disable();
-   resize_btn.disable();
+   volumeaction_btn.disable();
revert_btn.disable();
return;
}
@@ -253,6 +292,7 @@ Ext.define('PVE.lxc.RessourceView', {
 
var pending = rec.data.delete || me.hasPendingChanges(key);
var isDisk = rowdef.tdCls === 'pve-itype-icon-storage';
+   let isRootFS = rec.data.key === 'rootfs';
var isUnusedDisk = key.match(/^unused\d+/);

Re: [pve-devel] Vmbr bridge permissions and SDN improvements?

2022-03-07 Thread Neil Hawker
Hi Eneko

Thank you for the suggestion; we hadn’t thought about nested virtualization, 
which is an interesting idea. My initial thoughts are that this would create 
additional complexity with management of the platform (provisioning, 
authentication and licensing) and system overheads.

Your suggestion, however, has given me the thought that we could use nested 
virtualization for pen testing purposes in future, by having an all-in-one VM 
containing its sub-VMs/networks.

Ideally, if the use of vmbr bridges could be restricted using the permissions 
Spirit proposed in their changes, that would require minimal configuration 
changes for us to make, particularly mid-academic year.

Thanks

From: Eneko Lacunza 
Sent: 07 March 2022 08:56
To: Proxmox VE development discussion ; Neil 
Hawker 
Subject: Re: [pve-devel] Vmbr bridge permissions and SDN improvements?


Hi Neil,

Have you considered using nested Proxmox servers, so that you only have the 
desired networks in students' nested Proxmoxes?

Cheers

On 4/3/22 at 12:08, Neil Hawker wrote:

Hi,

We're currently using version 7.1-10 and have the use case where we need to 
hide the vmbr bridges from normal users to prevent them circumventing network 
security that is applied through SDN vNets.

For context, our setup is a Proxmox cluster that is used as a learning 
environment for students where they can create and manage their own VMs to 
practice their Cybersecurity skills in an isolated environment. Being able to 
hide the vmbr bridges from users would achieve this.

I have found on the community forum 
(https://forum.proxmox.com/threads/sdn-group-pool-permissions.93872) that 
Spirit had contributed changes, not yet accepted/merged, that would achieve 
this as well as some SDN GUI improvements.

I appreciate developers are very busy, but is it possible for Spirit's changes 
to be included in an upcoming version, and if so, is there any rough idea of 
when they might get released?

Thanks
Neil

___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel





Eneko Lacunza
Technical Director
Binovo IT Human Project

Tel. +34 943 569 206 | https://www.binovo.es
Astigarragako Bidea, 2 - 2º izda. Oficina 10-11, 20180 Oiartzun

https://www.youtube.com/user/CANALBINOVO
https://www.linkedin.com/company/37269706/
___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH qemu-server 0/2] close #2949: add virtio-mem support

2022-03-07 Thread Alexandre Derumier
This patch adds virtio-mem support through a new maxmemory option.

4 GB of static memory is needed for DMA+boot memory, as this memory
is almost always not unpluggable.

One virtio-mem PCI device is set up for each NUMA node on the pci.4 bridge.

virtio-mem can map at most 32000 blocks, so the block size is computed
as maxmemory/32000, with a minimum of 2 MB to map THP
(a lower block size means a better chance to unplug memory).

Tested with a Debian 11 guest with kernel 5.10.

more info about virtio-mem:
https://virtio-mem.gitlab.io/
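
As a rough sketch, the block-size computation described above amounts to
the following (mirrors the helper in patch 1/2; $maxmemory and
$static_memory are sizes in MB):

    sub virtiomem_block_size {
        my ($maxmemory, $static_memory) = @_;
        # virtio-mem can map at most 32000 blocks; prefer the lowest block
        # size, since that gives a better chance to unplug memory
        my $blocksize = ($maxmemory - $static_memory) / 32000;
        # round up to the next power of two
        $blocksize = 2**(int(log($blocksize) / log(2)) + 1);
        # 2 MB minimum, to stay aligned with THP
        $blocksize = 2 if $blocksize < 2;
        return $blocksize;
    }

For example, maxmemory=131072 with 4096 MB of static memory gives
(131072 - 4096) / 32000 = 3.97, rounded up to a 4 MB block size, matching
the block-size=4M in the tests from patch 2/2.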

Alexandre Derumier (2):
  add virtio-mem support
  tests: add virtio-mem tests

 PVE/QemuServer.pm   |   9 +-
 PVE/QemuServer/Memory.pm| 130 +++-
 PVE/QemuServer/PCI.pm   |   8 ++
 test/cfg2cmd/simple-virtio-mem-big.conf |  12 ++
 test/cfg2cmd/simple-virtio-mem-big.conf.cmd |  59 +
 test/cfg2cmd/simple-virtio-mem.conf |  13 ++
 test/cfg2cmd/simple-virtio-mem.conf.cmd |  31 +
 7 files changed, 233 insertions(+), 29 deletions(-)
 create mode 100644 test/cfg2cmd/simple-virtio-mem-big.conf
 create mode 100644 test/cfg2cmd/simple-virtio-mem-big.conf.cmd
 create mode 100644 test/cfg2cmd/simple-virtio-mem.conf
 create mode 100644 test/cfg2cmd/simple-virtio-mem.conf.cmd

-- 
2.30.2


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH qemu-server 2/2] tests: add virtio-mem tests

2022-03-07 Thread Alexandre Derumier
Signed-off-by: Alexandre Derumier 
---
 test/cfg2cmd/simple-virtio-mem-big.conf | 12 +
 test/cfg2cmd/simple-virtio-mem-big.conf.cmd | 59 +
 test/cfg2cmd/simple-virtio-mem.conf | 13 +
 test/cfg2cmd/simple-virtio-mem.conf.cmd | 31 +++
 4 files changed, 115 insertions(+)
 create mode 100644 test/cfg2cmd/simple-virtio-mem-big.conf
 create mode 100644 test/cfg2cmd/simple-virtio-mem-big.conf.cmd
 create mode 100644 test/cfg2cmd/simple-virtio-mem.conf
 create mode 100644 test/cfg2cmd/simple-virtio-mem.conf.cmd

diff --git a/test/cfg2cmd/simple-virtio-mem-big.conf b/test/cfg2cmd/simple-virtio-mem-big.conf
new file mode 100644
index 000..936da4b
--- /dev/null
+++ b/test/cfg2cmd/simple-virtio-mem-big.conf
@@ -0,0 +1,12 @@
+# TEST: virtio-mem with 128GB ram && 8 numa nodes
+maxmemory: 131072
+bootdisk: scsi0
+cores: 1
+memory: 8192
+name: simple
+numa: 1
+ostype: l26
+scsihw: virtio-scsi-pci
+smbios1: uuid=7b10d7af-b932-4c66-b2c3-3996152ec465
+sockets: 8
+vmgenid: c773c261-d800-4348-1010-1010add53cf8
diff --git a/test/cfg2cmd/simple-virtio-mem-big.conf.cmd b/test/cfg2cmd/simple-virtio-mem-big.conf.cmd
new file mode 100644
index 000..7962b62
--- /dev/null
+++ b/test/cfg2cmd/simple-virtio-mem-big.conf.cmd
@@ -0,0 +1,59 @@
+/usr/bin/kvm \
+  -id 8006 \
+  -name simple \
+  -no-shutdown \
+  -chardev 'socket,id=qmp,path=/var/run/qemu-server/8006.qmp,server=on,wait=off' \
+  -mon 'chardev=qmp,mode=control' \
+  -chardev 'socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=5' \
+  -mon 'chardev=qmp-event,mode=control' \
+  -pidfile /var/run/qemu-server/8006.pid \
+  -daemonize \
+  -smbios 'type=1,uuid=7b10d7af-b932-4c66-b2c3-3996152ec465' \
+  -smp '8,sockets=8,cores=1,maxcpus=8' \
+  -nodefaults \
+  -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' \
+  -vnc 'unix:/var/run/qemu-server/8006.vnc,password=on' \
+  -cpu kvm64,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,+lahf_lm,+sep \
+  -m 'size=4096,maxmem=131072M' \
+  -object 'memory-backend-ram,id=ram-node0,size=512M' \
+  -numa 'node,nodeid=0,cpus=0,memdev=ram-node0' \
+  -object 'memory-backend-ram,id=ram-node1,size=512M' \
+  -numa 'node,nodeid=1,cpus=1,memdev=ram-node1' \
+  -object 'memory-backend-ram,id=ram-node2,size=512M' \
+  -numa 'node,nodeid=2,cpus=2,memdev=ram-node2' \
+  -object 'memory-backend-ram,id=ram-node3,size=512M' \
+  -numa 'node,nodeid=3,cpus=3,memdev=ram-node3' \
+  -object 'memory-backend-ram,id=ram-node4,size=512M' \
+  -numa 'node,nodeid=4,cpus=4,memdev=ram-node4' \
+  -object 'memory-backend-ram,id=ram-node5,size=512M' \
+  -numa 'node,nodeid=5,cpus=5,memdev=ram-node5' \
+  -object 'memory-backend-ram,id=ram-node6,size=512M' \
+  -numa 'node,nodeid=6,cpus=6,memdev=ram-node6' \
+  -object 'memory-backend-ram,id=ram-node7,size=512M' \
+  -numa 'node,nodeid=7,cpus=7,memdev=ram-node7' \
+  -object 'memory-backend-ram,id=mem-virtiomem0,size=15872M' \
+  -object 'memory-backend-ram,id=mem-virtiomem1,size=15872M' \
+  -object 'memory-backend-ram,id=mem-virtiomem2,size=15872M' \
+  -object 'memory-backend-ram,id=mem-virtiomem3,size=15872M' \
+  -object 'memory-backend-ram,id=mem-virtiomem4,size=15872M' \
+  -object 'memory-backend-ram,id=mem-virtiomem5,size=15872M' \
+  -object 'memory-backend-ram,id=mem-virtiomem6,size=15872M' \
+  -object 'memory-backend-ram,id=mem-virtiomem7,size=15872M' \
+  -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' \
+  -device 'pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f' \
+  -device 'pci-bridge,id=pci.4,chassis_nr=4,bus=pci.1,addr=0x1c' \
+  -device 'vmgenid,guid=c773c261-d800-4348-1010-1010add53cf8' \
+  -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' \
+  -device 'usb-tablet,id=tablet,bus=uhci.0,port=1' \
+  -device 'VGA,id=vga,bus=pci.0,addr=0x2' \
+  -device 'virtio-mem-pci,block-size=4M,requested-size=512M,id=virtiomem0,memdev=mem-virtiomem0,node=0,bus=pci.4,addr=0x4' \
+  -device 'virtio-mem-pci,block-size=4M,requested-size=512M,id=virtiomem1,memdev=mem-virtiomem1,node=1,bus=pci.4,addr=0x5' \
+  -device 'virtio-mem-pci,block-size=4M,requested-size=512M,id=virtiomem2,memdev=mem-virtiomem2,node=2,bus=pci.4,addr=0x6' \
+  -device 'virtio-mem-pci,block-size=4M,requested-size=512M,id=virtiomem3,memdev=mem-virtiomem3,node=3,bus=pci.4,addr=0x7' \
+  -device 'virtio-mem-pci,block-size=4M,requested-size=512M,id=virtiomem4,memdev=mem-virtiomem4,node=4,bus=pci.4,addr=0x8' \
+  -device 'virtio-mem-pci,block-size=4M,requested-size=512M,id=virtiomem5,memdev=mem-virtiomem5,node=5,bus=pci.4,addr=0x9' \
+  -device 'virtio-mem-pci,block-size=4M,requested-size=512M,id=virtiomem6,memdev=mem-virtiomem6,node=6,bus=pci.4,addr=0xa' \
+  -device 'virtio-mem-pci,block-size=4M,requested-size=512M,id=virtiomem7,memdev=mem-virtiomem7,node=7,bus=pci.4,addr=0xb' \
+  -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3' \
+  -iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \

[pve-devel] [PATCH qemu-server 1/2] add virtio-mem support

2022-03-07 Thread Alexandre Derumier
This patch adds virtio-mem support through a new maxmemory option.

4 GB of static memory is needed for DMA+boot memory, as this memory
is almost always not unpluggable.

One virtio-mem PCI device is set up for each NUMA node on the pci.4 bridge.

virtio-mem can map at most 32000 blocks, so the block size is computed
as maxmemory/32000, with a minimum of 2 MB to map THP
(a lower block size means a better chance to unplug memory).

Signed-off-by: Alexandre Derumier 
---
 PVE/QemuServer.pm|   9 ++-
 PVE/QemuServer/Memory.pm | 130 ++-
 PVE/QemuServer/PCI.pm|   8 +++
 3 files changed, 118 insertions(+), 29 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index bb44e58..a52ca95 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -340,6 +340,13 @@ my $confdesc = {
maximum => 262144,
default => 'cgroup v1: 1024, cgroup v2: 100',
 },
+maxmemory => {
+   optional => 1,
+   type => 'integer',
+   description => "Max hotpluggable virtio-mem memory",
+   minimum => 4096,
+   default => undef,
+},
 memory => {
optional => 1,
type => 'integer',
@@ -3758,7 +3765,7 @@ sub config_to_command {
 push @$cmd, get_cpu_options($conf, $arch, $kvm, $kvm_off, $machine_version, $winversion, $gpu_passthrough);
 }
 
-PVE::QemuServer::Memory::config($conf, $vmid, $sockets, $cores, $defaults, $hotplug_features, $cmd);
+PVE::QemuServer::Memory::config($conf, $vmid, $sockets, $cores, $defaults, $hotplug_features, $cmd, $devices, $bridges, $arch, $machine_type);
 
 push @$cmd, '-S' if $conf->{freeze};
 
diff --git a/PVE/QemuServer/Memory.pm b/PVE/QemuServer/Memory.pm
index a41f5ae..9b7c33c 100644
--- a/PVE/QemuServer/Memory.pm
+++ b/PVE/QemuServer/Memory.pm
@@ -8,11 +8,48 @@ use PVE::Exception qw(raise raise_param_exc);
 
 use PVE::QemuServer;
 use PVE::QemuServer::Monitor qw(mon_cmd);
+use PVE::QemuServer::PCI qw(print_pci_addr);
 
 my $MAX_NUMA = 8;
 my $MAX_MEM = 4194304;
 my $STATICMEM = 1024;
 
+my $compute_static_mem = sub {
+my ($conf, $defaults) = @_;
+
+my $sockets = 1;
+$sockets = $conf->{smp} if $conf->{smp}; # old style - no longer used
+$sockets = $conf->{sockets} if $conf->{sockets};
+my $hotplug_features = PVE::QemuServer::parse_hotplug_features(defined($conf->{hotplug}) ? $conf->{hotplug} : '1');
+
+my $static_memory = 0;
+
+if ($hotplug_features->{memory} || $conf->{maxmemory}) {
+   $static_memory = $STATICMEM;
+   $static_memory = $static_memory * $sockets if ($conf->{hugepages} && $conf->{hugepages} == 1024);
+   $static_memory = 4096 if $conf->{maxmemory};
+} else {
+   $static_memory = $conf->{memory} || $defaults->{memory};
+}
+
+return $static_memory;
+};
+
+my $compute_virtiomem_block_size = sub {
+my ($conf, $static_memory) = @_;
+
+my $maxmemory = $conf->{maxmemory};
+return undef if !$maxmemory;
+
+#virtiomem can map at most 32000 blocks. try to use the lowest blocksize, lower = more chance to unplug memory.
+my $blocksize = ($maxmemory - $static_memory) / 32000;
+#round next power of 2
+$blocksize = 2**(int(log($blocksize)/log(2))+1);
+#2MB is the minimum to be aligned with THP
+$blocksize = 2 if $blocksize < 2;
+return $blocksize;
+};
+
 sub get_numa_node_list {
 my ($conf) = @_;
 my @numa_map;
@@ -104,6 +141,8 @@ sub foreach_reverse_dimm {
 }
 }
 
+
+
 sub qemu_memory_hotplug {
 my ($vmid, $conf, $defaults, $opt, $value) = @_;
 
@@ -116,13 +155,45 @@ sub qemu_memory_hotplug {
 $value = $defaults->{memory} if !$value;
 return $value if $value == $memory;
 
-my $static_memory = $STATICMEM;
-$static_memory = $static_memory * $sockets if ($conf->{hugepages} && $conf->{hugepages} == 1024);
+my $static_memory = &$compute_static_mem($conf, $defaults);
+my $maxmemory = $conf->{maxmemory} || $MAX_MEM;
 
 die "memory can't be lower than $static_memory MB" if $value < 
$static_memory;
-die "you cannot add more memory than $MAX_MEM MB!\n" if $memory > $MAX_MEM;
+die "you cannot add more memory than $maxmemory MB!\n" if $value > 
$maxmemory;
+
+if ($conf->{maxmemory}) {
+
+   my $requested_size = ($value - $static_memory) / $sockets * 1024 * 1024;
 
-if($value > $memory) {
+   my $totalsize = $static_memory;
+   my $err = undef;
+
+   for (my $i = 0; $i < $sockets; $i++)  {
+
+   my $id = "virtiomem$i";
+   my $retry = 0;
+   mon_cmd($vmid, 'qom-set', path => "/machine/peripheral/$id", property => "requested-size", value => int($requested_size));
+
+   my $size = 0;
+   while (1) {
+   sleep 1;
+   $size = mon_cmd($vmid, 'qom-get', path => "/machine/peripheral/$id", property => "size");
+   $err = 1 if $retry > 5;
+   last if $size eq $requested_size || $retry > 5;
+   $retry++;
+   }
+

[pve-devel] [PATCH pve-docs] pve-network: Fix routed configuration example

2022-03-07 Thread Dylan Whyte
In my previous fixup, I forgot to update the interface name in the line
that enables proxy ARP.

Signed-off-by: Dylan Whyte 
---
 pve-network.adoc | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/pve-network.adoc b/pve-network.adoc
index f92ba4d..c5e9a17 100644
--- a/pve-network.adoc
+++ b/pve-network.adoc
@@ -197,7 +197,7 @@ iface eno0 inet static
 address  198.51.100.5/29
 gateway  198.51.100.1
 post-up echo 1 > /proc/sys/net/ipv4/ip_forward
-post-up echo 1 > /proc/sys/net/ipv4/conf/eno1/proxy_arp
+post-up echo 1 > /proc/sys/net/ipv4/conf/eno0/proxy_arp
 
 
 auto vmbr0
-- 
2.30.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



Re: [pve-devel] Vmbr bridge permissions and SDN improvements?

2022-03-07 Thread DERUMIER, Alexandre
Hi,
my patches from october are here
https://lists.proxmox.com/pipermail/pve-devel/2021-October/050211.html

(does somebody have time to review them ?)


Le vendredi 04 mars 2022 à 11:08 +, Neil Hawker a écrit :
> Hi,
> 
> We're currently using version 7.1-10 and have the use case where we
> need to hide the vmbr bridges from normal users to prevent them
> circumventing network security that is applied through SDN vNets.
> 
> For context, our setup is a Proxmox cluster that is used as a
> learning environment for students where they can create and manage
> their own VMs to practice their Cybersecurity skills in an isolated
> environment. Being able to hide the vmbr bridges from users would
> achieve this.
> 
> I have found on the community forum
> (https://forum.proxmox.com/threads/sdn-group-pool-permissions.93872)
> that Spirit had contributed changes, not yet accepted/merged, that
> would achieve this as well as some SDN GUI
> improvements.
> 
> I appreciate developers are very busy, but is it possible for Spirit's
> changes to be included in an upcoming version and if so, any rough
> idea when they might get released?
> 
> Thanks
> Neil
> ___
> pve-devel mailing list
> pve-devel@lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
> 

___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH v11 qemu-server 01/14] device unplug: verify that unplugging scsi disk completed

2022-03-07 Thread Fabian Ebner
Avoids the error
  adding drive failed: Duplicate ID 'drive-scsi1' for drive
that could happen when switching over to a new disk (e.g. via qm set),
if unplugging wasn't fast enough.
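
Illustrated, the unplug path for a SCSI disk now waits for QEMU to confirm
the removal before deleting the drive (function names as in the diff below;
comments added for clarity):

    qemu_devicedel($vmid, $deviceid);        # request device removal via QMP
    qemu_devicedelverify($vmid, $deviceid);  # wait until the removal completed
    qemu_drivedel($vmid, $deviceid);         # only now is 'drive-scsi1' free again
    qemu_deletescsihw($conf, $vmid, $deviceid);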

Signed-off-by: Fabian Ebner 
---

New in v11.

 PVE/QemuServer.pm | 1 +
 1 file changed, 1 insertion(+)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 42f0fbd..b7e6a8e 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -4249,6 +4249,7 @@ sub vm_deviceunplug {
my $device = parse_drive($deviceid, $conf->{$deviceid});
 
qemu_devicedel($vmid, $deviceid);
+   qemu_devicedelverify($vmid, $deviceid);
qemu_drivedel($vmid, $deviceid);
qemu_deletescsihw($conf, $vmid, $deviceid);
 
-- 
2.30.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH v11 qemu-server 04/14] clone disk: remove check for min QEMU version 2.7

2022-03-07 Thread Fabian Ebner
Upgrading a cluster node entails re-starting or migrating VMs and even
PVE 6.0 already had QEMU 4.0.

Signed-off-by: Fabian Ebner 
---

New in v11.

 PVE/QemuServer.pm | 7 ---
 1 file changed, 7 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index b7e6a8e..c0fca49 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -7642,15 +7642,8 @@ sub clone_disk {
	    qemu_img_convert($drive->{file}, $newvolid, $size, $snapname, $sparseinit);
	}
    } else {
-
	die "cannot move TPM state while VM is running\n" if $drivename eq 'tpmstate0';
 
-	my $kvmver = get_running_qemu_version ($vmid);
-	if (!min_version($kvmver, 2, 7)) {
-	    die "drive-mirror with iothread requires qemu version 2.7 or higher\n" if $drive->{iothread};
-	}
-
	qemu_drive_mirror($vmid, $drivename, $newvolid, $newvmid, $sparseinit, $jobs,
	    $completion, $qga, $bwlimit);
}
-- 
2.30.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH v11 qemu-server 10/14] image convert: allow block device as source

2022-03-07 Thread Fabian Ebner
Necessary to import from an existing storage using block-device
volumes like ZFS.
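
For context, Perl's file test operators distinguish exactly these two cases
(illustrative snippet; the zvol path is a made-up example):

    # -f: plain file (e.g. a .qcow2/.raw image on a directory storage)
    # -b: block special file (e.g. a ZFS zvol under /dev/zvol/...)
    my $src_path = '/dev/zvol/rpool/data/vm-100-disk-0';
    die "source '$src_path' is neither a regular file nor a block device\n"
        if !-f $src_path && !-b $src_path;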

Signed-off-by: Dominic Jäger 
[split into its own patch]
Signed-off-by: Fabian Ebner 
---

No changes from v10.

 PVE/QemuServer.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index f1b1aa3..339536a 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -7312,7 +7312,7 @@ sub qemu_img_convert {
$src_path = PVE::Storage::path($storecfg, $src_volid, $snapname);
$src_is_iscsi = ($src_path =~ m|^iscsi://|);
$cachemode = 'none' if $src_scfg->{type} eq 'zfspool';
-} elsif (-f $src_volid) {
+} elsif (-f $src_volid || -b $src_volid) {
$src_path = $src_volid;
if ($src_path =~ m/\.($PVE::QemuServer::Drive::QEMU_FORMAT_RE)$/) {
$src_format = $1;
-- 
2.30.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH v11 qemu-server 02/14] api: create disks: always activate/update size when attaching existing volume

2022-03-07 Thread Fabian Ebner
For creation, activation and size update never triggered, because the
passed in $conf is essentially the same as the creation $settings, so
the disk was always detected to be the same as the "existing" one. But
actually, all disks are new, so it makes sense to do it.

For update, activation and size update nearly always triggered,
because only the pending changes are passed in as $conf. The case
where it didn't trigger is when the same pending change was made twice
(there are cases where hotplug isn't done, but makes it even more
unlikely).

Signed-off-by: Fabian Ebner 
---

New in v11.

 PVE/API2/Qemu.pm | 21 -
 1 file changed, 4 insertions(+), 17 deletions(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 9be1caf..02b26d2 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -213,26 +213,13 @@ my $create_disks = sub {
delete $disk->{format}; # no longer needed
$res->{$ds} = PVE::QemuServer::print_drive($disk);
} else {
-
	PVE::Storage::check_volume_access($rpcenv, $authuser, $storecfg, $vmid, $volid);
 
-   my $volid_is_new = 1;
-
-   if ($conf->{$ds}) {
-   my $olddrive = PVE::QemuServer::parse_drive($ds, $conf->{$ds});
-   $volid_is_new = undef if $olddrive->{file} && $olddrive->{file} eq $volid;
-   }
-
-   if ($volid_is_new) {
+   PVE::Storage::activate_volumes($storecfg, [ $volid ]) if $storeid;
 
-   PVE::Storage::activate_volumes($storecfg, [ $volid ]) if $storeid;
-
-   my $size = PVE::Storage::volume_size_info($storecfg, $volid);
-
-   die "volume $volid does not exist\n" if !$size;
-
-   $disk->{size} = $size;
-   }
+   my $size = PVE::Storage::volume_size_info($storecfg, $volid);
+   die "volume $volid does not exist\n" if !$size;
+   $disk->{size} = $size;
 
$res->{$ds} = PVE::QemuServer::print_drive($disk);
}
-- 
2.30.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH v11 qemu-server 14/14] api: update vm: print drive string for newly allocated/imported drives

2022-03-07 Thread Fabian Ebner
In the spirit of c75bf16 ("qm importdisk: tell user to what VM disk we
actually imported"), and so that the information is not lost once qm
importdisk switches to re-using the API call.

Added for cloudinit too, because a new disk is allocated.

Signed-off-by: Fabian Ebner 
---

New in v11.

The name for cloudinit is rather predictable, so I'm not too sure if it's
worth it there.

 PVE/API2/Qemu.pm | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index c6d57e2..946fad0 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -318,6 +318,7 @@ my $create_disks = sub {
push @$vollist, $volid;
delete $disk->{format}; # no longer needed
$res->{$ds} = PVE::QemuServer::print_drive($disk);
+   print "$ds: successfully created disk '$res->{$ds}'\n";
} elsif ($volid =~ $NEW_DISK_RE) {
my ($storeid, $size) = ($2 || $default_storage, $3);
die "no storage ID specified (and no default storage)\n" if 
!$storeid;
@@ -388,6 +389,8 @@ my $create_disks = sub {
delete $disk->{format}; # no longer needed
$res->{$ds} = PVE::QemuServer::print_drive($disk);
}
+
+   print "$ds: successfully created disk '$res->{$ds}'\n";
} else {
	PVE::Storage::check_volume_access($rpcenv, $authuser, $storecfg, $vmid, $volid);
 
-- 
2.30.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH-SERIES v11 qemu-server/manager] API for disk import and OVF

2022-03-07 Thread Fabian Ebner
Extend qm importdisk/importovf functionality to the API.

Changes from v10:
* Add fix for device unplug issue (patch #1).
* Add fixes related to calling create_disks() (patches #2 #3).
* Refactor clone_disk() in preparation to re-use it for import
  (patches #4 #5 #6).
* Add patch to print the newly allocated drive (patch #14).
* Switch to using clone_disk for PVE-managed volumes and check for
  VM.Clone in the permission check if there is an owner ID.
* Require <storage>:0 syntax when using import-from (see the sketch
  after this list). Allowing other values than 0 for the size would be
  confusing, because with import-from that size is never used (the size
  of the source image is).
* Avoid making all foreach_volume iterators parse with the
  extended schema. Instead, provide a custom iterator for the
  places where it's actually required.
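
A minimal sketch of the resulting syntax from the API side (storage name
and image path are made-up examples; the third parse_drive() argument
enables the extended '_with_alloc' schema from patch 13/14):

    use PVE::QemuServer::Drive;

    # new disk on 'local-lvm' whose content is imported from an existing
    # image; with import-from, the size part must be 0, because the size
    # of the source image is used
    my $drive = PVE::QemuServer::Drive::parse_drive(
        'scsi1',
        'local-lvm:0,import-from=/mnt/images/disk.qcow2',
        1,
    );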

Still missing GUI integration for import from ovf, but that will be its
own series.

Previous discussion:
https://lists.proxmox.com/pipermail/pve-devel/2022-January/051379.html


qemu-server:

Dominic Jäger (1):
  api: support VM disk import

Fabian Ebner (13):
  device unplug: verify that unplugging scsi disk completed
  api: create disks: always activate/update size when attaching existing
volume
  api: update: pass correct config when creating disks
  clone disk: remove check for min QEMU version 2.7
  clone disk: group source and target parameters
  clone disk: allow cloning from an unused or unreferenced disk
  schema: add pve-volume-id-or-absolute-path
  parse ovf: untaint path when calling file_size_info
  api: add endpoint for parsing .ovf files
  image convert: allow block device as source
  api: factor out check/cleanup for drive params
  schema: drive: use separate schema when disk allocation is possible
  api: update vm: print drive string for newly allocated/imported drives

 PVE/API2/Qemu.pm | 365 ++-
 PVE/API2/Qemu/Makefile   |   2 +-
 PVE/API2/Qemu/OVF.pm |  55 ++
 PVE/QemuServer.pm|  99 +++---
 PVE/QemuServer/Drive.pm  |  94 ++---
 PVE/QemuServer/ImportDisk.pm |   2 +-
 PVE/QemuServer/OVF.pm|   9 +-
 7 files changed, 487 insertions(+), 139 deletions(-)
 create mode 100644 PVE/API2/Qemu/OVF.pm


manager:

Fabian Ebner (1):
  api: nodes: add readovf endpoint

 PVE/API2/Nodes.pm | 7 +++
 1 file changed, 7 insertions(+)

-- 
2.30.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH v11 qemu-server 11/14] api: factor out check/cleanup for drive params

2022-03-07 Thread Fabian Ebner
Signed-off-by: Fabian Ebner 
---

New in v11.

 PVE/API2/Qemu.pm | 38 +-
 1 file changed, 21 insertions(+), 17 deletions(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 01321c8..791a23f 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -63,6 +63,23 @@ my $resolve_cdrom_alias = sub {
 }
 };
 
+my $check_drive_param = sub {
+my ($param, $storecfg, $extra_checks) = @_;
+
+for my $opt (sort keys $param->%*) {
+   next if !PVE::QemuServer::is_valid_drivename($opt);
+
+   my $drive = PVE::QemuServer::parse_drive($opt, $param->{$opt});
+   raise_param_exc({ $opt => "unable to parse drive options" }) if !$drive;
+
+   PVE::QemuServer::cleanup_drive_path($opt, $storecfg, $drive);
+
+   $extra_checks->($drive) if $extra_checks;
+
+   $param->{$opt} = PVE::QemuServer::print_drive($drive);
+}
+};
+
 my $NEW_DISK_RE = qr!^(([^/:\s]+):)?(\d+(\.\d+)?)$!;
 my $check_storage_access = sub {
my ($rpcenv, $authuser, $storecfg, $vmid, $settings, $default_storage) = @_;
@@ -617,15 +634,7 @@ __PACKAGE__->register_method({
 
&$check_cpu_model_access($rpcenv, $authuser, $param);
 
-   foreach my $opt (keys %$param) {
-   if (PVE::QemuServer::is_valid_drivename($opt)) {
-   my $drive = PVE::QemuServer::parse_drive($opt, $param->{$opt});
-   raise_param_exc({ $opt => "unable to parse drive options" }) if !$drive;
-
-   PVE::QemuServer::cleanup_drive_path($opt, $storecfg, $drive);
-   $param->{$opt} = PVE::QemuServer::print_drive($drive);
-   }
-   }
+   $check_drive_param->($param, $storecfg);
 
PVE::QemuServer::add_random_macs($param);
} else {
@@ -1195,15 +1204,10 @@ my $update_vm_api  = sub {
die "cannot add non-replicatable volume to a replicated VM\n";
 };
 
+$check_drive_param->($param, $storecfg, $check_replication);
+
 foreach my $opt (keys %$param) {
-   if (PVE::QemuServer::is_valid_drivename($opt)) {
-   # cleanup drive path
-   my $drive = PVE::QemuServer::parse_drive($opt, $param->{$opt});
-   raise_param_exc({ $opt => "unable to parse drive options" }) if !$drive;
-   PVE::QemuServer::cleanup_drive_path($opt, $storecfg, $drive);
-   $check_replication->($drive);
-   $param->{$opt} = PVE::QemuServer::print_drive($drive);
-   } elsif ($opt =~ m/^net(\d+)$/) {
+   if ($opt =~ m/^net(\d+)$/) {
# add macaddr
my $net = PVE::QemuServer::parse_net($param->{$opt});
$param->{$opt} = PVE::QemuServer::print_net($net);
-- 
2.30.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH v11 qemu-server 06/14] clone disk: allow cloning from an unused or unreferenced disk

2022-03-07 Thread Fabian Ebner
and also when the source and target drive names are different. In those
cases, it is done via qemu-img convert/dd.

In preparation to allow import from existing PVE-managed disks.

Signed-off-by: Fabian Ebner 
---

New in v11.

 PVE/API2/Qemu.pm  |  2 ++
 PVE/QemuServer.pm | 29 +++--
 2 files changed, 21 insertions(+), 10 deletions(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 14cac5b..01321c8 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -3229,6 +3229,7 @@ __PACKAGE__->register_method({
my $dest_info = {
vmid => $newid,
conf => $oldconf, # because it's a clone
+   drivename => $opt,
storage => $storage,
format => $format,
};
@@ -3488,6 +3489,7 @@ __PACKAGE__->register_method({
my $dest_info = {
vmid => $vmid,
conf => $conf,
+   drivename => $disk,
storage => $storeid,
format => $format,
};
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 56437c5..0217d16 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -7574,15 +7574,25 @@ sub clone_disk {
 my ($storecfg, $source, $dest, $full, $newvollist, $jobs, $completion, $qga, $bwlimit) = @_;
 
 my ($vmid, $running) = $source->@{qw(vmid running)};
-my ($drivename, $drive, $snapname) = $source->@{qw(drivename drive snapname)};
+my ($src_drivename, $drive, $snapname) = $source->@{qw(drivename drive snapname)};
 
-my ($newvmid, $conf) = $dest->@{qw(vmid conf)};
+my ($newvmid, $conf, $dst_drivename) = $dest->@{qw(vmid conf drivename)};
 my ($storage, $format) = $dest->@{qw(storage format)};
 
+if ($src_drivename && $dst_drivename && $src_drivename ne $dst_drivename) {
+   die "cloning from/to EFI disk requires EFI disk\n"
+   if $src_drivename eq 'efidisk0' || $dst_drivename eq 'efidisk0';
+   die "cloning from/to TPM state requires TPM state\n"
+   if $src_drivename eq 'tpmstate0' || $dst_drivename eq 'tpmstate0';
+}
+
 my $newvolid;
 
+print "create " . ($full ? 'full' : 'linked') . " clone of drive ";
+print "$src_drivename " if $src_drivename;
+print "($drive->{file})\n";
+
 if (!$full) {
-   print "create linked clone of drive $drivename ($drive->{file})\n";
	$newvolid = PVE::Storage::vdisk_clone($storecfg,  $drive->{file}, $newvmid, $snapname);
push @$newvollist, $newvolid;
 } else {
@@ -7592,7 +7602,6 @@ sub clone_disk {
 
	my $dst_format = resolve_dst_disk_format($storecfg, $storeid, $volname, $format);
 
-   print "create full clone of drive $drivename ($drive->{file})\n";
my $name = undef;
my $size = undef;
if (drive_is_cloudinit($drive)) {
@@ -7603,9 +7612,9 @@ sub clone_disk {
}
$snapname = undef;
$size = PVE::QemuServer::Cloudinit::CLOUDINIT_DISK_SIZE;
-   } elsif ($drivename eq 'efidisk0') {
+   } elsif ($dst_drivename eq 'efidisk0') {
$size = get_efivars_size($conf);
-   } elsif ($drivename eq 'tpmstate0') {
+   } elsif ($dst_drivename eq 'tpmstate0') {
$dst_format = 'raw';
$size = PVE::QemuServer::Drive::TPMSTATE_DISK_SIZE;
} else {
@@ -7629,9 +7638,9 @@ sub clone_disk {
}
 
	my $sparseinit = PVE::Storage::volume_has_feature($storecfg, 'sparseinit', $newvolid);
-   if (!$running || $snapname) {
+   if (!$running || !$src_drivename || $snapname) {
# TODO: handle bwlimits
-   if ($drivename eq 'efidisk0') {
+   if ($dst_drivename eq 'efidisk0') {
		# the relevant data on the efidisk may be smaller than the source
# e.g. on RBD/ZFS, so we use dd to copy only the amount
# that is given by the OVMF_VARS.fd
@@ -7647,9 +7656,9 @@ sub clone_disk {
	    qemu_img_convert($drive->{file}, $newvolid, $size, $snapname, $sparseinit);
}
} else {
-   die "cannot move TPM state while VM is running\n" if $drivename eq 
'tpmstate0';
+   die "cannot move TPM state while VM is running\n" if $src_drivename 
eq 'tpmstate0';
 
-   qemu_drive_mirror($vmid, $drivename, $newvolid, $newvmid, 
$sparseinit, $jobs,
+   qemu_drive_mirror($vmid, $src_drivename, $newvolid, $newvmid, 
$sparseinit, $jobs,
$completion, $qga, $bwlimit);
}
 }
-- 
2.30.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH v11 qemu-server 09/14] api: add endpoint for parsing .ovf files

2022-03-07 Thread Fabian Ebner
Co-developed-by: Fabian Grünbichler 
Signed-off-by: Dominic Jäger 
[split into its own patch + minor improvements/style fixes]
Signed-off-by: Fabian Ebner 
---

Changes from v10:
* Add "Path to" to 'manifest' parameter description.

 PVE/API2/Qemu/Makefile |  2 +-
 PVE/API2/Qemu/OVF.pm   | 55 ++
 PVE/QemuServer.pm  | 32 
 3 files changed, 88 insertions(+), 1 deletion(-)
 create mode 100644 PVE/API2/Qemu/OVF.pm

diff --git a/PVE/API2/Qemu/Makefile b/PVE/API2/Qemu/Makefile
index 5d4abda..bdd4762 100644
--- a/PVE/API2/Qemu/Makefile
+++ b/PVE/API2/Qemu/Makefile
@@ -1,4 +1,4 @@
-SOURCES=Agent.pm CPU.pm Machine.pm
+SOURCES=Agent.pm CPU.pm Machine.pm OVF.pm
 
 .PHONY: install
 install:
diff --git a/PVE/API2/Qemu/OVF.pm b/PVE/API2/Qemu/OVF.pm
new file mode 100644
index 000..5fa0ef0
--- /dev/null
+++ b/PVE/API2/Qemu/OVF.pm
@@ -0,0 +1,55 @@
+package PVE::API2::Qemu::OVF;
+
+use strict;
+use warnings;
+
+use PVE::JSONSchema qw(get_standard_option);
+use PVE::QemuServer::OVF;
+use PVE::RESTHandler;
+
+use base qw(PVE::RESTHandler);
+
+__PACKAGE__->register_method ({
+name => 'index',
+path => '',
+method => 'GET',
+proxyto => 'node',
+description => "Read an .ovf manifest.",
+parameters => {
+   additionalProperties => 0,
+   properties => {
+   node => get_standard_option('pve-node'),
+   manifest => {
+   description => "Path to .ovf manifest.",
+   type => 'string',
+   },
+   },
+},
+returns => {
+   description => "VM config according to .ovf manifest.",
+   type => "object",
+},
+returns => {
+   type => 'object',
+   additionalProperties => 1,
+   properties => PVE::QemuServer::json_ovf_properties({}),
+},
+code => sub {
+   my ($param) = @_;
+
+   my $manifest = $param->{manifest};
+   die "check for file $manifest failed - $!\n" if !-f $manifest;
+
+   my $parsed = PVE::QemuServer::OVF::parse_ovf($manifest);
+   my $result;
+   $result->{cores} = $parsed->{qm}->{cores};
+   $result->{name} =  $parsed->{qm}->{name};
+   $result->{memory} = $parsed->{qm}->{memory};
+   my $disks = $parsed->{disks};
+   for my $disk (@$disks) {
+   $result->{$disk->{disk_address}} = $disk->{backing_file};
+   }
+   return $result;
+}});
+
+1;
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index b5fb457..f1b1aa3 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -2203,6 +2203,38 @@ sub json_config_properties {
 return $prop;
 }
 
+# Properties that we can read from an OVF file
+sub json_ovf_properties {
+my $prop = shift;
+
+for my $device (PVE::QemuServer::Drive::valid_drive_names()) {
+   $prop->{$device} = {
+   type => 'string',
+   format => 'pve-volume-id-or-absolute-path',
+   description => "Disk image that gets imported to $device",
+   optional => 1,
+   };
+}
+
+$prop->{cores} = {
+   type => 'integer',
+   description => "The number of CPU cores.",
+   optional => 1,
+};
+$prop->{memory} = {
+   type => 'integer',
+   description => "Amount of RAM for the VM in MB.",
+   optional => 1,
+};
+$prop->{name} = {
+   type => 'string',
+   description => "Name of the VM.",
+   optional => 1,
+};
+
+return $prop;
+}
+
 # return copy of $confdesc_cloudinit to generate documentation
 sub cloudinit_config_properties {
 
-- 
2.30.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH v11 qemu-server 13/14] api: support VM disk import

2022-03-07 Thread Fabian Ebner
From: Dominic Jäger 

Extend qm importdisk functionality to the API.

Co-authored-by: Fabian Grünbichler 
Co-authored-by: Dominic Jäger 
Signed-off-by: Fabian Ebner 
---

Changes from v10:
* Switch to using clone_disk for PVE-managed volumes and check for
  VM.Clone in the permission check if there is an owner ID.
* Require <storage>:0 syntax when using import-from. Allowing
  other values than 0 for the size would be confusing, because
  with import-from that size is never used (the size of the source
  image is). The check moved to check_drive_param, as that seemed
  to be the more fitting place.
* Avoid making all foreach_volume iterators parse with the
  extended schema. Instead, provide a custom iterator for the
  places where it's actually required.
* Mention that source volume should not be actively used in
  import-from description.
* Add missing newline to error for size check for source image,
  and also die when size is zero.
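
Usage sketch, for reference (my reading of the series, not
authoritative; storage and path are made up, and I'm assuming the qm
CLI passes the option through to the updated API unchanged):

    qm set 123 --scsi1 local-lvm:0,import-from=/mnt/images/source.qcow2

i.e. the target storage with a mandatory size of 0; the real size is
taken from the source image, which should not be in active use.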

 PVE/API2/Qemu.pm | 215 ++-
 PVE/QemuServer/Drive.pm  |  34 +-
 PVE/QemuServer/ImportDisk.pm |   2 +-
 3 files changed, 216 insertions(+), 35 deletions(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index f30c56f..c6d57e2 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -21,8 +21,9 @@ use PVE::ReplicationConfig;
 use PVE::GuestHelpers;
 use PVE::QemuConfig;
 use PVE::QemuServer;
-use PVE::QemuServer::Drive;
 use PVE::QemuServer::CPUConfig;
+use PVE::QemuServer::Drive;
+use PVE::QemuServer::ImportDisk;
 use PVE::QemuServer::Monitor qw(mon_cmd);
 use PVE::QemuServer::Machine;
 use PVE::QemuMigrate;
@@ -63,28 +64,46 @@ my $resolve_cdrom_alias = sub {
 }
 };
 
+# Used in import-enabled API endpoints. Parses drives using the extended '_with_alloc' schema.
+my $foreach_volume_with_alloc = sub {
+my ($param, $func) = @_;
+
+for my $opt (sort keys $param->%*) {
+   next if !PVE::QemuServer::is_valid_drivename($opt);
+
+   my $drive = PVE::QemuServer::Drive::parse_drive($opt, $param->{$opt}, 1);
+   next if !$drive;
+
+   $func->($opt, $drive);
+}
+};
+
+my $NEW_DISK_RE = qr!^(([^/:\s]+):)?(\d+(\.\d+)?)$!;
+
 my $check_drive_param = sub {
 my ($param, $storecfg, $extra_checks) = @_;
 
 for my $opt (sort keys $param->%*) {
next if !PVE::QemuServer::is_valid_drivename($opt);
 
-   my $drive = PVE::QemuServer::parse_drive($opt, $param->{$opt});
+   my $drive = PVE::QemuServer::parse_drive($opt, $param->{$opt}, 1);
raise_param_exc({ $opt => "unable to parse drive options" }) if !$drive;
 
+   die "'import-from' requires special syntax - use :0,import-from=\n"
+   if $drive->{'import-from'} && ($drive->{file} !~ $NEW_DISK_RE || $3 
!= 0);
+
PVE::QemuServer::cleanup_drive_path($opt, $storecfg, $drive);
 
$extra_checks->($drive) if $extra_checks;
 
-   $param->{$opt} = PVE::QemuServer::print_drive($drive);
+   $param->{$opt} = PVE::QemuServer::print_drive($drive, 1);
 }
 };
 
-my $NEW_DISK_RE = qr!^(([^/:\s]+):)?(\d+(\.\d+)?)$!;
 my $check_storage_access = sub {
my ($rpcenv, $authuser, $storecfg, $vmid, $settings, $default_storage) = @_;
 
-   PVE::QemuConfig->foreach_volume($settings, sub {
+   $foreach_volume_with_alloc->($settings, sub {
my ($ds, $drive) = @_;
 
my $isCDROM = PVE::QemuServer::drive_is_cdrom($drive);
@@ -106,6 +125,20 @@ my $check_storage_access = sub {
} else {
PVE::Storage::check_volume_access($rpcenv, $authuser, $storecfg, $vmid, $volid);
}
+
+   if (my $src_image = $drive->{'import-from'}) {
+   my $src_vmid;
+   my ($src_storeid) = PVE::Storage::parse_volume_id($src_image, 1);
+   if ($src_storeid) { # PVE-managed volume
+		$src_vmid = (PVE::Storage::parse_volname($storecfg, $src_image))[2]
+	    }
+
+	    if ($src_vmid) { # might be actively used by VM and will be copied via clone_disk()
+		$rpcenv->check($authuser, "/vms/${src_vmid}", ['VM.Clone']);
+	    } else {
+		PVE::Storage::check_volume_access($rpcenv, $authuser, $storecfg, $vmid, $src_image);
+	    }
+   }
+   }
 });
 
$rpcenv->check($authuser, "/storage/$settings->{vmstatestorage}", 
['Datastore.AllocateSpace'])
@@ -164,6 +197,87 @@ my $check_storage_access_migrate = sub {
if !$scfg->{content}->{images};
 };
 
+my $import_from_volid = sub {
+my ($storecfg, $src_volid, $dest_info, $vollist) = @_;
+
+die "cannot import from cloudinit disk\n"
+   if PVE::QemuServer::Drive::drive_is_cloudinit({ file => $src_volid });
+
+my ($src_storeid, $src_volname) = PVE::Storage::parse_volume_id($src_volid);
+my $src_vmid = (PVE::Storage::parse_volname($storecfg, $src_volid))[2];
+
+my $src_vm_state = sub {
+   my $exists = $src_vmid && PVE::Cluster::get_vmlist()->{ids}->{$src_vmid} ? 1 : 0;
+
+   my $runs = 0;
+   

[pve-devel] [PATCH v11 qemu-server 08/14] parse ovf: untaint path when calling file_size_info

2022-03-07 Thread Fabian Ebner
Prepare for calling parse_ovf via API, where the -T switch is used.

Signed-off-by: Fabian Ebner 
---

Changes from v10:
* Move untaint to outside of the function call.

 PVE/QemuServer/OVF.pm | 9 +
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/PVE/QemuServer/OVF.pm b/PVE/QemuServer/OVF.pm
index 0376cbf..b97b052 100644
--- a/PVE/QemuServer/OVF.pm
+++ b/PVE/QemuServer/OVF.pm
@@ -221,10 +221,11 @@ ovf:Item[rasd:InstanceID='%s']/rasd:ResourceType", $controller_id);
die "error parsing $filepath, file seems not to exist at 
$backing_file_path\n";
}
 
-   my $virtual_size;
-   if ( !($virtual_size = PVE::Storage::file_size_info($backing_file_path)) ) {
-	die "error parsing $backing_file_path, size seems to be $virtual_size\n";
-   }
+   ($backing_file_path) = $backing_file_path =~ m|^(/.*)|; # untaint
+
+   my $virtual_size = PVE::Storage::file_size_info($backing_file_path);
+   die "error parsing $backing_file_path, cannot determine file size\n"
+   if !$virtual_size;
 
$pve_disk = {
disk_address => $pve_disk_address,
-- 
2.30.2
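
For anyone not familiar with -T, here is a minimal standalone
illustration of the untaint idiom used above (my own sketch, not part
of the patch):

    #!/usr/bin/perl -T
    use strict;
    use warnings;

    $ENV{PATH} = '/usr/bin:/bin';           # -T also insists on a trusted PATH for subprocesses
    delete @ENV{qw(IFS CDPATH ENV BASH_ENV)}; # canonical cleanup from perlsec

    my $file = shift // ''; # command-line arguments are tainted under -T

    # A regex capture is Perl's canonical untaint idiom: only data that
    # actually matched the pattern is considered safe afterwards.
    my ($safe) = $file =~ m|^(/.*)| or die "need an absolute path\n";

    # Spawning a child with a tainted argument would die with
    # "Insecure dependency in system"; with $safe it is allowed.
    system('stat', '-c', '%s', $safe) == 0 or die "stat failed: $?\n";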



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH v11 qemu-server 07/14] schema: add pve-volume-id-or-absolute-path

2022-03-07 Thread Fabian Ebner
Signed-off-by: Dominic Jäger 
[split into its own patch + style fixes]
Signed-off-by: Fabian Ebner 
---

No changes from v10.

 PVE/QemuServer.pm | 14 ++
 1 file changed, 10 insertions(+), 4 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 0217d16..b5fb457 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -1036,11 +1036,17 @@ PVE::JSONSchema::register_format('pve-volume-id-or-qm-path', \&verify_volume_id_
 sub verify_volume_id_or_qm_path {
 my ($volid, $noerr) = @_;
 
-if ($volid eq 'none' || $volid eq 'cdrom' || $volid =~ m|^/|) {
-   return $volid;
-}
+return $volid if $volid eq 'none' || $volid eq 'cdrom';
+
+return verify_volume_id_or_absolute_path($volid, $noerr);
+}
+
+PVE::JSONSchema::register_format('pve-volume-id-or-absolute-path', \&verify_volume_id_or_absolute_path);
+sub verify_volume_id_or_absolute_path {
+my ($volid, $noerr) = @_;
+
+return $volid if $volid =~ m|^/|;
 
-# if its neither 'none' nor 'cdrom' nor a path, check if its a volume-id
 $volid = eval { PVE::JSONSchema::check_format('pve-volume-id', $volid, '') };
 if ($@) {
return if $noerr;
-- 
2.30.2
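
As a quick sanity check of what the new format accepts (untested
sketch, assuming the verifier behaves as in the hunk above; meant to
be run on a PVE node where PVE::QemuServer is installed):

    use strict;
    use warnings;
    use PVE::QemuServer;

    for my $v ('/var/lib/vz/images/100/vm-100-disk-0.qcow2', # absolute path: accepted
               'local-lvm:vm-100-disk-0',                    # volume ID: accepted
               'relative/path.qcow2') {                      # neither: should die
        my $ok = eval { PVE::QemuServer::verify_volume_id_or_absolute_path($v); 1 };
        print "$v => ", $ok ? "accepted" : "rejected", "\n";
    }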



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH v11 qemu-server 03/14] api: update: pass correct config when creating disks

2022-03-07 Thread Fabian Ebner
While the new options should be written to the pending config, the
decisions (currently only one) in create_disks needs to be made for
the current config.

Seems to fix EFI disk creation, but it's actually only future-proofing,
because currently the same OVMF_VARS file is used independently of
$smm.

The correct config is also needed to determine the correct size for
the EFI disk for the upcoming import-from feature.

Signed-off-by: Fabian Ebner 
---

New in v11.

 PVE/API2/Qemu.pm | 32 
 1 file changed, 24 insertions(+), 8 deletions(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 02b26d2..c6587ef 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -237,12 +237,7 @@ my $create_disks = sub {
die $err;
 }
 
-# modify vm config if everything went well
-foreach my $ds (keys %$res) {
-   $conf->{$ds} = $res->{$ds};
-}
-
-return $vollist;
+return ($vollist, $res);
 };
 
 my $check_cpu_model_access = sub {
@@ -712,7 +707,18 @@ __PACKAGE__->register_method({
 
my $vollist = [];
eval {
-   $vollist = &$create_disks($rpcenv, $authuser, $conf, $arch, 
$storecfg, $vmid, $pool, $param, $storage);
+   ($vollist, my $created_opts) = $create_disks->(
+   $rpcenv,
+   $authuser,
+   $conf,
+   $arch,
+   $storecfg,
+   $vmid,
+   $pool,
+   $param,
+   $storage,
+   );
+   $conf->{$_} = $created_opts->{$_} for keys $created_opts->%*;
 
if (!$conf->{boot}) {
my $devs = PVE::QemuServer::get_default_bootdevices($conf);
@@ -1364,7 +1370,17 @@ my $update_vm_api  = sub {
PVE::QemuServer::vmconfig_register_unused_drive($storecfg, 
$vmid, $conf, PVE::QemuServer::parse_drive($opt, $conf->{pending}->{$opt}))
if defined($conf->{pending}->{$opt});
 
-   &$create_disks($rpcenv, $authuser, $conf->{pending}, $arch, 
$storecfg, $vmid, undef, {$opt => $param->{$opt}});
+   my (undef, $created_opts) = $create_disks->(
+   $rpcenv,
+   $authuser,
+   $conf,
+   $arch,
+   $storecfg,
+   $vmid,
+   undef,
+   {$opt => $param->{$opt}},
+   );
+   $conf->{pending}->{$_} = $created_opts->{$_} for keys $created_opts->%*;
 
# default legacy boot order implies all cdroms anyway
if (@bootorder) {
-- 
2.30.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH v11 qemu-server 05/14] clone disk: group source and target parameters

2022-03-07 Thread Fabian Ebner
to make the interface more digestible.

Signed-off-by: Fabian Ebner 
---

New in v11.

 PVE/API2/Qemu.pm  | 52 +++
 PVE/QemuServer.pm |  9 ++--
 2 files changed, 41 insertions(+), 20 deletions(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index c6587ef..14cac5b 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -3218,23 +3218,31 @@ __PACKAGE__->register_method({
push @$storage_list, $storage if defined($storage);
my $clonelimit = PVE::Storage::get_bandwidth_limit('clone', $storage_list, $bwlimit);
 
+   my $source_info = {
+   vmid => $vmid,
+   running => $running,
+   drivename => $opt,
+   drive => $drive,
+   snapname => $snapname,
+   };
+
+   my $dest_info = {
+   vmid => $newid,
+   conf => $oldconf, # because it's a clone
+   storage => $storage,
+   format => $format,
+   };
+
my $newdrive = PVE::QemuServer::clone_disk(
$storecfg,
-   $vmid,
-   $running,
-   $opt,
-   $drive,
-   $snapname,
-   $newid,
-   $storage,
-   $format,
+   $source_info,
+   $dest_info,
$fullclone->{$opt},
$newvollist,
$jobs,
$completion,
$oldconf->{agent},
$clonelimit,
-   $oldconf
);
 
$newconf->{$opt} = PVE::QemuServer::print_drive($newdrive);
@@ -3469,23 +3477,31 @@ __PACKAGE__->register_method({
$bwlimit
);
 
+   my $source_info = {
+   vmid => $vmid,
+   running => $running,
+   drivename => $disk,
+   drive => $drive,
+   snapname => undef,
+   };
+
+   my $dest_info = {
+   vmid => $vmid,
+   conf => $conf,
+   storage => $storeid,
+   format => $format,
+   };
+
my $newdrive = PVE::QemuServer::clone_disk(
$storecfg,
-   $vmid,
-   $running,
-   $disk,
-   $drive,
-   undef,
-   $vmid,
-   $storeid,
-   $format,
+   $source_info,
+   $dest_info,
1,
$newvollist,
undef,
undef,
undef,
$movelimit,
-   $conf,
);
$conf->{$disk} = PVE::QemuServer::print_drive($newdrive);
 
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index c0fca49..56437c5 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -7571,8 +7571,13 @@ sub qemu_blockjobs_cancel {
 }
 
 sub clone_disk {
-my ($storecfg, $vmid, $running, $drivename, $drive, $snapname,
-	$newvmid, $storage, $format, $full, $newvollist, $jobs, $completion, $qga, $bwlimit, $conf) = @_;
+my ($storecfg, $source, $dest, $full, $newvollist, $jobs, $completion, $qga, $bwlimit) = @_;
+
+my ($vmid, $running) = $source->@{qw(vmid running)};
+my ($drivename, $drive, $snapname) = $source->@{qw(drivename drive snapname)};
+
+my ($newvmid, $conf) = $dest->@{qw(vmid conf)};
+my ($storage, $format) = $dest->@{qw(storage format)};
 
 my $newvolid;
 
-- 
2.30.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH v11 manager 1/1] api: nodes: add readovf endpoint

2022-03-07 Thread Fabian Ebner
Because the paths under /nodes/{node}/qemu/ are already occupied by
a {vmid} regex, it's not possible to use /nodes/{node}/qemu/readovf
for the new call. As the call does not depend upon a particular vmid,
it's placed under /nodes/{node} instead.

Signed-off-by: Dominic Jäger 
[split into its own patch + add to index]
Signed-off-by: Fabian Ebner 
---

Needs dependency bump for qemu-server.

Changes from v10:
* Add reason for placing it directly under /nodes/{node} to commit
  message.

 PVE/API2/Nodes.pm | 7 +++
 1 file changed, 7 insertions(+)

diff --git a/PVE/API2/Nodes.pm b/PVE/API2/Nodes.pm
index 655493a3..f595808a 100644
--- a/PVE/API2/Nodes.pm
+++ b/PVE/API2/Nodes.pm
@@ -49,6 +49,7 @@ use PVE::API2::LXC;
 use PVE::API2::Network;
 use PVE::API2::NodeConfig;
 use PVE::API2::Qemu::CPU;
+use PVE::API2::Qemu::OVF;
 use PVE::API2::Qemu;
 use PVE::API2::Replication;
 use PVE::API2::Services;
@@ -71,6 +72,11 @@ __PACKAGE__->register_method ({
 path => 'qemu',
 });
 
+__PACKAGE__->register_method ({
+subclass => "PVE::API2::Qemu::OVF",
+path => 'readovf',
+});
+
 __PACKAGE__->register_method ({
 subclass => "PVE::API2::LXC",
 path => 'lxc',
@@ -233,6 +239,7 @@ __PACKAGE__->register_method ({
{ name => 'network' },
{ name => 'qemu' },
{ name => 'query-url-metadata' },
+   { name => 'readovf' },
{ name => 'replication' },
{ name => 'report' },
{ name => 'rrd' }, # fixme: remove?
-- 
2.30.2
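
Once the qemu-server dependency bump is in place, the endpoint should
be callable along these lines (untested; node name and path made up):

    pvesh get /nodes/mynode/readovf --manifest /mnt/import/appliance.ovf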



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH v11 qemu-server 12/14] schema: drive: use separate schema when disk allocation is possible

2022-03-07 Thread Fabian Ebner
via the special syntax <storage ID>:<size>.

Not worth it by itself, but this is anticipating a new 'import-from'
parameter which is only used upon import/allocation and shouldn't be
part of the schema for the config or other API endpoints.

Signed-off-by: Fabian Ebner 
---

Changes from v10:
* Add initial space when appending parameter description.

 PVE/API2/Qemu.pm| 12 ++--
 PVE/QemuServer.pm   |  9 --
 PVE/QemuServer/Drive.pm | 62 +
 3 files changed, 60 insertions(+), 23 deletions(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 791a23f..f30c56f 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -569,7 +569,9 @@ __PACKAGE__->register_method({
default => 0,
description => "Start VM after it was created 
successfully.",
},
-   }),
+   },
+   1, # with_disk_alloc
+   ),
 },
 returns => {
type => 'string',
@@ -1552,7 +1554,9 @@ __PACKAGE__->register_method({
maximum => 30,
optional => 1,
},
-   }),
+   },
+   1, # with_disk_alloc
+   ),
 },
 returns => {
type => 'string',
@@ -1600,7 +1604,9 @@ __PACKAGE__->register_method({
maxLength => 40,
optional => 1,
},
-   }),
+   },
+   1, # with_disk_alloc
+   ),
 },
 returns => { type => 'null' },
 code => sub {
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 339536a..2c0a9e2 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -2184,7 +2184,7 @@ sub verify_usb_device {
 
 # add JSON properties for create and set function
 sub json_config_properties {
-my $prop = shift;
+my ($prop, $with_disk_alloc) = @_;
 
 my $skip_json_config_opts = {
parent => 1,
@@ -2197,7 +2197,12 @@ sub json_config_properties {
 
 foreach my $opt (keys %$confdesc) {
next if $skip_json_config_opts->{$opt};
-   $prop->{$opt} = $confdesc->{$opt};
+
+   if ($with_disk_alloc && is_valid_drivename($opt)) {
+   $prop->{$opt} = $PVE::QemuServer::Drive::drivedesc_hash_with_alloc->{$opt};
+   } else {
+   $prop->{$opt} = $confdesc->{$opt};
+   }
 }
 
 return $prop;
diff --git a/PVE/QemuServer/Drive.pm b/PVE/QemuServer/Drive.pm
index 7b82fb2..d5d4723 100644
--- a/PVE/QemuServer/Drive.pm
+++ b/PVE/QemuServer/Drive.pm
@@ -3,6 +3,8 @@ package PVE::QemuServer::Drive;
 use strict;
 use warnings;
 
+use Storable qw(dclone);
+
 use PVE::Storage;
 use PVE::JSONSchema qw(get_standard_option);
 
@@ -33,6 +35,8 @@ our $MAX_SATA_DISKS = 6;
 our $MAX_UNUSED_DISKS = 256;
 
 our $drivedesc_hash;
+# Schema when disk allocation is possible.
+our $drivedesc_hash_with_alloc = {};
 
 my %drivedesc_base = (
 volume => { alias => 'file' },
@@ -262,14 +266,10 @@ my $ide_fmt = {
 };
 PVE::JSONSchema::register_format("pve-qm-ide", $ide_fmt);
 
-my $ALLOCATION_SYNTAX_DESC =
-"Use the special syntax STORAGE_ID:SIZE_IN_GiB to allocate a new volume.";
-
 my $idedesc = {
 optional => 1,
 type => 'string', format => $ide_fmt,
-description => "Use volume as IDE hard disk or CD-ROM (n is 0 to " 
.($MAX_IDE_DISKS -1) . "). " .
-   $ALLOCATION_SYNTAX_DESC,
+description => "Use volume as IDE hard disk or CD-ROM (n is 0 to " 
.($MAX_IDE_DISKS - 1) . ").",
 };
 PVE::JSONSchema::register_standard_option("pve-qm-ide", $idedesc);
 
@@ -285,8 +285,7 @@ my $scsi_fmt = {
 my $scsidesc = {
 optional => 1,
 type => 'string', format => $scsi_fmt,
-description => "Use volume as SCSI hard disk or CD-ROM (n is 0 to " . 
($MAX_SCSI_DISKS - 1) . "). " .
-   $ALLOCATION_SYNTAX_DESC,
+description => "Use volume as SCSI hard disk or CD-ROM (n is 0 to " . 
($MAX_SCSI_DISKS - 1) . ").",
 };
 PVE::JSONSchema::register_standard_option("pve-qm-scsi", $scsidesc);
 
@@ -298,8 +297,7 @@ my $sata_fmt = {
 my $satadesc = {
 optional => 1,
 type => 'string', format => $sata_fmt,
-description => "Use volume as SATA hard disk or CD-ROM (n is 0 to " . 
($MAX_SATA_DISKS - 1). "). " .
-   $ALLOCATION_SYNTAX_DESC,
+description => "Use volume as SATA hard disk or CD-ROM (n is 0 to " . 
($MAX_SATA_DISKS - 1). ").",
 };
 PVE::JSONSchema::register_standard_option("pve-qm-sata", $satadesc);
 
@@ -311,8 +309,7 @@ my $virtio_fmt = {
 my $virtiodesc = {
 optional => 1,
 type => 'string', format => $virtio_fmt,
-description => "Use volume as VIRTIO hard disk (n is 0 to " . 
($MAX_VIRTIO_DISKS - 1) . "). " .
-   $ALLOCATION_SYNTAX_DESC,
+description => "Use volume as VIRTIO hard disk (n is 0 to " . 
($MAX_VIRTIO_DISKS - 1) . ").",
 };
 PVE::JSONSchema::register_standard_option("pve-qm-virtio", $virtiodesc);
 
@@ -359,9 +356,7 @@ my $efidisk_fmt = {
 my $efidisk_desc = {
 optional => 1,
 type => 'string', format => $efidisk_fmt,
-description => "Con

Re: [pve-devel] [PATCH pve-manager v2] fix #3903: jobs: add remove vmid from jobs helper

2022-03-07 Thread Fabian Ebner
Am 07.03.22 um 07:43 schrieb Hannes Laimer:
> Signed-off-by: Hannes Laimer 
> ---
> changed back to v1, but without the unnecessary stuff. Thanks for the
> feedback @Fabian Ebner
> 
>  PVE/Jobs.pm | 17 -
>  1 file changed, 16 insertions(+), 1 deletion(-)
> 
> diff --git a/PVE/Jobs.pm b/PVE/Jobs.pm
> index ba3685ec..db6fa97d 100644
> --- a/PVE/Jobs.pm
> +++ b/PVE/Jobs.pm
> @@ -4,7 +4,7 @@ use strict;
>  use warnings;
>  use JSON;
>  
> -use PVE::Cluster qw(cfs_read_file cfs_lock_file);
> +use PVE::Cluster qw(cfs_read_file cfs_lock_file cfs_write_file);
>  use PVE::Jobs::Plugin;
>  use PVE::Jobs::VZDump;
>  use PVE::Tools;
> @@ -274,6 +274,21 @@ sub synchronize_job_states_with_config {
>  die $@ if $@;
>  }
>  
> +sub remove_vmid_from_jobs {
> +my ($vmid) = @_;
> +
> +cfs_lock_file('jobs.cfg', undef, sub {
> + my $jobs_data = cfs_read_file('jobs.cfg');
> + for my $id (keys %{$jobs_data->{ids}}) {
> + my $job = $jobs_data->{ids}->{$id};
> + next if !defined($job->{vmid});
> +	$job->{vmid} = join(',', grep { $_ ne $vmid } PVE::Tools::split_list($job->{vmid}));
> + delete $jobs_data->{ids}->{$id} if $job->{vmid} eq '';

There is a remove_job() function that's supposed to be called when a job
is removed. It'll be called by synchronize_job_states_with_config too,
but it'd be cleaner to call it directly.

Also, the old behavior is to remove a VM ID upon purge from 'exclude'
too. For consistency, we need to do that here too. See
remove_vmid_from_jobs in guest-common's PVE/VZDump/Plugin.pm for comparison.

'exclude' is specific to backups, so there should be a plugin method for
removing a VMID from a job, which the VZDump plugin overrides, and the
iterator here should just call the method from the job's plugin. Well,
technically, 'vmid' is also specific to backups, because it's not part
of the defaultData properties of the generic plugin.

> + }
> +cfs_write_file('jobs.cfg', $jobs_data);
> +});
> +}
> +
>  sub setup_dirs {
>  mkdir $state_dir;
>  mkdir $lock_dir;
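
To sketch the direction I mean (rough and untested; the method name
and return convention are hypothetical, and I'm assuming
PVE::Jobs::VZDump already inherits from PVE::Jobs::Plugin as in the
existing code):

    use strict;
    use warnings;
    use PVE::Tools;

    package PVE::Jobs::Plugin;
    # hypothetical base implementation; returns true if the job is now empty
    sub remove_vmid_from_job_config {
	my ($class, $job, $vmid) = @_;
	return 0 if !defined($job->{vmid});
	$job->{vmid} = join(',', grep { $_ ne $vmid } PVE::Tools::split_list($job->{vmid}));
	return $job->{vmid} eq '' ? 1 : 0;
    }

    package PVE::Jobs::VZDump;
    # hypothetical override that additionally cleans up 'exclude'
    sub remove_vmid_from_job_config {
	my ($class, $job, $vmid) = @_;
	if (defined($job->{exclude})) {
	    $job->{exclude} = join(',', grep { $_ ne $vmid } PVE::Tools::split_list($job->{exclude}));
	    delete $job->{exclude} if $job->{exclude} eq '';
	}
	return $class->SUPER::remove_vmid_from_job_config($job, $vmid);
    }

The iterator in remove_vmid_from_jobs would then look up the plugin
for each job type, call the method, and call remove_job() for jobs
that report themselves empty.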


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH qemu-server] api: vm_start: 'force-cpu' is for internal migration use only

2022-03-07 Thread Oguz Bektas
'force-cpu' parameter was introduced to allow live-migration of VMs with
custom CPU models; it does not need to be allowed for general use on
vm_start for regular users, since they would be able to set arbitrary
cpu types or cpuid parameters that aren't supported.

Signed-off-by: Oguz Bektas 
---
 PVE/API2/Qemu.pm | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 9be1caf..68077cc 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -2287,9 +2287,7 @@ __PACKAGE__->register_method({
my $node = extract_param($param, 'node');
my $vmid = extract_param($param, 'vmid');
my $timeout = extract_param($param, 'timeout');
-
my $machine = extract_param($param, 'machine');
-   my $force_cpu = extract_param($param, 'force-cpu');
 
my $get_root_param = sub {
my $value = extract_param($param, $_[0]);
@@ -2304,6 +2302,7 @@ __PACKAGE__->register_method({
my $migration_type = $get_root_param->('migration_type');
my $migration_network = $get_root_param->('migration_network');
my $targetstorage = $get_root_param->('targetstorage');
+   my $force_cpu = $get_root_param->('force-cpu');
 
my $storagemap;
 
-- 
2.30.2
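
For context, the $get_root_param helper that the parameter moves into
looks roughly like this (paraphrased from the surrounding code, may
differ in detail):

    my $get_root_param = sub {
	my ($name) = @_;
	my $value = extract_param($param, $name);
	raise_param_exc({ $name => "Only root may use this option." })
	    if $value && $authuser ne 'root@pam';
	return $value;
    };

so moving 'force-cpu' behind it makes the option root-only instead of
generally available.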



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



Re: [pve-devel] [PATCH kernel] Backport two io-wq fixes relevant for io_uring

2022-03-07 Thread Mark Schouten via pve-devel
--- Begin Message ---
Hi,

Sorry for getting back on this thread after a few months, but is the
Windows case mentioned here the one that is discussed in this forum thread:
https://forum.proxmox.com/threads/windows-vms-stuck-on-boot-after-proxmox-upgrade-to-7-0.100744/page-3 ?

If so, should this be investigated further or are there other issues? I have 
personally not had the issue mentioned in the forum, but quite a few people 
seem to be suffering from issues with Windows VMs, which is currently holding 
us back from upgrading from 6.x to 7.x on a whole bunch of customer clusters.

Thanks,

— 
Mark Schouten, CTO
Tuxis B.V.
m...@tuxis.nl



> On 23 Nov 2021, at 12:59, Fabian Ebner  wrote:
> 
> There were quite a few reports in the community forum about Windows
> VMs with SATA disks not working after upgrading to kernel 5.13.
> Issue was reproducible during the installation of Win2019 (suggested
> by Thomas), and it's already fixed in 5.15. Bisecting led to
>io-wq: split bounded and unbounded work into separate lists
> as the commit fixing the issue.
> 
> Indeed, the commit states
>    Fixes: ecc53c48c13d ("io-wq: check max_worker limits if a worker transitions bound state")
> which is present as a backport in ubuntu-impish:
>f9eb79f840052285408ae9082dc4419dc1397954
> 
> The first backport
>io-wq: fix queue stalling race
> also sounds nice to have and additionally served as a preparation for
> the second one to apply more cleanly.
> 
> Signed-off-by: Fabian Ebner 
> ---
> .../0010-io-wq-fix-queue-stalling-race.patch  |  72 +++
> ...ded-and-unbounded-work-into-separate.patch | 415 ++
> 2 files changed, 487 insertions(+)
> create mode 100644 patches/kernel/0010-io-wq-fix-queue-stalling-race.patch
> create mode 100644 patches/kernel/0011-io-wq-split-bounded-and-unbounded-work-into-separate.patch
> 
> diff --git a/patches/kernel/0010-io-wq-fix-queue-stalling-race.patch b/patches/kernel/0010-io-wq-fix-queue-stalling-race.patch
> new file mode 100644
> index 000..5ef160d
> --- /dev/null
> +++ b/patches/kernel/0010-io-wq-fix-queue-stalling-race.patch
> @@ -0,0 +1,72 @@
> +From  Mon Sep 17 00:00:00 2001
> +From: Jens Axboe 
> +Date: Tue, 31 Aug 2021 13:53:00 -0600
> +Subject: [PATCH] io-wq: fix queue stalling race
> +
> +We need to set the stalled bit early, before we drop the lock for adding
> +us to the stall hash queue. If not, then we can race with new work being
> +queued between adding us to the stall hash and io_worker_handle_work()
> +marking us stalled.
> +
> +Signed-off-by: Jens Axboe 
> +[backport]
> +Signed-off-by: Fabian Ebner 
> +---
> + fs/io-wq.c | 15 +++
> + 1 file changed, 7 insertions(+), 8 deletions(-)
> +
> +diff --git a/fs/io-wq.c b/fs/io-wq.c
> +index 6612d0aa497e..33678185f3bc 100644
> +--- a/fs/io-wq.c
> ++++ b/fs/io-wq.c
> +@@ -437,8 +437,7 @@ static bool io_worker_can_run_work(struct io_worker *worker,
> + }
> + 
> + static struct io_wq_work *io_get_next_work(struct io_wqe *wqe,
> +-   struct io_worker *worker,
> +-   bool *stalled)
> ++   struct io_worker *worker)
> + __must_hold(wqe->lock)
> + {
> + struct io_wq_work_node *node, *prev;
> +@@ -476,10 +475,14 @@ static struct io_wq_work *io_get_next_work(struct io_wqe *wqe,
> + }
> + 
> + if (stall_hash != -1U) {
> ++/*
> ++ * Set this before dropping the lock to avoid racing with new
> ++ * work being added and clearing the stalled bit.
> ++ */
> ++wqe->flags |= IO_WQE_FLAG_STALLED;
> + raw_spin_unlock(&wqe->lock);
> + io_wait_on_hash(wqe, stall_hash);
> + raw_spin_lock(&wqe->lock);
> +-*stalled = true;
> + }
> + 
> + return NULL;
> +@@ -519,7 +522,6 @@ static void io_worker_handle_work(struct io_worker *worker)
> + 
> + do {
> + struct io_wq_work *work;
> +-bool stalled;
> + get_next:
> + /*
> +  * If we got some work, mark us as busy. If we didn't, but
> +@@ -528,12 +530,9 @@ static void io_worker_handle_work(struct io_worker *worker)
> +  * can't make progress, any work completion or insertion will
> +  * clear the stalled flag.
> +  */
> +-stalled = false;
> +-work = io_get_next_work(wqe, worker, &stalled);
> ++work = io_get_next_work(wqe, worker);
> + if (work)
> + __io_worker_busy(wqe, worker, work);
> +-else if (stalled)
> +-wqe->flags |= IO_WQE_FLAG_STALLED;
> + 
> + raw_spin_unlock_irq(&wqe->lock);
> + if (!work)
> +-- 
> +2.30.2
> +
> diff --git 
> a/patches/kernel/0011-io