Re: [pve-devel] sdn: looking to unify .cfg files, need opinions about config format

2021-04-20 Thread alexandre derumier

Hi Thomas!

On 18/04/2021 19:01, Thomas Lamprecht wrote:

On 12.01.21 10:19, aderum...@odiso.com wrote:

Hi,

I'm looking to unify the sdn .cfg files into a single file,
with something different from the section config format.

We have relationships like zones->vnets->subnets,
so I was thinking about something like this:



[zone myzone]
type: vxlan
option1: xxx
option2: xxx
[[vnet myvnet]]
option1: xxx
option2: xxx
[[[subnet 10.0.0.0/8]]]
option1: xxx
option2: xxx


[controller  mycontroller]
type: evpn
option1: xxx
option2: xxx

[dns  mydns]
type: powerdns
option1: xxx
option2: xxx


What do you think about this?

That looks like section config, just spelled differently?

But yes, the way section config does schema and types is not ideal when
combined with quite different things.

Maybe we should really just go the simple way and keep it separated for now.

For zones it works well this way: there exist different types and we can use
that as the section config type. Subnets and vnets could be combined, as vnets
are really not that special, I guess?


I think the only thing that could be improved is indeed subnets/vnets.


Currently, we can have the same subnet range defined in different zones,
but they are really different objects, as the gateway or other subnet options
can be different.


That's why I concatenate zone+subnet to get a unique subnetid,
something like:


subnet: zone1-192.168.0.0-24
    vnet vnet1
    gateway 192.168.0.1

subnet: zone2-192.168.0.0-24
    vnet vnet2
    gateway 192.168.0.254

It's not bad, but maybe it could be better, defining the subnet directly
inside the vnet.


Not sure what the config format should look like to handle this?
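
Just as a rough sketch (not a final proposal), reusing the nested format and
the subnet example from above, it could look something like:

[zone zone1]
type: vxlan
[[vnet vnet1]]
[[[subnet 192.168.0.0/24]]]
gateway: 192.168.0.1

[zone zone2]
type: vxlan
[[vnet vnet2]]
[[[subnet 192.168.0.0/24]]]
gateway: 192.168.0.254

so the subnet would not need the zone prefix anymore, as it is already scoped
by the enclosing vnet/zone.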





We had a mail about what would be OK to merge, but I cannot remember/find it
anymore...


Small reminder of other related patches:


pve-network:
[pve-devel] [PATCH pve-network 0/2] evpn && bgp improvements
https://www.mail-archive.com/pve-devel@lists.proxmox.com/msg03265.html

(2 small patches)

pve-manager:

[PATCH V11 pve-manager 1/1] sdn: add subnet/ipam/sdn management

https://www.mail-archive.com/pve-devel@lists.proxmox.com/msg02746.html

(I have merged and rebased the different patches from the previous series)

pve-cluster:

[PATCH V5 pve-cluster 0/5] sdn : add subnets management

https://lists.proxmox.com/pipermail/pve-devel/2020-September/045284.html


pve-common:

INotify: add support for dummy interfaces type

(this is a small patch for ebgp loopback/dummy interface support)

https://www.mail-archive.com/pve-devel@lists.proxmox.com/msg01755.html


pve-container: (maybe we could wait a little bit to finish qemu support too)

[PATCH pve-container] add ipam support
https://lists.proxmox.com/pipermail/pve-devel/2021-January/046609.html



Another way could be a simple YAML config file (but I think it doesn't
really match the current Proxmox config formats).


I do not like YAML too much; it looks simple at first but can do way too much
(Turing complete, IIRC) and we do not really use it anywhere atm., so that
would mean lots of new tooling/work to handle it sanely and as a first-class
citizen in the PVE stack...

My goal would be to do a pve-network bump at the end of next week, and for that
we need a pve-cluster bump.

Currently we get three new configs:

1. ipams: different management plugins (types), so OK to be its own section config
2. dns: different APIs/DNS servers (types), so OK to be its own section config
3. subnets: only one type, or?

Subnets have only one type, indeed.

Hmm, rethinking this now, it could be OK to keep it as is... While subnets
could possibly be merged into vnets, the benefit there is mediocre, and the
API could maybe even get more complicated?


Not sure about the API, but if the current config format with the subnetid is
OK for you, it's OK for me ;)




If we'd bump now, the biggest thing missing is applying an IP to a VM and CT.

For a CT we can quite easily do it.

Yes, I have already sent patches; maybe they need more testing.


For a VM we might even need to support different ways?

* DHCP (?)


For DHCP, it'll be more difficult for a bridged setup, as we need one per
subnet.

For a routed setup, it's easier.

I think we should look at that later. I have an idea about managing some kind
of gateway edge VMs/appliances feature, like the VMware NSX edge gateway:

https://bugzilla.proxmox.com/show_bug.cgi?id=3382

where you could manage this kind of central service (DHCP, VPN, NAT 1:1,
balancing, ...).


(A lot of users use pfSense, for example, or other gateway appliances; my
idea is to manage this kind of appliance through the API, or maybe manage
our own appliance.)


This should work with any kind of network, bridged/routed, and any zone
(vlan/vxlan/...).


But it's a big thing, so later ;)


* cloudinit


Yes, this is my current plan.

Offline, it's easy; online, it's more difficult.

That's why I was also working on cloudinit recently, with pending
features, ...


I need to do more test wi

[pve-devel] [PATCH storage] rbd: fix typo in error message

2021-04-20 Thread Fabian Ebner
Signed-off-by: Fabian Ebner 
---
 PVE/Storage/RBDPlugin.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/PVE/Storage/RBDPlugin.pm b/PVE/Storage/RBDPlugin.pm
index 42641e2..a8d1243 100644
--- a/PVE/Storage/RBDPlugin.pm
+++ b/PVE/Storage/RBDPlugin.pm
@@ -503,7 +503,7 @@ sub alloc_image {
 $name = $class->find_free_diskname($storeid, $scfg, $vmid) if !$name;
 
 my $cmd = $rbd_cmd->($scfg, $storeid, 'create', '--image-format' , 2, 
'--size', int(($size+1023)/1024), $name);
-run_rbd_command($cmd, errmsg => "rbd create $name' error");
+run_rbd_command($cmd, errmsg => "rbd create '$name' error");
 
 return $name;
 }
-- 
2.20.1






[pve-devel] [PATCH manager v4 3/9] ceph: set allowed minimal pg_num down to 1

2021-04-20 Thread Dominik Csapak
From: Alwin Antreich 

In Ceph Octopus the device_health_metrics pool is auto-created with 1
PG. Since Ceph has the ability to split/merge PGs, hitting the wrong PG
count is now less of an issue anyhow.

Signed-off-by: Alwin Antreich 
Signed-off-by: Dominik Csapak 
---
 PVE/API2/Ceph/Pools.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/PVE/API2/Ceph/Pools.pm b/PVE/API2/Ceph/Pools.pm
index 014e6be7..939a1f8a 100644
--- a/PVE/API2/Ceph/Pools.pm
+++ b/PVE/API2/Ceph/Pools.pm
@@ -175,7 +175,7 @@ my $ceph_pool_common_options = sub {
type => 'integer',
default => 128,
optional => 1,
-   minimum => 8,
+   minimum => 1,
maximum => 32768,
},
pg_num_min => {
-- 
2.20.1






[pve-devel] [PATCH manager v4 7/9] API2/Ceph/Pools: remove unnecessary boolean conversion

2021-04-20 Thread Dominik Csapak
we do nothing with that field, so leave it like it is

Signed-off-by: Dominik Csapak 
---
 PVE/API2/Ceph/Pools.pm | 1 -
 1 file changed, 1 deletion(-)

diff --git a/PVE/API2/Ceph/Pools.pm b/PVE/API2/Ceph/Pools.pm
index 939a1f8a..45f0c47c 100644
--- a/PVE/API2/Ceph/Pools.pm
+++ b/PVE/API2/Ceph/Pools.pm
@@ -26,7 +26,6 @@ my $get_autoscale_status = sub {
 
 my $data;
 foreach my $p (@$autoscale) {
-   $p->{would_adjust} = "$p->{would_adjust}"; # boolean
$data->{$p->{pool_name}} = $p;
 }
 
-- 
2.20.1






[pve-devel] [PATCH manager v4 5/9] ceph: gui: add min num of PG

2021-04-20 Thread Dominik Csapak
From: Alwin Antreich 

this is used to fine-tune the ceph autoscaler

Signed-off-by: Alwin Antreich 
Signed-off-by: Dominik Csapak 
---
 www/manager6/ceph/Pool.js | 18 ++
 1 file changed, 18 insertions(+)

diff --git a/www/manager6/ceph/Pool.js b/www/manager6/ceph/Pool.js
index e19f8beb..236ed0bc 100644
--- a/www/manager6/ceph/Pool.js
+++ b/www/manager6/ceph/Pool.js
@@ -143,6 +143,15 @@ Ext.define('PVE.CephPoolInputPanel', {
userCls: 'pmx-hint',
value: 'Target Size Ratio takes precedence.',
},
+   {
+   xtype: 'proxmoxintegerfield',
+   fieldLabel: 'Min. # of PGs',
+   name: 'pg_num_min',
+   labelWidth: 140,
+   minValue: 0,
+   allowBlank: true,
+   emptyText: '0',
+   },
 ],
 
 onGetValues: function(values) {
@@ -250,6 +259,14 @@ Ext.define('PVE.node.CephPoolList', {
return value;
},
},
+   {
+   text: gettext('Min. # of PGs'),
+   flex: 1,
+   minWidth: 140,
+   align: 'right',
+   dataIndex: 'pg_num_min',
+   hidden: true,
+   },
{
text: gettext('Target Size Ratio'),
flex: 1,
@@ -426,6 +443,7 @@ Ext.define('PVE.node.CephPoolList', {
  { name: 'size', type: 'integer' },
  { name: 'min_size', type: 'integer' },
  { name: 'pg_num', type: 'integer' },
+ { name: 'pg_num_min', type: 'integer' },
  { name: 'bytes_used', type: 'integer' },
  { name: 'percent_used', type: 'number' },
  { name: 'crush_rule', type: 'integer' },
-- 
2.20.1






[pve-devel] [PATCH manager v4 1/9] ceph: add autoscale_status to api calls

2021-04-20 Thread Dominik Csapak
From: Alwin Antreich 

the properties target_size_ratio, target_size_bytes and pg_num_min are
used to fine-tune the pg_autoscaler and are set on a pool. The updated
pool list now shows the autoscale settings & status, including the new
(optimal) target PGs, to make it easier for new users to get/set the
correct amount of PGs.

Signed-off-by: Alwin Antreich 
Signed-off-by: Dominik Csapak 
---
 PVE/API2/Ceph/Pools.pm | 96 +-
 PVE/CLI/pveceph.pm |  4 ++
 2 files changed, 90 insertions(+), 10 deletions(-)

diff --git a/PVE/API2/Ceph/Pools.pm b/PVE/API2/Ceph/Pools.pm
index 01c11100..014e6be7 100644
--- a/PVE/API2/Ceph/Pools.pm
+++ b/PVE/API2/Ceph/Pools.pm
@@ -16,6 +16,24 @@ use PVE::API2::Storage::Config;
 
 use base qw(PVE::RESTHandler);
 
+my $get_autoscale_status = sub {
+my ($rados) = shift;
+
+$rados = PVE::RADOS->new() if !defined($rados);
+
+my $autoscale = $rados->mon_command({
+   prefix => 'osd pool autoscale-status'});
+
+my $data;
+foreach my $p (@$autoscale) {
+   $p->{would_adjust} = "$p->{would_adjust}"; # boolean
+   $data->{$p->{pool_name}} = $p;
+}
+
+return $data;
+};
+
+
 __PACKAGE__->register_method ({
 name => 'lspools',
 path => '',
@@ -37,16 +55,21 @@ __PACKAGE__->register_method ({
items => {
type => "object",
properties => {
-   pool => { type => 'integer', title => 'ID' },
-   pool_name => { type => 'string', title => 'Name' },
-   size => { type => 'integer', title => 'Size' },
-   min_size => { type => 'integer', title => 'Min Size' },
-   pg_num => { type => 'integer', title => 'PG Num' },
-   pg_autoscale_mode => { type => 'string', optional => 1, title 
=> 'PG Autoscale Mode' },
-   crush_rule => { type => 'integer', title => 'Crush Rule' },
-   crush_rule_name => { type => 'string', title => 'Crush Rule 
Name' },
-   percent_used => { type => 'number', title => '%-Used' },
-   bytes_used => { type => 'integer', title => 'Used' },
+   pool  => { type => 'integer', title => 'ID' },
+   pool_name => { type => 'string',  title => 'Name' },
+   size  => { type => 'integer', title => 'Size' },
+   min_size  => { type => 'integer', title => 'Min Size' },
+   pg_num=> { type => 'integer', title => 'PG Num' },
+   pg_num_min=> { type => 'integer', title => 'min. PG 
Num', optional => 1, },
+   pg_num_final  => { type => 'integer', title => 'Optimal PG 
Num', optional => 1, },
+   pg_autoscale_mode => { type => 'string',  title => 'PG 
Autoscale Mode', optional => 1, },
+   crush_rule=> { type => 'integer', title => 'Crush Rule' 
},
+   crush_rule_name   => { type => 'string',  title => 'Crush Rule 
Name' },
+   percent_used  => { type => 'number',  title => '%-Used' },
+   bytes_used=> { type => 'integer', title => 'Used' },
+   target_size   => { type => 'integer', title => 'PG 
Autoscale Target Size', optional => 1 },
+   target_size_ratio => { type => 'number',  title => 'PG 
Autoscale Target Ratio',optional => 1, },
+   autoscale_status  => { type => 'object',  title => 'Autoscale 
Status', optional => 1 },
},
},
links => [ { rel => 'child', href => "{pool_name}" } ],
@@ -86,12 +109,24 @@ __PACKAGE__->register_method ({
'pg_autoscale_mode',
];
 
+   # pg_autoscaler module is not enabled in Nautilus
+   my $autoscale = eval { $get_autoscale_status->($rados) };
+
foreach my $e (@{$res->{pools}}) {
my $d = {};
foreach my $attr (@$attr_list) {
$d->{$attr} = $e->{$attr} if defined($e->{$attr});
}
 
+   if ($autoscale) {
+   $d->{autoscale_status} = $autoscale->{$d->{pool_name}};
+   $d->{pg_num_final} = $d->{autoscale_status}->{pg_num_final};
+   # some info is nested under options instead
+   $d->{pg_num_min} = $e->{options}->{pg_num_min};
+   $d->{target_size} = $e->{options}->{target_size_bytes};
+   $d->{target_size_ratio} = $e->{options}->{target_size_ratio};
+   }
+
if (defined($d->{crush_rule}) && 
defined($rules->{$d->{crush_rule}})) {
$d->{crush_rule_name} = $rules->{$d->{crush_rule}};
}
@@ -143,6 +178,13 @@ my $ceph_pool_common_options = sub {
minimum => 8,
maximum => 32768,
},
+   pg_num_min => {
+   title => 'min. PG Num',
+   description => "Minimal number of placement groups.",
+   type => 'integer',
+   optional => 1,
+   maximum => 32768,
+   },
   

[pve-devel] [PATCH manager v4 6/9] fix: ceph: always set pool size first

2021-04-20 Thread Dominik Csapak
From: Alwin Antreich 

Since Ceph Nautilus 14.2.10 and Octopus 15.2.2, the min_size of a pool is
calculated from the size (round(size / 2)). When size is applied to the pool
after min_size, the manually specified min_size will be overwritten.
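
A small worked example (assuming the rounding rule above): for a pool with
size=3, Ceph derives min_size = round(3 / 2) = 2, so a manually chosen
min_size would be overwritten again as soon as size is applied afterwards;
applying size first and min_size second keeps the manual value.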

Signed-off-by: Alwin Antreich 
Signed-off-by: Dominik Csapak 
---
 PVE/Ceph/Tools.pm | 61 +++
 1 file changed, 40 insertions(+), 21 deletions(-)

diff --git a/PVE/Ceph/Tools.pm b/PVE/Ceph/Tools.pm
index ab38f7bc..9d4d595f 100644
--- a/PVE/Ceph/Tools.pm
+++ b/PVE/Ceph/Tools.pm
@@ -200,33 +200,52 @@ sub check_ceph_enabled {
 return 1;
 }
 
+my $set_pool_setting = sub {
+my ($pool, $setting, $value) = @_;
+
+my $command;
+if ($setting eq 'application') {
+   $command = {
+   prefix => "osd pool application enable",
+   pool   => "$pool",
+   app=> "$value",
+   };
+} else {
+   $command = {
+   prefix => "osd pool set",
+   pool   => "$pool",
+   var=> "$setting",
+   val=> "$value",
+   format => 'plain',
+   };
+}
+
+my $rados = PVE::RADOS->new();
+eval { $rados->mon_command($command); };
+return $@ ? $@ : undef;
+};
+
 sub set_pool {
 my ($pool, $param) = @_;
 
-foreach my $setting (keys %$param) {
-   my $value = $param->{$setting};
-
-   my $command;
-   if ($setting eq 'application') {
-   $command = {
-   prefix => "osd pool application enable",
-   pool   => "$pool",
-   app=> "$value",
-   };
+# by default, pool size always sets min_size,
+# set it and forget it, as first item
+# https://tracker.ceph.com/issues/44862
+if ($param->{size}) {
+   my $value = $param->{size};
+   if (my $err = $set_pool_setting->($pool, 'size', $value)) {
+   print "$err";
} else {
-   $command = {
-   prefix => "osd pool set",
-   pool   => "$pool",
-   var=> "$setting",
-   val=> "$value",
-   format => 'plain',
-   };
+   delete $param->{size};
}
+}
+
+foreach my $setting (keys %$param) {
+   my $value = $param->{$setting};
+   next if $setting eq 'size';
 
-   my $rados = PVE::RADOS->new();
-   eval { $rados->mon_command($command); };
-   if ($@) {
-   print "$@";
+   if (my $err = $set_pool_setting->($pool, $setting, $value)) {
+   print "$err";
} else {
delete $param->{$setting};
}
-- 
2.20.1






[pve-devel] [PATCH manager v4 9/9] ui: ceph/Pool: show progress on pool edit/create

2021-04-20 Thread Dominik Csapak
Signed-off-by: Dominik Csapak 
---
 www/manager6/ceph/Pool.js | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/www/manager6/ceph/Pool.js b/www/manager6/ceph/Pool.js
index 45333f4d..430decbb 100644
--- a/www/manager6/ceph/Pool.js
+++ b/www/manager6/ceph/Pool.js
@@ -201,6 +201,8 @@ Ext.define('PVE.CephPoolEdit', {
method: get => get('isCreate') ? 'POST' : 'PUT',
 },
 
+showProgress: true,
+
 subject: gettext('Ceph Pool'),
 
 items: [{
-- 
2.20.1






[pve-devel] [PATCH manager v4 4/9] ceph: gui: rework pool input panel

2021-04-20 Thread Dominik Csapak
From: Alwin Antreich 

* add the ability to edit an existing pool
* allow adjustment of autoscale settings
* warn if user specifies min_size 1
* disallow min_size 1 on pool create
* calculate min_size replica by size

Signed-off-by: Alwin Antreich 
Signed-off-by: Dominik Csapak 
---
 www/manager6/ceph/Pool.js | 246 +-
 1 file changed, 189 insertions(+), 57 deletions(-)

diff --git a/www/manager6/ceph/Pool.js b/www/manager6/ceph/Pool.js
index 7f341ce8..e19f8beb 100644
--- a/www/manager6/ceph/Pool.js
+++ b/www/manager6/ceph/Pool.js
@@ -1,17 +1,21 @@
-Ext.define('PVE.CephCreatePool', {
-extend: 'Proxmox.window.Edit',
-alias: 'widget.pveCephCreatePool',
+Ext.define('PVE.CephPoolInputPanel', {
+extend: 'Proxmox.panel.InputPanel',
+xtype: 'pveCephPoolInputPanel',
+mixins: ['Proxmox.Mixin.CBind'],
 
 showProgress: true,
 onlineHelp: 'pve_ceph_pools',
 
 subject: 'Ceph Pool',
-isCreate: true,
-method: 'POST',
-items: [
+column1: [
{
-   xtype: 'textfield',
+   xtype: 'pmxDisplayEditField',
fieldLabel: gettext('Name'),
+   cbind: {
+   editable: '{isCreate}',
+   value: '{pool_name}',
+   disabled: '{!isCreate}',
+   },
name: 'name',
allowBlank: false,
},
@@ -20,75 +24,179 @@ Ext.define('PVE.CephCreatePool', {
fieldLabel: gettext('Size'),
name: 'size',
value: 3,
-   minValue: 1,
+   minValue: 2,
maxValue: 7,
allowBlank: false,
+   listeners: {
+   change: function(field, val) {
+   let size = Math.round(val / 2);
+   if (size > 1) {
+   
field.up('inputpanel').down('field[name=min_size]').setValue(size);
+   }
+   },
+   },
+   },
+],
+column2: [
+   {
+   xtype: 'proxmoxKVComboBox',
+   fieldLabel: 'PG Autoscale Mode',
+   name: 'pg_autoscale_mode',
+   comboItems: [
+   ['warn', 'warn'],
+   ['on', 'on'],
+   ['off', 'off'],
+   ],
+   value: 'warn',
+   allowBlank: false,
+   autoSelect: false,
+   labelWidth: 140,
},
+   {
+   xtype: 'proxmoxcheckbox',
+   fieldLabel: gettext('Add as Storage'),
+   cbind: {
+   value: '{isCreate}',
+   hidden: '{!isCreate}',
+   },
+   name: 'add_storages',
+   labelWidth: 140,
+   autoEl: {
+   tag: 'div',
+   'data-qtip': gettext('Add the new pool to the cluster storage 
configuration.'),
+   },
+   },
+],
+advancedColumn1: [
{
xtype: 'proxmoxintegerfield',
fieldLabel: gettext('Min. Size'),
name: 'min_size',
value: 2,
-   minValue: 1,
+   cbind: {
+   minValue: (get) => get('isCreate') ? 2 : 1,
+   },
maxValue: 7,
allowBlank: false,
+   listeners: {
+   change: function(field, val) {
+   let warn = true;
+   let warn_text = gettext('Min. Size');
+
+   if (val < 2) {
+   warn = false;
+   warn_text = gettext('Min. Size') + ' ';
+   }
+
+   
field.up().down('field[name=min_size-warning]').setHidden(warn);
+   field.setFieldLabel(warn_text);
+   },
+   },
+   },
+   {
+   xtype: 'displayfield',
+   name: 'min_size-warning',
+   userCls: 'pmx-hint',
+   value: 'A pool with min_size=1 could lead to data loss, incomplete 
PGs or unfound objects.',
+   hidden: true,
},
{
xtype: 'pveCephRuleSelector',
fieldLabel: 'Crush Rule', // do not localize
+   cbind: { nodename: '{nodename}' },
name: 'crush_rule',
allowBlank: false,
},
-   {
-   xtype: 'proxmoxKVComboBox',
-   fieldLabel: 'PG Autoscale Mode', // do not localize
-   name: 'pg_autoscale_mode',
-   comboItems: [
-   ['warn', 'warn'],
-   ['on', 'on'],
-   ['off', 'off'],
-   ],
-   value: 'warn',
-   allowBlank: false,
-   autoSelect: false,
-   },
{
xtype: 'proxmoxintegerfield',
-   fieldLabel: 'pg_num',
+   fieldLabel: '# of PGs',
name: 'pg_num',
value: 128,
-   minValue: 8,
+   minValue: 1,
maxValue: 32768,
+   allowBlank: false,
+   emptyText: 128,
+   },
+],
+advancedColumn2: [
+   {
+   xtype: 'numberfield',
+   fieldLabel: gettext('Targe

[pve-devel] [PATCH manager v4 0/9] ceph: allow pools settings to be changed

2021-04-20 Thread Dominik Csapak
originally from Alwin Antreich

mostly a rebase on master, a few eslint fixes (squashed into Alwin's
commits) and 3 small fixups

Alwin Antreich (6):
  ceph: add autoscale_status to api calls
  ceph: gui: add autoscale & flatten pool view
  ceph: set allowed minimal pg_num down to 1
  ceph: gui: rework pool input panel
  ceph: gui: add min num of PG
  fix: ceph: always set pool size first

Dominik Csapak (3):
  API2/Ceph/Pools: remove unnecessary boolean conversion
  ui: ceph/Pools: improve number checking for target_size
  ui: ceph/Pool: show progress on pool edit/create

 PVE/API2/Ceph/Pools.pm|  97 +++--
 PVE/CLI/pveceph.pm|   4 +
 PVE/Ceph/Tools.pm |  61 --
 www/manager6/ceph/Pool.js | 401 +++---
 4 files changed, 422 insertions(+), 141 deletions(-)

-- 
2.20.1






[pve-devel] [PATCH manager v4 2/9] ceph: gui: add autoscale & flatten pool view

2021-04-20 Thread Dominik Csapak
From: Alwin Antreich 

Letting the columns flex needs a flat column head structure.

Signed-off-by: Alwin Antreich 
Signed-off-by: Dominik Csapak 
---
 www/manager6/ceph/Pool.js | 138 ++
 1 file changed, 82 insertions(+), 56 deletions(-)

diff --git a/www/manager6/ceph/Pool.js b/www/manager6/ceph/Pool.js
index 5dabd4e6..7f341ce8 100644
--- a/www/manager6/ceph/Pool.js
+++ b/www/manager6/ceph/Pool.js
@@ -105,14 +105,16 @@ Ext.define('PVE.node.CephPoolList', {
 
 columns: [
{
-   header: gettext('Name'),
-   width: 120,
+   text: gettext('Name'),
+   minWidth: 120,
+   flex: 2,
sortable: true,
dataIndex: 'pool_name',
},
{
-   header: gettext('Size') + '/min',
-   width: 100,
+   text: gettext('Size') + '/min',
+   minWidth: 100,
+   flex: 1,
align: 'right',
renderer: function(v, meta, rec) {
return v + '/' + rec.data.min_size;
@@ -120,62 +122,82 @@ Ext.define('PVE.node.CephPoolList', {
dataIndex: 'size',
},
{
-   text: 'Placement Groups',
-   columns: [
-   {
-   text: '# of PGs', // pg_num',
-   width: 150,
-   align: 'right',
-   dataIndex: 'pg_num',
-   },
-   {
-   text: gettext('Autoscale'),
-   width: 140,
-   align: 'right',
-   dataIndex: 'pg_autoscale_mode',
-   },
-   ],
+   text: '# of Placement Groups',
+   flex: 1,
+   minWidth: 150,
+   align: 'right',
+   dataIndex: 'pg_num',
},
{
-   text: 'CRUSH Rule',
-   columns: [
-   {
-   text: 'ID',
-   align: 'right',
-   width: 50,
-   dataIndex: 'crush_rule',
-   },
-   {
-   text: gettext('Name'),
-   width: 150,
-   dataIndex: 'crush_rule_name',
-   },
-   ],
+   text: gettext('Optimal # of PGs'),
+   flex: 1,
+   minWidth: 140,
+   align: 'right',
+   dataIndex: 'pg_num_final',
+   renderer: function(value, metaData) {
+   if (!value) {
+   value = ' n/a';
+   metaData.tdAttr = 'data-qtip="Needs pg_autoscaler module 
enabled."';
+   }
+   return value;
+   },
},
{
-   text: gettext('Used'),
-   columns: [
-   {
-   text: '%',
-   width: 100,
-   sortable: true,
-   align: 'right',
-   renderer: function(val) {
-   return Ext.util.Format.percent(val, '0.00');
-   },
-   dataIndex: 'percent_used',
-   },
-   {
-   text: gettext('Total'),
-   width: 100,
-   sortable: true,
-   renderer: PVE.Utils.render_size,
-   align: 'right',
-   dataIndex: 'bytes_used',
-   summaryType: 'sum',
-   summaryRenderer: PVE.Utils.render_size,
-   },
-   ],
+   text: gettext('Target Size Ratio'),
+   flex: 1,
+   minWidth: 140,
+   align: 'right',
+   dataIndex: 'target_size_ratio',
+   renderer: Ext.util.Format.numberRenderer('0.'),
+   hidden: true,
+   },
+   {
+   text: gettext('Target Size'),
+   flex: 1,
+   minWidth: 140,
+   align: 'right',
+   dataIndex: 'target_size',
+   hidden: true,
+   renderer: function(v, metaData, rec) {
+   let value = PVE.Utils.render_size(v);
+   if (rec.data.target_size_ratio > 0) {
+   value = ' ' + value;
+   metaData.tdAttr = 'data-qtip="Target Size Ratio takes 
precedence over Target Size."';
+   }
+   return value;
+   },
+   },
+   {
+   text: gettext('Autoscale Mode'),
+   flex: 1,
+   minWidth: 140,
+   align: 'right',
+   dataIndex: 'pg_autoscale_mode',
+   },
+   {
+   text: 'CRUSH Rule (ID)',
+   flex: 1,
+   align: 'right',
+   minWidth: 150,
+   renderer: function(v, meta, rec) {
+   return v + ' (' + rec.data.crush_rule + ')';
+   },
+   dataIndex: 'crush_rule_name',
+   },
+   {
+   text: gettext('Used') + ' (%)',
+   flex: 1,
+   minWidth: 180,
+   sortable: true,
+   align: 'right',
+   dataIndex: 'bytes_

[pve-devel] [PATCH manager v4 8/9] ui: ceph/Pools: improve number checking for target_size

2021-04-20 Thread Dominik Csapak
the field gives us a string, so the second condition could never be
true; parse it to a float instead

Signed-off-by: Dominik Csapak 
---
 www/manager6/ceph/Pool.js | 13 +
 1 file changed, 9 insertions(+), 4 deletions(-)

diff --git a/www/manager6/ceph/Pool.js b/www/manager6/ceph/Pool.js
index 236ed0bc..45333f4d 100644
--- a/www/manager6/ceph/Pool.js
+++ b/www/manager6/ceph/Pool.js
@@ -161,15 +161,20 @@ Ext.define('PVE.CephPoolInputPanel', {
}
});
 
-   if (Ext.isNumber(values.target_size) && values.target_size !== 0) {
-   values.target_size = values.target_size*1024*1024*1024;
+   let target_size = Number.parseFloat(values.target_size);
+
+   if (Ext.isNumber(target_size) && target_size !== 0) {
+   values.target_size = (target_size*1024*1024*1024).toFixed(0);
}
+
return values;
 },
 
 setValues: function(values) {
-   if (Ext.isNumber(values.target_size) && values.target_size !== 0) {
-   values.target_size = values.target_size/1024/1024/1024;
+   let target_size = Number.parseFloat(values.target_size);
+
+   if (Ext.isNumber(target_size) && target_size !== 0) {
+   values.target_size = target_size/1024/1024/1024;
}
 
this.callParent([values]);
-- 
2.20.1






[pve-devel] [PATCH storage] diskmanage: get_partnum: fix check

2021-04-20 Thread Fabian Ebner
Not replacing it with a return, because the current behavior is dying:
Can't "next" outside a loop block
and the single existing caller in pve-manager's API2/Ceph/OSD.pm does not
check the return value.

Also check for $st, which can be undefined in case a non-existing path was
provided. This also led to dying previously:
Can't call method "mode" on an undefined value
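
A minimal sketch of that second case (assuming File::stat, whose object
interface the $st->mode call relies on):

    use strict;
    use warnings;
    use File::stat;

    my $st = stat('/does/not/exist');  # File::stat's stat() returns undef for a missing path
    eval { my $mode = $st->mode };     # this is what used to die:
                                       # Can't call method "mode" on an undefined value
    print $@;

With the added !$st check we die with a clear message instead.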

Signed-off-by: Fabian Ebner 
---
 PVE/Diskmanage.pm | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/PVE/Diskmanage.pm b/PVE/Diskmanage.pm
index 853d333..b916d2e 100644
--- a/PVE/Diskmanage.pm
+++ b/PVE/Diskmanage.pm
@@ -750,7 +750,9 @@ sub get_partnum {
 
 my $st = stat($part_path);
 
-next if !$st->mode || !S_ISBLK($st->mode) || !$st->rdev;
+die "error detecting block device '$part_path'\n"
+   if !$st || !$st->mode || !S_ISBLK($st->mode) || !$st->rdev;
+
 my $major = PVE::Tools::dev_t_major($st->rdev);
 my $minor = PVE::Tools::dev_t_minor($st->rdev);
 my $partnum_path = "/sys/dev/block/$major:$minor/";
-- 
2.20.1






[pve-devel] [PATCH access-control] fix typo in oathkeygen: randon -> random

2021-04-20 Thread Lorenz Stechauner
Signed-off-by: Lorenz Stechauner 
---
 oathkeygen | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/oathkeygen b/oathkeygen
index 89e385a..82e4eec 100755
--- a/oathkeygen
+++ b/oathkeygen
@@ -6,6 +6,6 @@ use MIME::Base32; #libmime-base32-perl
 
 my $test;
 open(RND, "/dev/urandom");
-sysread(RND, $test, 10) == 10 || die "read randon data failed\n";
+sysread(RND, $test, 10) == 10 || die "read random data failed\n";
 print MIME::Base32::encode_rfc3548($test) . "\n";
 
-- 
2.20.1





[pve-devel] [PATCH v2 qemu-server] fix #3369: auto-start vm after failed stopmode backup

2021-04-20 Thread Dylan Whyte
Fixes an issue in which a VM/CT fails to automatically restart after a
failed stop-mode backup.

Also fixes a minor typo in a comment

Signed-off-by: Dylan Whyte 
---

Note:

v1->v2:
- Fix the issue from within PVE::VZDump::QemuServer, rather than adding
  tedious sleep call and state checking in PVE::VZDump.

 PVE/VZDump/QemuServer.pm | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/PVE/VZDump/QemuServer.pm b/PVE/VZDump/QemuServer.pm
index 8920ac1f..42a60fc7 100644
--- a/PVE/VZDump/QemuServer.pm
+++ b/PVE/VZDump/QemuServer.pm
@@ -551,6 +551,7 @@ sub archive_pbs {
 if ($err) {
$self->logerr($err);
$self->mon_backup_cancel($vmid);
+   $self->resume_vm_after_job_start($task, $vmid);
 }
 $self->restore_vm_power_state($vmid);
 
@@ -729,6 +730,7 @@ sub archive_vma {
 if ($err) {
$self->logerr($err);
$self->mon_backup_cancel($vmid);
+   $self->resume_vm_after_job_start($task, $vmid);
 }
 
 $self->restore_vm_power_state($vmid);
@@ -815,7 +817,7 @@ sub enforce_vm_running_for_backup {
 die $@ if $@;
 }
 
-# resume VM againe once we got in a clear state (stop mode backup of running 
VM)
+# resume VM again once in a clear state (stop mode backup of running VM)
 sub resume_vm_after_job_start {
 my ($self, $task, $vmid) = @_;
 
-- 
2.20.1






[pve-devel] [PATCH manager] ui: qemu/Config: disable xtermjs and spice until status is loaded

2021-04-20 Thread Dominik Csapak
We enable/disable spice/xtermjs for the console button in the 'load'
callback of the statusstore, depending on the VM's capabilities,
but until the first load there, the only safe option is noVNC.

So we have to disable xtermjs and spice at the start, else a click on
the button might open a window that cannot connect to the VM.

A forum user probably triggered this:
https://forum.proxmox.com/threads/unable-to-find-serial-interface-console-problem.87705

Signed-off-by: Dominik Csapak 
---
 www/manager6/qemu/Config.js | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/www/manager6/qemu/Config.js b/www/manager6/qemu/Config.js
index 10bf10a4..b5f8cc9c 100644
--- a/www/manager6/qemu/Config.js
+++ b/www/manager6/qemu/Config.js
@@ -199,6 +199,8 @@ Ext.define('PVE.qemu.Config', {
disabled: !caps.vms['VM.Console'],
hidden: template,
consoleType: 'kvm',
+   enableSpice: false,
+   enableXtermjs: false,
consoleName: vm.name,
nodename: nodename,
vmid: vmid,
-- 
2.20.1






[pve-devel] applied: [PATCH manager] report: add multipath.conf and wwids file

2021-04-20 Thread Thomas Lamprecht
On 16.04.21 15:15, Mira Limbeck wrote:
> These 2 files can be helpful for issues with multipath. The multipath -v3
> output is too large most of the time and not required for analyzing and
> solving the issues.
> 
> Signed-off-by: Mira Limbeck 
> ---
>  PVE/Report.pm | 8 ++--
>  1 file changed, 6 insertions(+), 2 deletions(-)
> 
>

applied, thanks!





[pve-devel] applied-series: [PATCH manager 1/2] Fix #2053: OSD destroy only on specified node

2021-04-20 Thread Thomas Lamprecht
On 11.01.21 12:42, Dominic Jäger wrote:
> Allow destroying only OSDs that belong to the node that has been specified in
> the API path.
> 
> So if
>  - OSD 1 belongs to node A and
>  - OSD 2 belongs to node B
> then
>  - pvesh delete nodes/A/ceph/osd/1 is allowed but
>  - pvesh delete nodes/A/ceph/osd/2 is not
> 
> Destroying an OSD via GUI automatically inserts the correct node
> into the API path.
> 
> pveceph automatically insert the local node into the API call, too.
> Consequently, it can now only destroy local OSDs (fix #2053).
>  - pveceph osd destroy 1 is allowed on node A but
>  - pveceph osd destroy 2 is not
> 
> Signed-off-by: Dominic Jäger 
> ---
>  PVE/API2/Ceph/OSD.pm | 25 +
>  1 file changed, 25 insertions(+)
> 
>

applied, thanks!

I reworked the filtering/parsing of the $tree result a bit, maybe you can
re-check if it all seems OK to you? The tests are def. good and helped here -
thanks for them!




[pve-devel] applied: [PATCH storage] diskmanage: get_partnum: fix check

2021-04-20 Thread Thomas Lamprecht
On 20.04.21 14:07, Fabian Ebner wrote:
> Not replacing it with return, because the current behavior is dying:
> Can't "next" outside a loop block
> and the single existing caller in pve-manager's API2/Ceph/OSD.pm does not 
> check
> the return value.
> 
> Also check for $st, which can be undefined in case a non-existing path was
> provided. This also led to dying previously:
> Can't call method "mode" on an undefined value
> 
> Signed-off-by: Fabian Ebner 
> ---
>  PVE/Diskmanage.pm | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
> 
>

applied, thanks!





[pve-devel] applied: [PATCH access-control] fix typo in oathkeygen: randon -> random

2021-04-20 Thread Thomas Lamprecht
On 20.04.21 14:11, Lorenz Stechauner wrote:
> Signed-off-by: Lorenz Stechauner 
> ---
>  oathkeygen | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
>

applied, thanks!





[pve-devel] applied: [PATCH manager] ui: qemu/Config: disable xtermjs and spice until status is loaded

2021-04-20 Thread Thomas Lamprecht
On 20.04.21 16:35, Dominik Csapak wrote:
> We enable/disable spice/xtermjs for the console button in the 'load'
> callback of the statusstore, depending on the vms capabilities,
> but until the first load there, the only safe option is novnc.
> 
> So we have to disable xtermjs and spice on start, else a click on
> the button might open a window that cannot connect to the vm.
> 
> a forum user probably triggered this:
> https://forum.proxmox.com/threads/unable-to-find-serial-interface-console-problem.87705
> 
> Signed-off-by: Dominik Csapak 
> ---
>  www/manager6/qemu/Config.js | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/www/manager6/qemu/Config.js b/www/manager6/qemu/Config.js
> index 10bf10a4..b5f8cc9c 100644
> --- a/www/manager6/qemu/Config.js
> +++ b/www/manager6/qemu/Config.js
> @@ -199,6 +199,8 @@ Ext.define('PVE.qemu.Config', {
>   disabled: !caps.vms['VM.Console'],
>   hidden: template,
>   consoleType: 'kvm',
> + enableSpice: false,
> + enableXtermjs: false,
>   consoleName: vm.name,
>   nodename: nodename,
>   vmid: vmid,
> 

Adding a comment could be nice, to avoid people thinking of "cleaning this up"
in the future.

Anyway, applied. Related in the widest sense:

I have an issue with the default opened console viewer: a VM always opens the
xtermjs one when IMO the spice viewer or noVNC one should be preferred.

Setup details

* windows VM
** display spice (qxl)
** serial port added
* datacenter options for console viewer set to html5






[pve-devel] applied: [PATCH storage] rbd: fix typo in error message

2021-04-20 Thread Thomas Lamprecht
On 20.04.21 10:14, Fabian Ebner wrote:
> Signed-off-by: Fabian Ebner 
> ---
>  PVE/Storage/RBDPlugin.pm | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
>

applied, thanks!





[pve-devel] [PATCH v7 qemu-server 2/5] disk reassign: add API endpoint

2021-04-20 Thread Aaron Lauterer
The goal of this new API endpoint is to provide an easy way to move a
disk between VMs, as until now this was only possible with manual
intervention: either by renaming the VM disk or by manually adding the
disk's volid to the config of the other VM.

The latter can easily cause unexpected behavior, such as a disk attached
to VM B being deleted because it used to be a disk of VM A. This happens
because PVE assumes that the VMID in the volname always matches the VM
the disk is attached to and would thus remove any disk with VMID A
when VM A was deleted.

The term `reassign` was chosen as it is not yet used for VM disks.

Signed-off-by: Aaron Lauterer 
---

v6 -> v7:
this was a rather large change:

* added new parameter to specify target disk config key
* add check if free
* use $update_vm_api to add disk to new VM (hotplug if possible)
* renamed parameters and vars to clearly distinguish between source and
  target VMs / disk config keys
* expand description to mention that a rename works only between VMs on
  the same node
* check if target drive type supports all config parameters of the disk
* removed cluster log. was there to emulate the behavior of move_disk
  but even there it seems to log a very outdated syntax...
* reordered the reassignment procedure
1. reassign/rename volume
2. remove from source vm config
3. update target vm
4. remove potential old replication snapshots

This should help to reduce the possibilities that a disk ends up in
limbo. If the rename/reassign on the storage level fails, we haven't
changed any VM config yet. If the replication snapshot removal
fails, nothing happens to the VMs, it needs to be cleaned up
manually though.
* fixed parameter for replication snapshot removal (thx @febner for the
  hint)
* change worker ID to show which vm & disk is reassigned to which.
tried to find a way that does not interfere with the UPID parser.
AFAICT this one works okayish now. The GUI has a bit of a glitch
where it replaces - with / in the title of the tasks detail view.

v5 -> v6:
* guard Replication snapshot cleanup
additionally to the eval, that code is now only run if the volume is
on a storage with the 'replicate' feature
* add permission check for target vmid
* changed regex to match unused keys better

thx @Fabian for these suggestions/catching problems

v4 -> v5:
* implemented suggestions from Fabian [1]
* logging before action
* improving description
* improving error messages
* using Replication::prepare to remove replication snapshots
* check if disk is physical disk using /dev/...

v3 -> v4: nothing

v2 -> v3:
* reordered the locking as discussed with fabian [0] to
run checks
fork worker
lock source config
lock target config
run checks
...

* added more checks
* will not reassign to or from templates
* will not reassign if VM has snapshots present
* cleanup if disk used to be replicated
* made task log slightly more verbose
* integrated general recommendations regarding code
* renamed `disk` to `drive_key`
* prepended some vars with `source_` for easier distinction

v1 -> v2: print config key and volid info at the end of the job so it
shows up on the CLI and task log

rfc -> v1:
* add support to reassign unused disks
* add support to provide a config digest for the target vm
* add additional check if disk key is present in config
* reorder checks a bit

In order to support unused disk I had to extend
PVE::QemuServer::Drive::valid_drive_names for the API parameter
validation.

Checks are ordered so that cheap tests are run at the first chance to
fail early.

The check if both VMs are present on the node is a bit redundant because
locking the config files will fail if the VM is not present. But with
the additional check we can provide a useful error message to the user
instead of a "Configuration file xyz does not exist" error.

[0] https://lists.proxmox.com/pipermail/pve-devel/2020-September/044930.html
[1] https://lists.proxmox.com/pipermail/pve-devel/2020-November/046030.html
 PVE/API2/Qemu.pm| 220 
 PVE/QemuServer/Drive.pm |   4 +
 2 files changed, 224 insertions(+)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index c56b609..b90a83b 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -35,6 +35,7 @@ use PVE::API2::Qemu::Agent;
 use PVE::VZDump::Plugin;
 use PVE::DataCenterConfig;
 use PVE::SSHInfo;
+use PVE::Replication;
 
 BEGIN {
 if (!$ENV{PVE_GENERATING_DOCS}) {
@@ -4395,4 +4396,223 @@ __PACKAGE__->register_method({
return PVE::QemuServer::Cloudinit::dump_cloudinit_config($conf, 
$param->{vmid}, $param->{type});
 }});
 
+__PACKAGE__->register_method({
+name => 'reassign_vm_disk',
+path => '{vmid}/reassign_disk',
+method => 'POST',
+protected => 1,
+proxyto => 'node',
+description => "Reassign a disk to another VM on the same node",
+

[pve-devel] [PATCH v7 qemu-server 5/5] cli: qm: change move_disk parameter to move-disk

2021-04-20 Thread Aaron Lauterer
also add alias to keep move_disk working.

Signed-off-by: Aaron Lauterer 
---

this one is optional but would align the use of - instead of _ in the
command names

 PVE/CLI/qm.pm | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/PVE/CLI/qm.pm b/PVE/CLI/qm.pm
index 6d78600..b629e8f 100755
--- a/PVE/CLI/qm.pm
+++ b/PVE/CLI/qm.pm
@@ -912,7 +912,8 @@ our $cmddef = {
 
 resize => [ "PVE::API2::Qemu", 'resize_vm', ['vmid', 'disk', 'size'], { 
node => $nodename } ],
 
-move_disk => [ "PVE::API2::Qemu", 'move_vm_disk', ['vmid', 'disk', 
'storage'], { node => $nodename }, $upid_exit ],
+'move-disk' => [ "PVE::API2::Qemu", 'move_vm_disk', ['vmid', 'disk', 
'storage'], { node => $nodename }, $upid_exit ],
+move_disk => { alias => 'move-disk' },
 
 'reassign-disk' => [ "PVE::API2::Qemu", 'reassign_vm_disk', 
['source-vmid', 'target-vmid', 'source-drive', 'target-drive'], { node => 
$nodename } ],
 
-- 
2.20.1






[pve-devel] [PATCH v7 manager 4/5] ui: tasks: add qmreassign task description

2021-04-20 Thread Aaron Lauterer
Signed-off-by: Aaron Lauterer 
---
v4->v7: rebased

 www/manager6/Utils.js | 1 +
 1 file changed, 1 insertion(+)

diff --git a/www/manager6/Utils.js b/www/manager6/Utils.js
index f502950f..51942938 100644
--- a/www/manager6/Utils.js
+++ b/www/manager6/Utils.js
@@ -1801,6 +1801,7 @@ Ext.define('PVE.Utils', {
qmigrate: ['VM', gettext('Migrate')],
qmmove: ['VM', gettext('Move disk')],
qmpause: ['VM', gettext('Pause')],
+   qmreassign: ['VM', gettext('Reassign disk')],
qmreboot: ['VM', gettext('Reboot')],
qmreset: ['VM', gettext('Reset')],
qmrestore: ['VM', gettext('Restore')],
-- 
2.20.1






[pve-devel] [PATCH v7 qemu-server 3/5] cli: disk reassign: add reassign_disk to qm command

2021-04-20 Thread Aaron Lauterer
Signed-off-by: Aaron Lauterer 
---
v6 -> v7:
* added target drive parameter
* renamed parameters to include source/target
* use - instead of _ in command name

v5 -> v6: nothing
v4 -> v5: renamed `drive_key` to `drive_name`
v3 ->v4: nothing
v2 -> v3: renamed parameter `disk` to `drive_key`
rfc -> v1 -> v2: nothing changed

 PVE/CLI/qm.pm | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/PVE/CLI/qm.pm b/PVE/CLI/qm.pm
index f8972bd..6d78600 100755
--- a/PVE/CLI/qm.pm
+++ b/PVE/CLI/qm.pm
@@ -914,6 +914,8 @@ our $cmddef = {
 
 move_disk => [ "PVE::API2::Qemu", 'move_vm_disk', ['vmid', 'disk', 
'storage'], { node => $nodename }, $upid_exit ],
 
+'reassign-disk' => [ "PVE::API2::Qemu", 'reassign_vm_disk', 
['source-vmid', 'target-vmid', 'source-drive', 'target-drive'], { node => 
$nodename } ],
+
 unlink => [ "PVE::API2::Qemu", 'unlink', ['vmid'], { node => $nodename } ],
 
 config => [ "PVE::API2::Qemu", 'vm_config', ['vmid'],
-- 
2.20.1






[pve-devel] [PATCH v7 series 0/5] disk reassign: add new feature

2021-04-20 Thread Aaron Lauterer
This series implements a new feature which allows users to easily
reassign disks between VMs. Currently this is only possible with one of
the following manual steps:

* rename the disk image/file and do a `qm rescan`
* configure the disk manually and use the old image name, having an
image for VM A assigned to VM B

The latter can cause unexpected behavior because PVE expects that the
VMID in a disk name always corresponds to the VM it is assigned to. Thus,
when a disk originally from VM A was manually configured as a disk for VM B,
it happens that, when deleting VM A, the disk in question will be
deleted as well because it still has the VMID of VM A in its name.

To issue a reassign from the CLI run:

qm reassign-disk <source-vmid> <target-vmid> <source-drive> <target-drive>

where <source-drive> and <target-drive> are the config keys of the disk,
e.g. ide0, scsi1 and so on.
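
For example (with purely hypothetical IDs, assuming VM 100 and VM 200 exist
on the same node):

qm reassign-disk 100 200 scsi1 scsi1

would rename the underlying volume to match VMID 200, remove scsi1 from the
config of VM 100 and attach the volume as scsi1 on VM 200.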

The following storage types are implemented at the moment:
* dir based ones
* ZFS
* (thin) LVM
* Ceph RBD

v6 -> v7:
This was a rather large change compared to the previous version. I hope I
incorporated all suggestions and hints and did not miss anything.
More details can be found in the individual patches.

* removed original patch 4 as it is not needed (thx @febner for the hint)
* added another (optional) patch to align move_disk to use a dash
  instead of an underscore
* make sure storage is activated
* restructure storage plugins so that dir based ones are handled
  directly in plugin.pm with API version checks for external plugins
* add target disk key
* use update_vm_api to add the disk to the new VM (hotplug if possible)
* removed cluster log
* reordered reassing procedure
* changed worker ID to show source and target better

v5 -> v6:
* guard Replication snapshot cleanup
* add permission check for target vmid
* changed regex to match unused keys better
* refactor dir based feature check to reduce code repetition

v4 -> v5:
* rebase on current master
* reorder patches
* rename `drive_key` to `drive_name`
thanks @Dominic for pointing out that there already are a lot of
different names in use for this [0] and not to invent another one 
* implemented suggested changes from Fabian [1][2]. More directly in the
patches themselves

v3 -> v4:
* revert intermediate storage plugin for directory based plugins
* add a `die "not supported"` method in Plugin.pm
* dir based plugins now call the file_reassign_volume method in
  Plugin.pm as the generic file/directory based method
* restored old `volume_has_feature` method in Plugin.pm and override it
  in directory based plugins to check against the new `reassign` feature
  (not too happy about the repetition for each plugin)
* task description mapping has been moved from widget-toolkit to
  pve-manager/utils


v2 -> v3:
* change locking approach
* add more checks
* add intermedia storage plugin for directory based plugins
* use feature flags
* split up the reassign method to have a dedicated method for the
renaming itself
* handle linked clones
* clean up if disk used to be replicated

I hope I didn't forget anything major.

v1 -> v2:
print info about the new disk volid and key at the end of the job so it
shows up in the CLI output and task log

Changes from RFC -> V1:
* support to reassign unused disks
* digest for target vm config
* reorder the checks a bit
* adding another one to check if the given key for the disk even exists
  in the config.

[0] https://lists.proxmox.com/pipermail/pve-devel/2020-November/045986.html
[1] https://lists.proxmox.com/pipermail/pve-devel/2020-November/046031.html
[2] https://lists.proxmox.com/pipermail/pve-devel/2020-November/046030.html

storage: Aaron Lauterer (1):
  add disk reassign feature

 PVE/Storage.pm   | 19 +++--
 PVE/Storage/LVMPlugin.pm | 34 +++
 PVE/Storage/LvmThinPlugin.pm |  1 +
 PVE/Storage/Plugin.pm| 52 
 PVE/Storage/RBDPlugin.pm | 37 +
 PVE/Storage/ZFSPoolPlugin.pm | 38 ++
 6 files changed, 179 insertions(+), 2 deletions(-)

qemu-server: Aaron Lauterer (3):
  disk reassign: add API endpoint
  cli: disk reassign: add reassign_disk to qm command
  cli: qm: change move_disk parameter to move-disk

 PVE/API2/Qemu.pm| 220 
 PVE/CLI/qm.pm   |   5 +-
 PVE/QemuServer/Drive.pm |   4 +
 3 files changed, 228 insertions(+), 1 deletion(-)

manager: Aaron Lauterer (1):
  ui: tasks: add qmreassign task description

 www/manager6/Utils.js | 1 +
 1 file changed, 1 insertion(+)

-- 
2.20.1






[pve-devel] [PATCH v7 storage 1/5] add disk reassign feature

2021-04-20 Thread Aaron Lauterer
Functionality has been added for the following storage types:

* dir based ones
* ZFS
* (thin) LVM
* Ceph

A new feature `reassign` has been introduced to mark which storage
plugin supports the feature.

Version API and AGE have been bumped.

Signed-off-by: Aaron Lauterer 
---
v6 -> v7:
We now place everything for dir based plugins in Plugin.pm and check
against the supported API version to avoid running the code on external
plugins that do not yet officially support the reassign feature.

* activate storage before doing anything else
* checks if storage is enabled as well
* code cleanup
* change long function calls to multiline
* base parameter is not passed to rename function anymore but
  handled in the reassign function
* prefixed vars with source_ / target_ to make them easier to
  distinguish

v5 -> v6:
* refactor dir based feature check to reduce code repetition by
  introducing the file_can_reassign_volume sub that does the actual check

v4 -> v5:
* rebased on master
* bumped api ver and api age
* rephrased "not implemented" message as suggested [0].

v3 -> v4:
* revert intermediate storage plugin for directory based plugins
* add a `die "not supported"` method in Plugin.pm
* dir based plugins now call the file_reassign_volume method in
  Plugin.pm as the generic file/directory based method
* restored old `volume_has_feature` method in Plugin.pm and override it
  in directory based plugins to check against the new `reassign` feature
  (not too happy about the repetition for each plugin)

v2 -> v3:
* added feature flags instead of dummy "not implemented" methods in
  plugins which do not support it as that would break compatibility with
  3rd party plugins.
  Had to make $features available outside the `has_features` method in
  Plugins.pm in order to be able to individually add features in the
  `BaseDirPlugin.pm`.
* added the BaseDirPlugin.pm to maintain compat with 3rd party plugins,
  this is explained in the commit message
* moved the actual renaming from the `reassign_volume` to a dedicated
  `rename_volume` method to make this functionality available to other
  possible uses in the future.
* added support for linked clones ($base)


rfc -> v1 -> v2: nothing changed

[0] https://lists.proxmox.com/pipermail/pve-devel/2020-November/046031.html




 PVE/Storage.pm   | 19 +++--
 PVE/Storage/LVMPlugin.pm | 34 +++
 PVE/Storage/LvmThinPlugin.pm |  1 +
 PVE/Storage/Plugin.pm| 52 
 PVE/Storage/RBDPlugin.pm | 37 +
 PVE/Storage/ZFSPoolPlugin.pm | 38 ++
 6 files changed, 179 insertions(+), 2 deletions(-)

diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index 122c3e9..ea782cc 100755
--- a/PVE/Storage.pm
+++ b/PVE/Storage.pm
@@ -41,11 +41,11 @@ use PVE::Storage::DRBDPlugin;
 use PVE::Storage::PBSPlugin;
 
 # Storage API version. Increment it on changes in storage API interface.
-use constant APIVER => 8;
+use constant APIVER => 9;
 # Age is the number of versions we're backward compatible with.
 # This is like having 'current=APIVER' and age='APIAGE' in libtool,
 # see 
https://www.gnu.org/software/libtool/manual/html_node/Libtool-versioning.html
-use constant APIAGE => 7;
+use constant APIAGE => 8;
 
 # load standard plugins
 PVE::Storage::DirPlugin->register();
@@ -349,6 +349,7 @@ sub volume_snapshot_needs_fsfreeze {
 #snapshot - taking a snapshot is possible
 #sparseinit - volume is sparsely initialized
 #template - conversion to base image is possible
+#reassign - reassigning disks to other guest is possible
 # $snap - check if the feature is supported for a given snapshot
 # $running - if the guest owning the volume is running
 # $opts - hash with further options:
@@ -1843,6 +1844,20 @@ sub complete_volume {
 return $res;
 }
 
+sub reassign_volume {
+my ($cfg, $volid, $target_vmid) = @_;
+
+my ($storeid, $volname) = parse_volume_id($volid);
+
+activate_storage($cfg, $storeid);
+
+my $scfg = storage_config($cfg, $storeid);
+my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
+
+
+return $plugin->reassign_volume($scfg, $storeid, $volname, $target_vmid);
+}
+
 # Various io-heavy operations require io/bandwidth limits which can be
 # configured on multiple levels: The global defaults in datacenter.cfg, and
 # per-storage overrides. When we want to do a restore from storage A to storage
diff --git a/PVE/Storage/LVMPlugin.pm b/PVE/Storage/LVMPlugin.pm
index df49b76..ff169f6 100644
--- a/PVE/Storage/LVMPlugin.pm
+++ b/PVE/Storage/LVMPlugin.pm
@@ -339,6 +339,13 @@ sub lvcreate {
 run_command($cmd, errmsg => "lvcreate '$vg/$name' error");
 }
 
+sub lvrename {
+my ($vg, $oldname, $newname) = @_;
+
+my $cmd = ['/sbin/lvrename', $vg, $oldname, $newname];
+run_command($cmd, errmsg => "lvrename '${vg}/${oldname}' to '${newname}' 
error");
+}
+
 sub alloc_ima

[pve-devel] [RFC pve-kernel-meta 1/2] proxmox-boot-tool: rename from pve-efiboot-tool

2021-04-20 Thread Stoiko Ivanov
We will be using this mechanism also for ZFS systems booting with BIOS
legacy boot, and the tool is also used in PMG and PBS.

A symlink is kept in place for compatibility reasons.

Signed-off-by: Stoiko Ivanov 
---
 Makefile| 2 +-
 bin/Makefile| 2 +-
 bin/{pve-efiboot-tool => proxmox-boot-tool} | 0
 debian/pve-kernel-helper.install| 2 +-
 debian/pve-kernel-helper.links  | 1 +
 {efiboot => proxmox-boot}/Makefile  | 0
 {efiboot => proxmox-boot}/functions | 0
 {efiboot => proxmox-boot}/pve-auto-removal  | 0
 {efiboot => proxmox-boot}/pve-efiboot-sync  | 2 +-
 {efiboot => proxmox-boot}/zz-pve-efiboot| 0
 10 files changed, 5 insertions(+), 4 deletions(-)
 rename bin/{pve-efiboot-tool => proxmox-boot-tool} (100%)
 create mode 100644 debian/pve-kernel-helper.links
 rename {efiboot => proxmox-boot}/Makefile (100%)
 rename {efiboot => proxmox-boot}/functions (100%)
 rename {efiboot => proxmox-boot}/pve-auto-removal (100%)
 rename {efiboot => proxmox-boot}/pve-efiboot-sync (84%)
 rename {efiboot => proxmox-boot}/zz-pve-efiboot (100%)

diff --git a/Makefile b/Makefile
index 0b62b3e..90d5989 100644
--- a/Makefile
+++ b/Makefile
@@ -13,7 +13,7 @@ BUILD_DIR=build
 
 DEBS=${KERNEL_DEB} ${HEADERS_DEB} ${HELPER_DEB}
 
-SUBDIRS = efiboot bin
+SUBDIRS = proxmox-boot bin
 
 .PHONY: all
 all: ${SUBDIRS}
diff --git a/bin/Makefile b/bin/Makefile
index 058c86f..b78fa42 100644
--- a/bin/Makefile
+++ b/bin/Makefile
@@ -5,7 +5,7 @@ all:
 
 install:
install -d ${SBINDIR}
-   install -m 0755 pve-efiboot-tool ${SBINDIR}/
+   install -m 0755 proxmox-boot-tool ${SBINDIR}/
 
 .PHONY: clean distclean
 distclean:
diff --git a/bin/pve-efiboot-tool b/bin/proxmox-boot-tool
similarity index 100%
rename from bin/pve-efiboot-tool
rename to bin/proxmox-boot-tool
diff --git a/debian/pve-kernel-helper.install b/debian/pve-kernel-helper.install
index 6f7f713..82a9672 100644
--- a/debian/pve-kernel-helper.install
+++ b/debian/pve-kernel-helper.install
@@ -1,5 +1,5 @@
 etc/kernel/postinst.d/*
 etc/kernel/postrm.d/*
 etc/initramfs/post-update.d/pve-efiboot-sync
-usr/sbin/pve-efiboot-tool
+usr/sbin/proxmox-boot-tool
 usr/share/pve-kernel-helper/scripts/functions
diff --git a/debian/pve-kernel-helper.links b/debian/pve-kernel-helper.links
new file mode 100644
index 000..70bf372
--- /dev/null
+++ b/debian/pve-kernel-helper.links
@@ -0,0 +1 @@
+/usr/sbin/proxmox-boot-tool /usr/sbin/pve-efiboot-tool
diff --git a/efiboot/Makefile b/proxmox-boot/Makefile
similarity index 100%
rename from efiboot/Makefile
rename to proxmox-boot/Makefile
diff --git a/efiboot/functions b/proxmox-boot/functions
similarity index 100%
rename from efiboot/functions
rename to proxmox-boot/functions
diff --git a/efiboot/pve-auto-removal b/proxmox-boot/pve-auto-removal
similarity index 100%
rename from efiboot/pve-auto-removal
rename to proxmox-boot/pve-auto-removal
diff --git a/efiboot/pve-efiboot-sync b/proxmox-boot/pve-efiboot-sync
similarity index 84%
rename from efiboot/pve-efiboot-sync
rename to proxmox-boot/pve-efiboot-sync
index c3ccf8e..21adc85 100644
--- a/efiboot/pve-efiboot-sync
+++ b/proxmox-boot/pve-efiboot-sync
@@ -7,5 +7,5 @@ set -e
 # this variable will be set to 1 and we do nothing, since our pve-kernel
 # hooks will update the ESPs all at once anyway.
 if [ -z "$INITRAMFS_TOOLS_KERNEL_HOOK" ]; then
-   /usr/sbin/pve-efiboot-tool refresh --hook 'zz-pve-efiboot'
+   /usr/sbin/proxmox-boot-tool refresh --hook 'zz-pve-efiboot'
 fi
diff --git a/efiboot/zz-pve-efiboot b/proxmox-boot/zz-pve-efiboot
similarity index 100%
rename from efiboot/zz-pve-efiboot
rename to proxmox-boot/zz-pve-efiboot
-- 
2.20.1






[pve-devel] [RFC pve-kernel-meta 2/2] proxmox-boot-tool: handle legacy boot zfs installs

2021-04-20 Thread Stoiko Ivanov
This patch adds support for booting non-UEFI/legacy/BIOS-boot ZFS
installs by using proxmox-boot-tool to copy the kernels to the ESP
and then generating a fitting grub config for booting from the vfat ESP:

* grub is installed onto the ESP and the MBR points to the ESP
* after copying/deleting the kernels, proxmox-boot-tool bind-mounts the
  ESP on /boot (inside the new mount namespace)
* update-grub then generates a fitting config.
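
For a legacy-boot system, the per-ESP flow then roughly becomes (simplified
sketch; copy_kernels is a stand-in for the actual copy_and_config_kernels
logic):

    mount "$part" "$mountpoint"          # the vfat ESP
    copy_kernels "$mountpoint"           # copy kernels/initrds onto the ESP
    mount --bind "$mountpoint" /boot     # inside the hook's private mount namespace
    update-grub                          # generates a config booting from the vfat ESP
    umount /boot
    umount "$mountpoint"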

Some paths/sanity-checks needed adaptation to differentiate between EFI
boot and legacy boot (based on the existence of /sys/firmware/efi).

The approach is inspired by @avw in our community-forum [0].

[0] https://forum.proxmox.com/threads/zfs-error-no-such-device-error-unknown-filesystem-entering-rescue-mode.75122/post-374799

Signed-off-by: Stoiko Ivanov 
---
best viewed with `git show -w`

 bin/proxmox-boot-tool   | 21 ++
 proxmox-boot/zz-pve-efiboot | 81 ++---
 2 files changed, 70 insertions(+), 32 deletions(-)

diff --git a/bin/proxmox-boot-tool b/bin/proxmox-boot-tool
index f57a752..dd23231 100755
--- a/bin/proxmox-boot-tool
+++ b/bin/proxmox-boot-tool
@@ -150,14 +150,19 @@ init() {
echo "Mounting '$part' on '$esp_mp'."
mount -t vfat "$part" "$esp_mp"
 
-   echo "Installing systemd-boot.."
-   mkdir -p "$esp_mp/$PMX_ESP_DIR"
-   bootctl --path "$esp_mp" install
-
-   echo "Configuring systemd-boot.."
-   echo "timeout 3" > "$esp_mp/$PMX_LOADER_CONF.tmp"
-   echo "default proxmox-*" >> "$esp_mp/$PMX_LOADER_CONF.tmp"
-   mv "$esp_mp/$PMX_LOADER_CONF.tmp" "$esp_mp/$PMX_LOADER_CONF"
+   if [ -d /sys/firmware/efi ]; then
+   echo "Installing systemd-boot.."
+   mkdir -p "$esp_mp/$PMX_ESP_DIR"
+   bootctl --path "$esp_mp" install
+
+   echo "Configuring systemd-boot.."
+   echo "timeout 3" > "$esp_mp/$PMX_LOADER_CONF.tmp"
+   echo "default proxmox-*" >> "$esp_mp/$PMX_LOADER_CONF.tmp"
+   mv "$esp_mp/$PMX_LOADER_CONF.tmp" "$esp_mp/$PMX_LOADER_CONF"
+   else
+   echo "Installing grub i386-pc target.."
+   grub-install --boot-directory $esp_mp --target i386-pc "/dev/$PKNAME"
+   fi
echo "Unmounting '$part'."
umount "$part"
 
diff --git a/proxmox-boot/zz-pve-efiboot b/proxmox-boot/zz-pve-efiboot
index 1c4ad73..1ce89f7 100755
--- a/proxmox-boot/zz-pve-efiboot
+++ b/proxmox-boot/zz-pve-efiboot
@@ -76,18 +76,30 @@ update_esp_func() {
{ warn "creation of mountpoint ${mountpoint} failed - 
skipping"; return; }
mount "${path}" "${mountpoint}" || \
{ warn "mount of ${path} failed - skipping"; return; }
-   if [ ! -f "${mountpoint}/$PMX_LOADER_CONF" ]; then
-   warn "${path} contains no loader.conf - skipping"
-   return
-   fi
-   if [ ! -d "${mountpoint}/$PMX_ESP_DIR" ]; then
-   warn "${path}/$PMX_ESP_DIR does not exist- skipping"
+   if [ -d /sys/firmware/efi ]; then
+   if [ ! -f "${mountpoint}/$PMX_LOADER_CONF" ]; then
+   warn "${path} contains no loader.conf - skipping"
+   return
+   fi
+   if [ ! -d "${mountpoint}/$PMX_ESP_DIR" ]; then
+   warn "${path}/$PMX_ESP_DIR does not exist- skipping"
+   return
+   fi
+   elif [ ! -d "${mountpoint}/grub" ]; then
+   warn "${path} contains no grub directory - skipping"
return
fi
-
warn "Copying and configuring kernels on ${path}"
copy_and_config_kernels "${mountpoint}"
-   remove_old_kernels "${mountpoint}"
+   if [ -d /sys/firmware/efi ]; then
+   remove_old_kernels_efi "${mountpoint}"
+   else
+   remove_old_kernels_legacy "${mountpoint}"
+   mount --bind "${mountpoint}" "/boot"
+   update-grub
+   umount /boot
+
+   fi
 
umount "${mountpoint}" || \
{ warn "umount of ${path} failed - failure"; exit 0; }
@@ -113,26 +125,33 @@ copy_and_config_kernels() {
continue
fi
 
-   warn "  Copying kernel and creating boot-entry for ${kver}"
-   KERNEL_ESP_DIR="${PMX_ESP_DIR}/${kver}"
-   KERNEL_LIVE_DIR="${esp}/${KERNEL_ESP_DIR}"
-   mkdir -p "${KERNEL_LIVE_DIR}"
-   cp -u --preserve=timestamps "${linux_image}" "${KERNEL_LIVE_DIR}/"
-   cp -u --preserve=timestamps "${initrd}" "${KERNEL_LIVE_DIR}/"
-
-   # create loader entry
-   cat > "${esp}/loader/entries/proxmox-${kver}.conf" <<- EOF
-   title ${LOADER_TITLE}
-   version  ${kver}
-   options   ${CMDLINE}
-   linux /${KERNEL_ESP_DIR}/vmlinuz-${kver}
-   initrd   /${KE

[pve-devel] [RFC pve-kernel-meta 0/2] boot ZFS on legacy BIOS systems from vfat

2021-04-20 Thread Stoiko Ivanov
This patchset has been long overdue; it complements the solution for booting
ZFS on UEFI systems using systemd-boot.

With the upgrade to ZFS 2.0.0 (and its support for ZSTD compression), quite
a few users found out that their systems were still booted via legacy BIOS
boot and were consequently rendered unbootable after enabling zstd
compression on (a dataset on) rpool.

The solution is inspired by our community forum, especially @avw, and seems
rather lightweight (patch 2/2 is best viewed with '-w').
My first approach was to generate a working grub config ourselves, but
given that grub has a few years of special-case handling behind it,
bind-mounting the ESP on /boot and running 'update-grub' seems like the
less painful way.

* patch 1/2 renames pve-efiboot-tool to proxmox-boot-tool (which seems more
appropriate by now)
* patch 2/2 adds support for installing grub appropriately on the ESPs
  and running the kernel sync-logic in a way that update-grub feels fine
  with

Sending as RFC, because this is a proof-of-concept and missing quite a few
things.

What works:
* installing this version on a root ZFS RAID-Z@ PVE (based on an old pre 6.2
  install)
* reformatting all 4 ESPs (`proxmox-boot-tool format /dev/sda2 --force`)
* initializing them
* rebooting into 5.4.106 and zfs 2.0.0
* upgrading the pool, setting compression=zstd, writing a file, rebooting
  (successfully)
* rebooting into an old 5.3 kernel - and getting greeted by busy-box instead
  of grub-rescue

What's missing (at least):
* support in the installer
* the renaming is not quite complete (the kernel hooks still contain
  pve/efi in their names)
* testing the removal part of the kernel-sync


Stoiko Ivanov (2):
  proxmox-boot-tool: rename from pve-efiboot-tool
  proxmox-boot-tool: handle legacy boot zfs installs

 Makefile|  2 +-
 bin/Makefile|  2 +-
 bin/{pve-efiboot-tool => proxmox-boot-tool} | 21 --
 debian/pve-kernel-helper.install|  2 +-
 debian/pve-kernel-helper.links  |  1 +
 {efiboot => proxmox-boot}/Makefile  |  0
 {efiboot => proxmox-boot}/functions |  0
 {efiboot => proxmox-boot}/pve-auto-removal  |  0
 {efiboot => proxmox-boot}/pve-efiboot-sync  |  2 +-
 {efiboot => proxmox-boot}/zz-pve-efiboot| 81 +++--
 10 files changed, 75 insertions(+), 36 deletions(-)
 rename bin/{pve-efiboot-tool => proxmox-boot-tool} (94%)
 create mode 100644 debian/pve-kernel-helper.links
 rename {efiboot => proxmox-boot}/Makefile (100%)
 rename {efiboot => proxmox-boot}/functions (100%)
 rename {efiboot => proxmox-boot}/pve-auto-removal (100%)
 rename {efiboot => proxmox-boot}/pve-efiboot-sync (84%)
 rename {efiboot => proxmox-boot}/zz-pve-efiboot (69%)

-- 
2.20.1






Re: [pve-devel] [pve-manager] Adding real disk usage information (discussion)

2021-04-20 Thread Bruce Wainer
Dominik,
Thank you for the insight. There is certainly complexity I did not
consider, even if I were to look at the narrow case of local ZFS storage.
Regardless, this would be helpful to me, and if I make anything I will
submit it. I have already signed the CLA and have code accepted in
pve-zsync.
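
For that narrow local-ZFS case, I was picturing something along these lines
as a starting point (just a rough, untested sketch; the dataset name is made
up):

    # per-volume numbers straight from ZFS
    zfs get -Hp -o name,property,value \
        used,usedbydataset,usedbysnapshots rpool/data/vm-100-disk-0
    # usedbydataset   -> "Current Disk Usage (Thin)"
    # usedbysnapshots -> "Snapshot Usage"
    # used            -> "Total Disk Usage"
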
Thank you,
Bruce

On Mon, Apr 19, 2021 at 2:55 AM Dominik Csapak  wrote:

> On 4/16/21 22:18, Bruce Wainer wrote:
> > Hello,
> >
>
> Hi,
>
> > I am interested in seeing real disk usage information for VM Disks and CT
> > Volumes, on storage types that have thin provisioning and/or snapshots.
> > Specifically I would like to see "Current Disk Usage (Thin)" and either
> > "Snapshot Usage" or "Total Disk Usage". I only use local ZFS on servers
> at
> > this time, but I'm sure the GUI side would be best made flexible.
>
> while this sounds sensible, this will get hard very fast.
> For example, take an LVM-Thin storage.
>
> I have a template which has an LV which uses some space.
> This can have X linked clones, where each clone can have Y snapshots.
>
> Since lvmthin LVs/snapshots/etc. are only very loosely coupled,
> it is very hard to attribute the correct number to any
> of those VMs/templates (e.g. do you want to count the
> template's storage again for each VM? only once? what if
> you cloned a VM from a snapshot?).
>
> It gets even harder on storage that can deduplicate (e.g. ZFS) or
> where the 'real' usage is dynamically inflated by some form of
> replication (e.g. Ceph).
>
> So, while this sounds nice, and we would probably not oppose a clean
> solution, this is not a trivial problem to solve.
>
> >
> > Is someone interested in helping with this? Where would I start,
> especially
> > on the GUI part, if I were to develop this myself?
>
> anyway, to answer this question, the storage plugins in the backend can
> be found in the pve-storage git repo[0]
>
> the places where the status API calls for VMs/CTs are implemented live
> in qemu-server[1] and pve-container[2] respectively
> (the API part is in PVE/API2/)
>
> you can find the GUI part in pve-manager[3] under www/manager6
>
> also, if you want to send patches, please read the developer
> documentation [4], especially the bit about the CLA
>
> if you have any more questions, please ask :)
>
> hope this helps
> kind regards
>
> 0:
>
> https://git.proxmox.com/?p=pve-storage.git;a=tree;f=PVE/Storage;h=fd53af5e74407deda65785b164fb61a4f644a6e0;hb=refs/heads/master
> 1: https://git.proxmox.com/?p=qemu-server.git;a=summary
> 2: https://git.proxmox.com/?p=pve-container.git;a=summary
> 3:
>
> https://git.proxmox.com/?p=pve-manager.git;a=tree;f=www/manager6;hb=refs/heads/master
> 4: https://pve.proxmox.com/wiki/Developer_Documentation
>
> >
> > Thank you,
> > Bruce Wainer



Re: [pve-devel] applied: [PATCH manager] ui: qemu/Config: disable xtermjs and spice until status is loaded

2021-04-20 Thread Dominik Csapak

On 4/20/21 18:20, Thomas Lamprecht wrote:

On 20.04.21 16:35, Dominik Csapak wrote:

We enable/disable spice/xtermjs for the console button in the 'load'
callback of the statusstore, depending on the VM's capabilities,
but until that first load has happened, the only safe option is noVNC.

So we have to disable xtermjs and spice at startup, else a click on
the button might open a window that cannot connect to the VM.

a forum user probably triggered this:
https://forum.proxmox.com/threads/unable-to-find-serial-interface-console-problem.87705

Signed-off-by: Dominik Csapak 
---
  www/manager6/qemu/Config.js | 2 ++
  1 file changed, 2 insertions(+)

diff --git a/www/manager6/qemu/Config.js b/www/manager6/qemu/Config.js
index 10bf10a4..b5f8cc9c 100644
--- a/www/manager6/qemu/Config.js
+++ b/www/manager6/qemu/Config.js
@@ -199,6 +199,8 @@ Ext.define('PVE.qemu.Config', {
disabled: !caps.vms['VM.Console'],
hidden: template,
consoleType: 'kvm',
+   enableSpice: false,
+   enableXtermjs: false,
consoleName: vm.name,
nodename: nodename,
vmid: vmid,



adding a comment could be nice, to avoid people trying to "clean this up"
in the future.


yes, i'll add a comment



Anyway, applied. Related in the widest sense:

I have an issue with the default console viewer that gets opened: a VM
always opens the xtermjs one, when IMO the spice viewer or noVNC one should
be preferred.

Setup details

* windows VM
** display spice (qxl)
** serial port added
* datacenter options for console viewer set to html5



weird, it works differently for me:

* datacenter html5
* vm with display spice + serial port
* opens novnc

i'll investigate

