Re: [pve-devel] [RFC manager 2/2] ui: dc: backup: Add detail dialog, disks included

2019-12-20 Thread Thomas Lamprecht
On 12/12/19 at 11:27 AM, Aaron Lauterer wrote:
> Adds a 'Detail' button which opens a new dialog to show which disks
> and mountpoints will be included in the selected backup job.
> 
> Signed-off-by: Aaron Lauterer 
> ---
> 
> For this RFC the new detail dialog is a bit bare and only shows the
> tree panel with the inclusion status.
> 
> It probably would be nice to have a short summary above the tree panel
> that shows the other details of the job as well.

Yes, the next scheduled run and the target storage would be nice to see;
maybe even the last actually executed run could be shown, though that would
need some more backend changes to find that info (maybe from the task logs).

> If we do not want that,
> we should find a better name than `Detail` to indicate that it
> will only show what is included in the backup.
> 
>  www/manager6/Utils.js |  23 +++
>  www/manager6/dc/Backup.js | 124 +-
>  2 files changed, 146 insertions(+), 1 deletion(-)
> 
> diff --git a/www/manager6/Utils.js b/www/manager6/Utils.js
> index e61e2693..81127013 100644
> --- a/www/manager6/Utils.js
> +++ b/www/manager6/Utils.js
> @@ -207,6 +207,29 @@ Ext.define('PVE.Utils', { utilities: {
>  
>  },
>  
> +render_disk_backup_status: function(value) {
> + if (typeof value == 'undefined') {
> + return "";
> + }
> +
> + let iconCls = 'check-circle good';
> + let text = gettext('Yes');
> +
> + switch (value) {
> + case 'false':
> + iconCls = 'times-circle critical';
> + text = gettext('No');
> + break;
> + case 'not possible':
> + iconCls = 'minus-circle warning';
> + text = gettext('Not possible');
> + break;
> + default: //unknown
> + }
> +
> + return '<i class="fa fa-' + iconCls + '"></i> ' + text;

With 6.0 we can use ES6, as we dropped support for the IE11 browser. There
you can use template literals [0] for the above to reduce some syntax noise:

return `<i class="fa fa-${iconCls}"></i> ${text}`

but no strong feelings here :) There are a few other places here
which could make use of this.

[0]: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Template_literals
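
For comparison, a quick side-by-side (just a sketch; it assumes the usual
fa icon markup as reconstructed from the patch above):

// string concatenation, as in the patch:
return '<i class="fa fa-' + iconCls + '"></i> ' + text;

// ES6 template literal, same output with less quoting noise:
return `<i class="fa fa-${iconCls}"></i> ${text}`;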

> +},
> +
>  get_kvm_osinfo: function(value) {
>   var info = { base: 'Other' }; // default
>   if (value) {
> diff --git a/www/manager6/dc/Backup.js b/www/manager6/dc/Backup.js
> index 28275f01..f3f5faa1 100644
> --- a/www/manager6/dc/Backup.js
> +++ b/www/manager6/dc/Backup.js
> @@ -377,6 +377,88 @@ Ext.define('PVE.dc.BackupEdit', {
>  });
>  
>  
> +Ext.define('PVE.dc.BackupDiskTree', {
> +extend: 'Ext.tree.Panel',
> +xtype: 'pveBackDiskTree',

this xtype name is really not telling.. it should always follow the class name,
so why drop the "up" from Backup? We normally also use an alias and do not set
the xtype directly, maybe:

alias: 'widget.pveBackupDetailTree'

Also, add an empty line to separate the class meta-information definitions
(above) from its config (below).
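
I.e., roughly (a sketch only, using the alias suggested above; the rest is
as in the patch):

Ext.define('PVE.dc.BackupDiskTree', {
    extend: 'Ext.tree.Panel',
    alias: 'widget.pveBackupDetailTree',

    folderSort: true,
    rootVisible: false,

    // ... columns, reload, initComponent as in the patch ...
});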

> +stateful: true,
> +stateId: 'grid-node-backupinfo',

for what do you use the stateful here? IMO it's not a good idea to save the
state of each backup details window in the state manager's backing storage,
lots of data for no real gain (especially if you add a collapse/expand-all
tool, like suggested below)

> +folderSort: true,
> +rootVisible: false,
> +
> +columns: [
> + {
> + xtype: 'treecolumn',
> + text: gettext('Guest/Disk/Mount point'),

Maybe simply one of:
* 'Guest Volume'
* 'Guest Image'


> + dataIndex: 'id',
> + flex: 1,
> + },
> + {
> + text: gettext('Guest Type'),

We use just 'Type' in the VMIDSelector used in various places (Bulk actions,
DC->Backup Add, HA Add, ..), so I'd like to use that here too, for consistency.

> + dataIndex: 'type',
> + width: 120,
> + },
> + {
> + text: gettext('Included in backups'),

IMO using 'Backup Job' could be enough for the column header here, else it
somewhat suggests that there already are existing backups of that volume out
there - which may not be the case.

> + renderer: PVE.Utils.render_disk_backup_status,
> + dataIndex: 'status',
> + width: 150,

I'd flex this too; normally I try to flex all columns and keep only those
fixed which have just a checkbox or a boolean status, as else even translations
can be "too long" - while we cannot cater for all of them, I'd like to have
some good defaults which scale automatically for most.

Here one could _maybe_ (as in: I did not test it, but have played often enough
with those to have a rough feeling for what could be OK ^^) do:
id     -> flex: 7
type   -> flex: 1
status -> flex: 2

One reason for this is that I would like to have the reason for "not possible"
included in the column (e.g., "Not possible - Bindmount")
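
Pulling the above together, the columns could then look roughly like this
(an untested sketch, using the renamed headers suggested earlier):

columns: [
    {
        xtype: 'treecolumn',
        text: gettext('Guest Volume'),
        dataIndex: 'id',
        flex: 7,
    },
    {
        text: gettext('Type'),
        dataIndex: 'type',
        flex: 1,
    },
    {
        text: gettext('Backup Job'),
        renderer: PVE.Utils.render_disk_backup_status,
        dataIndex: 'status',
        flex: 2,
    },
],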

> + }
> +],
> +
> +reload: function() {
> + var me = this;
> + var sm = me.getSelectionModel();
> +
> + Proxmox.Utils.API2Reques

[pve-devel] [PATCH doc] backup restore: Fix syntax for bwlimit example

2019-12-20 Thread Dominic Jäger
Signed-off-by: Dominic Jäger 
---
 vzdump.adoc | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/vzdump.adoc b/vzdump.adoc
index fd7f13f..404ad09 100644
--- a/vzdump.adoc
+++ b/vzdump.adoc
@@ -200,7 +200,7 @@ time, thus we implemented the possibility to set a default bandwidth limit
 per configured storage, this can be done with:
 
 
-# pvesm set STORAGEID --bwlimit KIBs
+# pvesm set STORAGEID --bwlimit restore=KIBs
 
 
 
-- 
2.20.1



[pve-devel] [PATCH storage] cifs-plugin: Add bwlimit storage option

2019-12-20 Thread Dominic Jäger
This is already implemented in all other storage plugins.

Signed-off-by: Dominic Jäger 
---
 PVE/Storage/CIFSPlugin.pm | 1 +
 1 file changed, 1 insertion(+)

diff --git a/PVE/Storage/CIFSPlugin.pm b/PVE/Storage/CIFSPlugin.pm
index 582f99b..6115a96 100644
--- a/PVE/Storage/CIFSPlugin.pm
+++ b/PVE/Storage/CIFSPlugin.pm
@@ -127,6 +127,7 @@ sub options {
domain => { optional => 1},
smbversion => { optional => 1},
mkdir => { optional => 1 },
+   bwlimit => { optional => 1 },
 };
 }
 
-- 
2.20.1



[pve-devel] applied: [PATCH storage] cifs-plugin: Add bwlimit storage option

2019-12-20 Thread Thomas Lamprecht
On 12/20/19 at 10:12 AM, Dominic Jäger wrote:
> This is already implemented in all other storage plugins.
> 
> Signed-off-by: Dominic Jäger 
> ---
>  PVE/Storage/CIFSPlugin.pm | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/PVE/Storage/CIFSPlugin.pm b/PVE/Storage/CIFSPlugin.pm
> index 582f99b..6115a96 100644
> --- a/PVE/Storage/CIFSPlugin.pm
> +++ b/PVE/Storage/CIFSPlugin.pm
> @@ -127,6 +127,7 @@ sub options {
>   domain => { optional => 1},
>   smbversion => { optional => 1},
>   mkdir => { optional => 1 },
> + bwlimit => { optional => 1 },
>  };
>  }
>  
> 

applied, thanks!



[pve-devel] applied: [PATCH doc] backup restore: Fix syntax for bwlimit example

2019-12-20 Thread Thomas Lamprecht
On 12/20/19 at 10:09 AM, Dominic Jäger wrote:
> Signed-off-by: Dominic Jäger 
> ---
>  vzdump.adoc | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/vzdump.adoc b/vzdump.adoc
> index fd7f13f..404ad09 100644
> --- a/vzdump.adoc
> +++ b/vzdump.adoc
> @@ -200,7 +200,7 @@ time, thus we implemented the possibility to set a default bandwidth limit
>  per configured storage, this can be done with:
>  
>  
> -# pvesm set STORAGEID --bwlimit KIBs
> +# pvesm set STORAGEID --bwlimit restore=KIBs
>  
>  
>  
> 

applied, thanks!



Re: [pve-devel] [PATCH v2 qemu-server 2/3] add error handling to vmconfig_apply_pending

2019-12-20 Thread Thomas Lamprecht
On 12/13/19 at 12:41 PM, Oguz Bektas wrote:
> ---
>  PVE/API2/Qemu.pm  | 6 +++---
>  PVE/QemuServer.pm | 9 -
>  2 files changed, 11 insertions(+), 4 deletions(-)
> 
> diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
> index 3c7ef30..baa96f2 100644
> --- a/PVE/API2/Qemu.pm
> +++ b/PVE/API2/Qemu.pm
> @@ -1241,13 +1241,13 @@ my $update_vm_api  = sub {
>  
>   $conf = PVE::QemuConfig->load_config($vmid); # update/reload
>  
> + my $errors = {};
>   if ($running) {
> - my $errors = {};
> PVE::QemuServer::vmconfig_hotplug_pending($vmid, $conf, $storecfg, $modified, $errors);
> - raise_param_exc($errors) if scalar(keys %$errors);
>   } else {
> - PVE::QemuServer::vmconfig_apply_pending($vmid, $conf, $storecfg, $running);
> + PVE::QemuServer::vmconfig_apply_pending($vmid, $conf, $storecfg, $running, $errors);
>   }
> + raise_param_exc($errors) if scalar(keys %$errors);
>  
>   return;
>   };
> diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
> index 9c89be5..ed6b557 100644
> --- a/PVE/QemuServer.pm
> +++ b/PVE/QemuServer.pm
> @@ -4977,7 +4977,14 @@ sub vmconfig_delete_or_detach_drive {
>  
>  
>  sub vmconfig_apply_pending {
> -my ($vmid, $conf, $storecfg) = @_;
> +my ($vmid, $conf, $storecfg, $errors) = @_;
> +
> +my $add_apply_error = sub {
> + my ($opt, $msg) = @_;
> + my $err_msg = "unable to apply pending change $opt : $msg";
> + $errors->{$opt} = $err_msg;
> + warn $err_msg;
> +};
>  
>  # cold plug
>  
> 

... missing the actual error handling, which you squashed into 3/3; neither
patch now does one thing stand-alone.
Please move all the parts from 3/3 which do _not_ remove a write/load config
into this patch, i.e., the

> if (my $err = $@) {
>     $add_apply_error->($opt, $err);

and "fixing" eval guard scopes parts..



[pve-devel] applied: [PATCH v2 qemu-server 1/3] hotplug_pending: remove redundant write/load config calls

2019-12-20 Thread Thomas Lamprecht
On 12/13/19 at 12:41 PM, Oguz Bektas wrote:
> instead of writing the config after every change, we can do it once for
> all the changes in the end to avoid redundant i/o.
> 
> we also don't need to load_config after writing fastplug changes.
> 
> Signed-off-by: Oguz Bektas 
> ---
>  PVE/QemuServer.pm | 9 ++---
>  1 file changed, 2 insertions(+), 7 deletions(-)
> 
> diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
> index 09a1559..9c89be5 100644
> --- a/PVE/QemuServer.pm
> +++ b/PVE/QemuServer.pm
> @@ -4779,7 +4779,6 @@ sub vmconfig_hotplug_pending {
>  
>  if ($changes) {
>   PVE::QemuConfig->write_config($vmid, $conf);
> - $conf = PVE::QemuConfig->load_config($vmid); # update/reload
>  }
>  
> my $hotplug_features = parse_hotplug_features(defined($conf->{hotplug}) ? $conf->{hotplug} : '1');
> @@ -4839,11 +4838,8 @@ sub vmconfig_hotplug_pending {
>   if (my $err = $@) {
>   &$add_error($opt, $err) if $err ne "skip\n";
>   } else {
> - # save new config if hotplug was successful
>   delete $conf->{$opt};
>   PVE::QemuConfig->remove_from_pending_delete($conf, $opt);
> - PVE::QemuConfig->write_config($vmid, $conf);
> - $conf = PVE::QemuConfig->load_config($vmid); # update/reload
>   }
>  }
>  
> @@ -4931,13 +4927,12 @@ sub vmconfig_hotplug_pending {
>   if (my $err = $@) {
>   &$add_error($opt, $err) if $err ne "skip\n";
>   } else {
> - # save new config if hotplug was successful
>   $conf->{$opt} = $value;
>   delete $conf->{pending}->{$opt};
> - PVE::QemuConfig->write_config($vmid, $conf);
> - $conf = PVE::QemuConfig->load_config($vmid); # update/reload
>   }
>  }
> +
> +PVE::QemuConfig->write_config($vmid, $conf);
>  }
>  
>  sub try_deallocate_drive {
> 

applied this one, thanks!
