Re: [pve-devel] [PATCH manager] storage GUI: fix unintuitive sorting order
On 18.02.22 at 11:42, Matthias Heiserer wrote:
> The backups in the 'Backups' table in Storages are now initially
> sorted by column 'Name' ascending.
>
> Previously, they were first sorted by 'vmid' descending, then by date
> descending. This was unintuitive as 'vmid' doesn't exist as column
> in the GUI, and only 'Date' displayed a sorting arrow.
>
> Signed-off-by: Matthias Heiserer
> ---
>  www/manager6/storage/BackupView.js | 6 +-
>  1 file changed, 1 insertion(+), 5 deletions(-)
>

Please note that the order was recently intentionally changed to be like
that, see commit 58f4e6ac387561a16ec370812083d60a12dc4cfe

That said, you do have a point. One way to improve the situation might
be to add a vmid column, but we could also think about using a tree view
for backups instead.

@Thomas: Would the latter be okay for you?

Not related to your change, but some more context: PVE.storage.BackupView
is currently derived from PVE.storage.ContentView, but actually, it caused
a lot of special handling to be added to that base class. If
PVE.storage.BackupView were its own thing (which is essentially implied if
we go with the tree view approach), PVE.storage.ContentView should also
get simpler again.

There also is PVE.grid.BackupView, which is used for backups of a single
guest, and IMHO it should be merged with the other one, with a few config
options to account for the small differences in behavior.
Re: [pve-devel] [PATCH manager] storage GUI: fix unintuitive sorting order
On 21.02.22 09:32, Fabian Ebner wrote:
> On 18.02.22 at 11:42, Matthias Heiserer wrote:
>> The backups in the 'Backups' table in Storages are now initially
>> sorted by column 'Name' ascending.
>>
>> Previously, they were first sorted by 'vmid' descending, then by date
>> descending. This was unintuitive as 'vmid' doesn't exist as column
>> in the GUI, and only 'Date' displayed a sorting arrow.
>>
>> Signed-off-by: Matthias Heiserer
>> ---
>>  www/manager6/storage/BackupView.js | 6 +-
>>  1 file changed, 1 insertion(+), 5 deletions(-)
>>
> Please note that the order was recently intentionally changed to be like
> that, see commit 58f4e6ac387561a16ec370812083d60a12dc4cfe
>
> That said, you do have a point. One way to improve the situation might
> be to add a vmid column, but we could also think about using a tree view
> for backups instead.
>
> @Thomas: Would the latter be okay for you?

A tree view like PBS has would be nice in general IMO for the storage
backup content view, as with lots of VMs that's way easier to browse and
work with than the flat list.

IIRC we talked about that already in the past, in the context of the prune
group button, but it wasn't too pressing then and went off the radar.

Adding the vmid, at least as a by-default hidden column, would be the
easiest (quickest) change now, but this is IMO not so pressing that we
need to do any stop-gap measures.
[pve-devel] [PATCH pve-zsync 2/2] parse cron: handle additional whitespace
Can only happen by manually editing the cron file AFAICT, but cron does
execute such jobs, so just be a bit less restrictive when parsing.
Reported in the community forum:
https://forum.proxmox.com/threads/105254/
Signed-off-by: Fabian Ebner
---
pve-zsync | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
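
For illustration only (not part of the patch): with '\s', every extra
whitespace character yields an empty field in @arg, which parse_argv then
trips over, while '\s+' treats a run of whitespace as a single separator.
A rough Perl sketch with a hypothetical argument string:

    my @a = split('\s',  "--source  100");   # ('--source', '', '100')
    my @b = split('\s+', "--source  100");   # ('--source', '100')
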
diff --git a/pve-zsync b/pve-zsync
index 7246336..f69e126 100755
--- a/pve-zsync
+++ b/pve-zsync
@@ -308,8 +308,7 @@ sub parse_cron {
my $cfg = {};
while (my $line = shift(@text)) {
-
- my @arg = split('\s', $line);
+ my @arg = split('\s+', $line);
my $param = parse_argv(@arg);
if ($param->{source} && $param->{dest}) {
--
2.30.2
[pve-devel] [PATCH pve-zsync 1/2] rename encode_cron to parse_cron
The old name is confusing, because "encode" is usually not used when
converting from text to a data structure.
Signed-off-by: Fabian Ebner
---
pve-zsync | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/pve-zsync b/pve-zsync
index b466076..7246336 100755
--- a/pve-zsync
+++ b/pve-zsync
@@ -232,7 +232,7 @@ sub read_cron {
my $text = read_file($CRONJOBS, 0);
-return encode_cron(@{$text});
+return parse_cron(@{$text});
}
sub parse_argv {
@@ -302,7 +302,7 @@ sub add_state_to_job {
return $job;
}
-sub encode_cron {
+sub parse_cron {
my (@text) = @_;
my $cfg = {};
--
2.30.2
[pve-devel] [PATCH v3 qemu-server 1/1] fix #3424: api: snapshot delete: wait for active replication
A to-be-deleted snapshot might be actively used by replication,
resulting in a not (or only partially) removed snapshot and locked
(snapshot-delete) VM. Simply wait a few seconds for any ongoing
replication.
Signed-off-by: Fabian Ebner
---
Dependency bump for guest-common needed.
New in v3.
PVE/API2/Qemu.pm | 11 ++-
1 file changed, 10 insertions(+), 1 deletion(-)
diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 9be1caf..4fb05f7 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -4552,11 +4552,20 @@ __PACKAGE__->register_method({
my $snapname = extract_param($param, 'snapname');
- my $realcmd = sub {
+ my $do_delete = sub {
PVE::Cluster::log_msg('info', $authuser, "delete snapshot VM $vmid: $snapname");
PVE::QemuConfig->snapshot_delete($vmid, $snapname, $param->{force});
};
+ my $realcmd = sub {
+ if ($param->{force}) {
+ $do_delete->();
+ } else {
+ my $logfn = sub { print "$_[0]\n"; };
+ PVE::GuestHelpers::run_with_replication_guard($vmid, 10, $logfn, $do_delete);
+ }
+ };
+
return $rpcenv->fork_worker('qmdelsnapshot', $vmid, $authuser,
$realcmd);
}});
--
2.30.2
[pve-devel] [PATCH-SERIES guest-common/container/qemu-server] fix #3424: wait for active replication when deleting a snapshot
Avoid that an attempt to remove a snapshot that's actively used by
replication leads to a partially (or not) removed snapshot and a locked
guest.

I decided to make the checks at the call sites, because passing the log
function and timeout to snapshot_delete felt awkward, as they would only
be used for obtaining the lock.

Changes from v2:
* Also check upon manual snapshot removal, not just for vzdump.
* Add common helper.

guest-common:

Fabian Ebner (1):
  guest helpers: add run_with_replication_guard

 src/PVE/GuestHelpers.pm | 15 ++-
 1 file changed, 14 insertions(+), 1 deletion(-)

container:

Fabian Ebner (2):
  partially fix #3424: vzdump: cleanup: wait for active replication
  fix #3424: api: snapshot delete: wait for active replication

 src/PVE/API2/LXC/Snapshot.pm | 12 +++-
 src/PVE/VZDump/LXC.pm        | 11 +--
 2 files changed, 20 insertions(+), 3 deletions(-)

qemu-server:

Fabian Ebner (1):
  fix #3424: api: snapshot delete: wait for active replication

 PVE/API2/Qemu.pm | 11 ++-
 1 file changed, 10 insertions(+), 1 deletion(-)

--
2.30.2
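
For illustration only (not part of the series): the call-site pattern the
container and qemu-server patches below end up with, as a minimal sketch.
$vmid, $snapname and the 10-second timeout are taken from the
snapshot-delete API handlers; the log callback is a plain print here.

    my $do_delete = sub {
        PVE::LXC::Config->snapshot_delete($vmid, $snapname, $param->{force});
    };
    my $logfn = sub { print "$_[0]\n"; };
    # wait (up to 10s) for any active replication of $vmid, then delete
    PVE::GuestHelpers::run_with_replication_guard($vmid, 10, $logfn, $do_delete);
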
[pve-devel] [PATCH v3 container 1/2] partially fix #3424: vzdump: cleanup: wait for active replication
As replication and backup can happen at the same time, the vzdump
snapshot might be actively used by replication when backup tries
to cleanup, resulting in a not (or only partially) removed snapshot
and locked (snapshot-delete) container.
Wait up to 10 minutes for any ongoing replication. If replication
doesn't finish in time, the fact that there is no attempt to remove
the snapshot means that there's no risk for the container to end up in
a locked state. And the beginning of the next backup will force remove
the left-over snapshot, which will very likely succeed even at the
storage layer, because the replication really should be done by then
(subsequent replications shouldn't matter as they don't need to
re-transfer the vzdump snapshot).
Suggested-by: Fabian Grünbichler
Signed-off-by: Fabian Ebner
---
Dependency bump for guest-common needed.
Changes from v2:
* Use new helper.
VM backups are not affected by this, because they don't use
storage/config snapshots, but use pve-qemu's block layer.
src/PVE/VZDump/LXC.pm | 11 +--
1 file changed, 9 insertions(+), 2 deletions(-)
diff --git a/src/PVE/VZDump/LXC.pm b/src/PVE/VZDump/LXC.pm
index b7f7463..2d943a1 100644
--- a/src/PVE/VZDump/LXC.pm
+++ b/src/PVE/VZDump/LXC.pm
@@ -8,6 +8,7 @@ use File::Path;
use POSIX qw(strftime);
use PVE::Cluster qw(cfs_read_file);
+use PVE::GuestHelpers;
use PVE::INotify;
use PVE::LXC::Config;
use PVE::LXC;
@@ -476,8 +477,14 @@ sub cleanup {
}
if ($task->{cleanup}->{remove_snapshot}) {
- $self->loginfo("cleanup temporary 'vzdump' snapshot");
- PVE::LXC::Config->snapshot_delete($vmid, 'vzdump', 0);
+ my $do_delete = sub {
+ $self->loginfo("cleanup temporary 'vzdump' snapshot");
+ PVE::LXC::Config->snapshot_delete($vmid, 'vzdump', 0);
+ };
+ my $logfn = sub { $self->loginfo($_[0]); };
+
+ eval { PVE::GuestHelpers::run_with_replication_guard($vmid, 600, $logfn, $do_delete); };
+ die "snapshot 'vzdump' was not (fully) removed - $@" if $@;
}
}
--
2.30.2
[pve-devel] [PATCH v3 guest-common 1/1] guest helpers: add run_with_replication_guard
Signed-off-by: Fabian Ebner
---
New in v3.
src/PVE/GuestHelpers.pm | 15 ++-
1 file changed, 14 insertions(+), 1 deletion(-)
diff --git a/src/PVE/GuestHelpers.pm b/src/PVE/GuestHelpers.pm
index 970c460..1183819 100644
--- a/src/PVE/GuestHelpers.pm
+++ b/src/PVE/GuestHelpers.pm
@@ -3,8 +3,9 @@ package PVE::GuestHelpers;
use strict;
use warnings;
-use PVE::Tools;
+use PVE::ReplicationConfig;
use PVE::Storage;
+use PVE::Tools;
use POSIX qw(strftime);
use Scalar::Util qw(weaken);
@@ -82,6 +83,18 @@ sub guest_migration_lock {
return $res;
}
+sub run_with_replication_guard {
+my ($vmid, $timeout, $log, $func, @param) = @_;
+
+my $repl_conf = PVE::ReplicationConfig->new();
+if ($repl_conf->check_for_existing_jobs($vmid, 1)) {
+ $log->("checking/waiting for active replication..") if $log;
+ guest_migration_lock($vmid, $timeout, $func, @param);
+} else {
+ $func->(@param);
+}
+}
+
sub check_hookscript {
my ($volid, $storecfg) = @_;
--
2.30.2
[pve-devel] [PATCH v3 container 2/2] fix #3424: api: snapshot delete: wait for active replication
A to-be-deleted snapshot might be actively used by replication,
resulting in a not (or only partially) removed snapshot and locked
(snapshot-delete) container. Simply wait a few seconds for any ongoing
replication.
Signed-off-by: Fabian Ebner
---
New in v3.
src/PVE/API2/LXC/Snapshot.pm | 12 +++-
1 file changed, 11 insertions(+), 1 deletion(-)
diff --git a/src/PVE/API2/LXC/Snapshot.pm b/src/PVE/API2/LXC/Snapshot.pm
index 160e5eb..8009586 100644
--- a/src/PVE/API2/LXC/Snapshot.pm
+++ b/src/PVE/API2/LXC/Snapshot.pm
@@ -10,6 +10,7 @@ use PVE::INotify;
use PVE::Cluster qw(cfs_read_file);
use PVE::AccessControl;
use PVE::Firewall;
+use PVE::GuestHelpers;
use PVE::Storage;
use PVE::RESTHandler;
use PVE::RPCEnvironment;
@@ -198,11 +199,20 @@ __PACKAGE__->register_method({
my $snapname = extract_param($param, 'snapname');
- my $realcmd = sub {
+ my $do_delete = sub {
PVE::Cluster::log_msg('info', $authuser, "delete snapshot VM $vmid: $snapname");
PVE::LXC::Config->snapshot_delete($vmid, $snapname, $param->{force});
};
+ my $realcmd = sub {
+ if ($param->{force}) {
+ $do_delete->();
+ } else {
+ my $logfn = sub { print "$_[0]\n"; };
+ PVE::GuestHelpers::run_with_replication_guard($vmid, 10, $logfn, $do_delete);
+ }
+ };
+
return $rpcenv->fork_worker('vzdelsnapshot', $vmid, $authuser,
$realcmd);
}});
--
2.30.2
[pve-devel] applied: [PATCH pve-zsync 1/2] rename encode_cron to parse_cron
On 21.02.22 10:07, Fabian Ebner wrote:
> The old name is confusing, because "encode" is usually not used when
> converting from text to a data structure.
>
> Signed-off-by: Fabian Ebner
> ---
>  pve-zsync | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>

applied, thanks!
Re: [pve-devel] [PATCH pve-zsync 2/2] parse cron: handle additional whitespace
On 21.02.22 10:07, Fabian Ebner wrote:
> Can only happen by manually editing AFAICT, but cron does execute the
> jobs, so just be a bit less restrictive when parsing.
I mean, it won't get more broken than it already is, but can we avoid
white-space splitting on command arguments in general?
We could use the same underlying helper as PVE::Tools::split_args uses,
Text::ParseWords::shellwords (included in perl-modules directly)..
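
For illustration (a rough, untested sketch, not a concrete patch): using
that core-Perl helper here would look roughly like this and would also keep
quoted arguments containing whitespace intact:

    use Text::ParseWords qw(shellwords);

    while (my $line = shift(@text)) {
        my @arg = shellwords($line);   # whitespace split, honors shell quoting
        my $param = parse_argv(@arg);
        ...
    }
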
>
> Reported in the community forum:
> https://forum.proxmox.com/threads/105254/
>
> Signed-off-by: Fabian Ebner
> ---
> pve-zsync | 3 +--
> 1 file changed, 1 insertion(+), 2 deletions(-)
>
> diff --git a/pve-zsync b/pve-zsync
> index 7246336..f69e126 100755
> --- a/pve-zsync
> +++ b/pve-zsync
> @@ -308,8 +308,7 @@ sub parse_cron {
> my $cfg = {};
>
> while (my $line = shift(@text)) {
> -
> - my @arg = split('\s', $line);
> + my @arg = split('\s+', $line);
> my $param = parse_argv(@arg);
>
> if ($param->{source} && $param->{dest}) {
[pve-devel] applied: [PATCH v3 storage 1/2] fix #3894: cast 'size' and 'used' to integer
On 18.02.22 09:58, Mira Limbeck wrote:
> Perl's automatic conversion can lead to integers being converted to
> strings, for example by matching it in a regex.
>
> To make sure we always return an integer in the API call, add an
> explicit cast to integer.
>
> Signed-off-by: Mira Limbeck
> Reviewed-by: Fabian Ebner
> ---
> v3:
>   - fixed style nits
>   - added R-b tag
> v2:
>   - new
>
>  PVE/API2/Storage/Content.pm | 8 +---
>  1 file changed, 5 insertions(+), 3 deletions(-)
>

applied, thanks!
[pve-devel] applied: [PATCH storage v3 2/2] file_size_info: cast 'size' and 'used' to integer
On 18.02.22 09:58, Mira Limbeck wrote:
> `qemu-img info --output=json` returns the size and used values as integers
> in the JSON format, but the regex match converts them to strings.
> As we know they only contain digits, we can simply cast them back to
> integers after the regex.
>
> The API requires them to be integers.
>
> Signed-off-by: Mira Limbeck
> Reviewed-by: Fabian Ebner
> ---
> v3:
>   - changed comment to a short one
>   - added R-b tag
> v2:
>   - reworded commit subject and message
>
>  PVE/Storage/Plugin.pm | 4
>  1 file changed, 4 insertions(+)
>

applied, thanks!
Re: [pve-devel] [PATCH librados2-perl 1/6] mon_command: free outs buffer
On 18.02.22 12:38, Aaron Lauterer wrote:
> Signed-off-by: Aaron Lauterer
> ---
>
> thanks @Dominik who realized that we did not free this buffer in all
> situations.
>
note that the status string is normally only allocated in the error case,
where we freed it already, so actual impact shouldn't be that big; still
definitively more correct this way ;)
> RADOS.xs | 4
> 1 file changed, 4 insertions(+)
>
> diff --git a/RADOS.xs b/RADOS.xs
> index 7eca024..1eb0b5a 100644
> --- a/RADOS.xs
> +++ b/RADOS.xs
> @@ -145,6 +145,10 @@ CODE:
> RETVAL = newSVpv(outbuf, outbuflen);
>
> rados_buffer_free(outbuf);
> +
> +if (outs != NULL) {
fyi: I made all calls to rados_buffer_free unconditional as the code there
checks already for null-ness (not that it'd matter much, free(NULL) is a no-op)
and it makes librados' built-in tracing more complete.
> + rados_buffer_free(outs);
fyi: above had a tab in a file that uses space only indentation.
> +}
> }
> OUTPUT: RETVAL
>
applied, thanks!
Re: [pve-devel] [PATCH librados2-perl 2/6] mon_command: optionally ignore errors and return hashmap
On 18.02.22 12:38, Aaron Lauterer wrote:
> In some situations, we do not want to abort if the Ceph API returns an
> error (ret != 0). We also want to be able to access all the retured
> values from the Ceph API (return code, status, data).
>
> One such situation can be the 'osd ok-to-stop' call where Ceph will
> return a non zero code if it is not okay to stop the OSD, but will also
> have useful information in the status buffer that we can use to show the
> user.
>
> For this, let's always return a hashmap with the return code, the status
> message and the returned data from RADOS.xs::mon_command. Then decide on
> the Perl side (RADOS.pm::mon_command) if we return the scalar data as we
> used to, or the contents of the hashmap in an array.
>
> The new parameter 'noerr' is used to indicate that we want to proceed on
> non-zero return values. Omitting it gives us the old behavior to die
> with the status message.
>
> The new parameter needs to be passed to the child process, which causes
> some changes there and the resulting hashmaps gets JSON encoded to be
> passed back up to the parent process.
should be avoidable, the child can always pass through the new info and
the actual perl code can do the filtering.
>
> Signed-off-by: Aaron Lauterer
> ---
> This patch requires patch 3 of the series to not break OSD removal!
> Therefore releasing a new version of librados2-perl and pve-manager
> needs to be coordinated.
I don't like that and think it can be avoided.
>
> Please have a closer look at the C code that I wrote in RADOS.xs as I
> have never written much C and am not sure if I introduced some nasty bug
> / security issue.
>
> PVE/RADOS.pm | 37 ++---
> RADOS.xs | 26 --
> 2 files changed, 42 insertions(+), 21 deletions(-)
>
> @@ -259,19 +259,26 @@ sub cluster_stat {
> # example1: { prefix => 'get_command_descriptions'})
> # example2: { prefix => 'mon dump', format => 'json' }
> sub mon_command {
> -my ($self, $cmd) = @_;
> +my ($self, $cmd, $noerr) = @_;
> +
> +$noerr = 0 if !$noerr;
>
> $cmd->{format} = 'json' if !$cmd->{format};
>
> my $json = encode_json($cmd);
>
> -my $raw = eval { $sendcmd->($self, 'M', $json) };
> +my $ret = eval { $sendcmd->($self, 'M', $json, undef, $noerr) };
I'd rather like to avoid chaining that $noerr through everywhere. Instead,
pass all the info via the die (errors can be references to structured data
too, like PVE::Exception is), or avoid the die at the lower level completely
and map the error inside the struct too; it can then be thrown here,
depending on the $noerr parameter or what not.
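
To illustrate the second variant (a minimal sketch, under the assumption
that the lower level never dies on ret < 0 and instead always hands back a
hash like { code => ..., status => ..., data => ... }; the field names are
placeholders, not an agreed-on interface):

    my $res = $sendcmd->($self, 'M', $json);
    if ($res->{code} < 0 && !$noerr) {
        die "error with '$cmd->{prefix}': $res->{status}\n";
    }
    # otherwise fall through and hand code/status/data to the caller
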
> die "error with '$cmd->{prefix}': $@" if $@;
>
> +my $raw = decode_json($ret);
> +
> +my $data = '';
> if ($cmd->{format} && $cmd->{format} eq 'json') {
> - return length($raw) ? decode_json($raw) : undef;
> + $data = length($raw->{data}) ? decode_json($raw->{data}) : undef;
> +} else {
> + $data = $raw->{data};
> }
> -return $raw;
> +return wantarray ? ($raw->{code}, $raw->{status}, $data) : $data;
> }
>
>
> diff --git a/RADOS.xs b/RADOS.xs
> index 1eb0b5a..3d828e1 100644
> --- a/RADOS.xs
> +++ b/RADOS.xs
> @@ -98,11 +98,12 @@ CODE:
> rados_shutdown(cluster);
> }
>
> -SV *
> -pve_rados_mon_command(cluster, cmds)
> +HV *
> +pve_rados_mon_command(cluster, cmds, noerr=false)
> rados_t cluster
> AV *cmds
> -PROTOTYPE: $$
> +bool noerr
> +PROTOTYPE: $$;$
> CODE:
> {
> const char *cmd[64];
> @@ -129,7 +130,7 @@ CODE:
> &outbuf, &outbuflen,
> &outs, &outslen);
>
> -if (ret < 0) {
> +if (ret < 0 && noerr == false) {
> char msg[4096];
> if (outslen > sizeof(msg)) {
> outslen = sizeof(msg);
> @@ -142,9 +143,22 @@ CODE:
> die(msg);
> }
>
> -RETVAL = newSVpv(outbuf, outbuflen);
> +char status[(int)outslen + 1];
this is on the stack and could be too large to guarantee it always fits, but...
> +if (outslen > sizeof(status)) {
> + outslen = sizeof(status);
> +}
> +snprintf(status, sizeof(status), "%.*s\n", (int)outslen, outs);
...why not just chain outs through instead of re-allocating it and writing
it out in a relatively expensive way?
> +
> +HV * rh = (HV *)sv_2mortal((SV *)newHV());
> +
> +(void)hv_store(rh, "code", 4, newSViv(ret), 0);
> +(void)hv_store(rh, "data", 4, newSVpv(outbuf, outbuflen), 0);
> +(void)hv_store(rh, "status", 6, newSVpv(status, sizeof(status) - 1), 0);
> +RETVAL = rh;
>
> -rados_buffer_free(outbuf);
> +if (outbuf != NULL) {
> + rados_buffer_free(outbuf);
> +}
>
> if (outs != NULL) {
> rados_buffer_free(outs);
Re: [pve-devel] [PATCH v2 manager 1/3] ui: lxc/qemu: add disk reassign
sorry for the late review
some comments inline
On 11/15/21 16:02, Aaron Lauterer wrote:
For the new HDReassign component, we follow the approach of HDMove to
have one component for qemu and lxc as they are the same for most parts.
The 'Move disk/volume' button is now a SplitButton which has the
'Reassign' button as menu item. In the lxc resource panel the menu item
is defined extra so we can easily disable it, should the selected mp be
the rootfs.
In the lxc resource and qemu hardware panel we currently add a new
button to handle unused disks/volumes. The button is "switched" with the
'Move' in this case. The width of the buttons is aligned to avoid
movement of other buttons.
Once we are able to also move unused disks/volumes to other storages, we
can remove this.
Signed-off-by: Aaron Lauterer
---
Not all cbind values can be omitted AFAICT as we do not have access to
the component context when declaring our items. From what I know, we can
use regular bind, cbind (if we only need to set the value once) or set
the values manually in the initComponent.
Triggering the validation of the mountpoint integerfield when the mount
point type changes (onMpTypeChange) is necessary because it does not happen
automatically, and the other MP type (e.g. unused) could already be in use
with that number. There might be a better way that I am not aware of.
changes since v1: incorporated feedback I got off list
* use more modern approaches
* arrow functions
* autoShow
* template strings
* reduce predefined cbind values and use arrow functions in the cbind
directly in many cases
* some code style issues and cleanup
www/manager6/Makefile | 1 +
www/manager6/lxc/Resources.js | 62 +-
www/manager6/qemu/HDReassign.js | 316 ++
www/manager6/qemu/HardwareView.js | 57 +-
4 files changed, 432 insertions(+), 4 deletions(-)
create mode 100644 www/manager6/qemu/HDReassign.js
diff --git a/www/manager6/Makefile b/www/manager6/Makefile
index e6e01bd1..94a78d89 100644
--- a/www/manager6/Makefile
+++ b/www/manager6/Makefile
@@ -214,6 +214,7 @@ JSSRC=
\
qemu/HDTPM.js \
qemu/HDMove.js \
qemu/HDResize.js\
+ qemu/HDReassign.js \
qemu/HardwareView.js\
qemu/IPConfigEdit.js\
qemu/KeyboardEdit.js\
diff --git a/www/manager6/lxc/Resources.js b/www/manager6/lxc/Resources.js
index 15ee3c67..bec7cf14 100644
--- a/www/manager6/lxc/Resources.js
+++ b/www/manager6/lxc/Resources.js
@@ -156,6 +156,11 @@ Ext.define('PVE.lxc.RessourceView', {
return;
}
+ if (rec.data.key.match(/^unused/)) {
+ Ext.Msg.alert('Error', gettext('Not yet supported for unused volumes'));
+ return;
+ }
+
since we already hide/disable that button accordingly, why have that
error message at all? if the user somehow shows the button via the
browser console, the api will return an error anyway..
that way, we'd save a gettext
var win = Ext.create('PVE.window.HDMove', {
disk: rec.data.key,
nodename: nodename,
@@ -168,6 +173,24 @@ Ext.define('PVE.lxc.RessourceView', {
win.on('destroy', me.reload, me);
};
+ let run_reassign = function() {
+ let rec = me.selModel.getSelection()[0];
+ if (!rec) {
+ return;
+ }
+
+ Ext.create('PVE.window.HDReassign', {
+ disk: rec.data.key,
+ nodename: nodename,
+ autoShow: true,
+ vmid: vmid,
+ type: 'lxc',
+ listeners: {
+ destroy: () => me.reload(),
+ },
+ });
+ };
+
var edit_btn = new Proxmox.button.Button({
text: gettext('Edit'),
selModel: me.selModel,
@@ -227,12 +250,40 @@ Ext.define('PVE.lxc.RessourceView', {
},
});
- var move_btn = new Proxmox.button.Button({
+ let reassign_menuitem = new Ext.menu.Item({
+ text: gettext('Reassign volume'),
+ tooltip: gettext('Reassign volume to another VM'),
+ handler: run_reassign,
+ iconCls: 'fa fa-mail-forward',
+ reference: 'reassing_item',
+ });
+
+ let move_btn = new PVE.button.Split({
text: gettext('Move Volume'),
selModel: me.selModel,
disabled: true,
dangerous: true,
handler: run_move,
+ menu: {
+ items: [reassign_menuitem],
+ },
+ });
+
+ // needed until we can move unused volumes to other storages
+ let reassign_btn = new Proxmox.
