[pve-devel] applied: [PATCH common v2 1/1] JSONSchema: add pve-tag format

2019-10-30 Thread Thomas Lamprecht
On 10/3/19 1:50 PM, Dominik Csapak wrote:
> this will be used for vm/ct tag-lists, so that (config) management systems
> or similar can add additional information that does not reside in the
> description
> 
> putting it here, since we want to eventually have it also for
> nodes,storages,etc.
> 
> Signed-off-by: Dominik Csapak 
> ---
>  src/PVE/JSONSchema.pm | 12 
>  1 file changed, 12 insertions(+)
> 
> diff --git a/src/PVE/JSONSchema.pm b/src/PVE/JSONSchema.pm
> index db38d44..8332a19 100644
> --- a/src/PVE/JSONSchema.pm
> +++ b/src/PVE/JSONSchema.pm
> @@ -499,6 +499,18 @@ register_standard_option('bwlimit', {
>  format => $bwlimit_format,
>  });
>  
> +# used for pve-tag-list in e.g., guest configs
> +register_format('pve-tag', \&pve_verify_tag);
> +sub pve_verify_tag {
> +my ($value, $noerr) = @_;
> +
> +return $value if $value =~ m/^[a-z0-9_][a-z0-9_\-\+\.]*$/i;
> +
> +return undef if $noerr;
> +
> +die "invalid characters in tag\n";
> +}
> +
>  sub pve_parse_startup_order {
>  my ($value) = @_;
>  
> 

applied, although it would have been great if I had remembered this yesterday;
then we would have had it in the package, with the version-dependency bump
included "for free"..



Re: [pve-devel] [PATCH manager v2 1/5] gui: add tag related helpers

2019-10-30 Thread Thomas Lamprecht
On 10/3/19 1:50 PM, Dominik Csapak wrote:
> helpers to
> * generate a color from a string consistently
> * generate an html tag for a tag
> * related css classes
> 
> Signed-off-by: Dominik Csapak 
> ---
>  www/css/ext6-pve.css  | 13 +
>  www/manager6/Utils.js | 34 ++
>  2 files changed, 47 insertions(+)
> 

can we have those helpers in the widget toolkit? I could imagine that
they're useful for re-use in the future, and they are general enough.

> diff --git a/www/css/ext6-pve.css b/www/css/ext6-pve.css
> index 535f8e60..3d90de59 100644
> --- a/www/css/ext6-pve.css
> +++ b/www/css/ext6-pve.css
> @@ -631,3 +631,16 @@ table.osds td:first-of-type {
>  background-color: rgb(245, 245, 245);
>  color: #000;
>  }
> +
> +.tag {
> +border-radius: 5px;
> +padding: 2px 4px;
> +}
> +
> +.tag-light {
> +color: #fff;
> +}
> +
> +.tag-dark {
> +color: #000;
> +}
> diff --git a/www/manager6/Utils.js b/www/manager6/Utils.js
> index 6a489e7e..e9b7b75b 100644
> --- a/www/manager6/Utils.js
> +++ b/www/manager6/Utils.js
> @@ -1240,6 +1240,40 @@ Ext.define('PVE.Utils', { utilities: {
>   } else {
>   return false;
>   }
> +},
> +
> +stringToRGB: function(string) {
> + let hash = 0;
> + if (!string) {
> + return hash;
> + }
> + string += 'prox'; // give short strings more variance
> + for (let i = 0; i < string.length; i++) {
> + hash = string.charCodeAt(i) + ((hash << 5) - hash);
> + hash = hash & hash; // to int
> + }
> + return [
> + hash & 255,
> + (hash >> 8) & 255,
> + (hash >> 16) & 255,
> + 0.6 // make the colors a little lighter
> + ];
> +},
> +
> +rgbToCss: function(rgb) {
> + return `rgb(${rgb[0]}, ${rgb[1]}, ${rgb[2]}, ${rgb[3]})`;
> +},
> +
> +rgbToLuminance: function(rgb) {
> + // https://en.wikipedia.org/wiki/Relative_luminance
> + return (0.2126 * rgb[0] + 0.7152*rgb[1] + 0.0722*rgb[2])/rgb[3];
> +},
> +
> +getTagElement: function(string) {
> + let rgb = PVE.Utils.stringToRGB(string);
> + let bgcolor = PVE.Utils.rgbToCss(rgb);
> + let txtCls = PVE.Utils.rgbToLuminance(rgb) < 160 ? 'light' : 'dark';
> + return `<span class="tag tag-${txtCls}" style="background-color: ${bgcolor};">${string}</span>`;
>  }
>  },
>  
> 




Re: [pve-devel] [PATCH manager v2 4/5] gui: add tag edit windows for guests

2019-10-30 Thread Thomas Lamprecht
On 10/3/19 1:50 PM, Dominik Csapak wrote:
> so that the user can edit the tags in the gui

IMO the Options panel is not really "correct" here, it's not a guest option
at all. Rather, this fits with the "Notes" field of the Summary panel;
maybe finding a place there would be better..

> 
> Signed-off-by: Dominik Csapak 
> ---
>  www/manager6/lxc/Options.js  | 13 +
>  www/manager6/qemu/Options.js | 13 +
>  2 files changed, 26 insertions(+)
> 
> diff --git a/www/manager6/lxc/Options.js b/www/manager6/lxc/Options.js
> index 23409938..049b35e3 100644
> --- a/www/manager6/lxc/Options.js
> +++ b/www/manager6/lxc/Options.js
> @@ -141,6 +141,19 @@ Ext.define('PVE.lxc.Options', {
>   editor: Proxmox.UserName === 'root@pam' ?
>   'PVE.lxc.FeaturesEdit' : undefined
>   },
> + tags: {
> + header: gettext('Tags'),
> + defaultValue: false,
> + renderer: val => val || Proxmox.Utils.noneText,
> + editor: caps.vms['VM.Config.Options'] ? {
> + xtype: 'proxmoxWindowEdit',
> + subject: gettext('Tags'),
> + items: {
> + xtype: 'pveTagSelector',
> + name: 'tags',
> + }
> + } : undefined
> + },
>   hookscript: {
>   header: gettext('Hookscript')
>   }
> diff --git a/www/manager6/qemu/Options.js b/www/manager6/qemu/Options.js
> index e1580060..e17d8576 100644
> --- a/www/manager6/qemu/Options.js
> +++ b/www/manager6/qemu/Options.js
> @@ -281,6 +281,19 @@ Ext.define('PVE.qemu.Options', {
>   }
>   } : undefined
>   },
> + tags: {
> + header: gettext('Tags'),
> + defaultValue: false,
> + renderer: val => val || Proxmox.Utils.noneText,
> + editor: caps.vms['VM.Config.Options'] ? {
> + xtype: 'proxmoxWindowEdit',
> + subject: gettext('Tags'),
> + items: {
> + xtype: 'pveTagSelector',
> + name: 'tags',
> + }
> + } : undefined
> + },
>   hookscript: {
>   header: gettext('Hookscript')
>   }
> 




[pve-devel] applied: [PATCH manager v2 5/5] gui: remove chrome/extjs workaround

2019-10-30 Thread Thomas Lamprecht
On 10/3/19 1:50 PM, Dominik Csapak wrote:
> it seems that this is not needed anymore, at least i cannot
> see any difference with/without it here (chromium 76)
> 
> Signed-off-by: Dominik Csapak 
> ---
> this is necessary for the TagSelector to be shown properly in
> chrome/chromium, otherwise it is only a single line
> 
> if anyone tests this and sees an error in rendering, please tell me
> so i can add a workaround for the tagselector
> 

mainly to ensure you get the feedback you asked for: applied ;-)



[pve-devel] applied: [PATCH manager] aplinfo: see trusted keys as build product, always assembly

2019-10-30 Thread Thomas Lamprecht
On 10/16/19 11:05 AM, Thomas Lamprecht wrote:
> Don't track the binary trustedkeys.gpg but treat it just as a normal
> build product with the armored keys as source.
> 
> This ensures we always ship exactly those from the TRUSTED_KEYS variable,
> no more, no less.
> 
> Instead of the "gpg import+export in temporary home dir" just
> de-armor and concatenate them ourselves; that's what happens anyway.
> 
> This could even be simplified by just using base64 -d on the pubkeys,
> after the non-base64 parts are trimmed; that would remove our need for
> gpg here completely.
> 
> Thanks to Wolfgang B. for giving the idea to just do simple stuff :)
> 
> Signed-off-by: Thomas Lamprecht 
> ---
>  aplinfo/Makefile|  26 ++
>  aplinfo/trustedkeys.gpg | Bin 3602 -> 0 bytes
>  2 files changed, 6 insertions(+), 20 deletions(-)
>  delete mode 100644 aplinfo/trustedkeys.gpg
> 

applied, due to no objections and because forgetting this could have
not-so-ideal effects. Just send a patch if you think that something
is still off ;-)



Re: [pve-devel] [PATCH qemu-server 1/3] Update unused volumes in config when doing

2019-10-30 Thread Fabian Ebner

On 10/29/19 7:28 PM, Thomas Lamprecht wrote:

On 10/28/19 10:57 AM, Fabian Ebner wrote:

When doing an online migration with --targetstorage unused disks get migrated
to the specified target storage as well.
With this patch we keep track of those volumes and update the VM config with
their new locations. Unused volumes of the VM previously not present in the
config are added as well.

Signed-off-by: Fabian Ebner 
---
  PVE/QemuMigrate.pm | 16 
  1 file changed, 16 insertions(+)

diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index 65f39b6..0e9fdcf 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -465,6 +465,12 @@ sub sync_disks {
} else {
next if $rep_volumes->{$volid};
push @{$self->{volumes}}, $volid;
+
+   if (defined($override_targetsid)) {
+   my (undef, $targetvolname) = 
PVE::Storage::parse_volume_id($volid);
+   push @{$self->{online_unused_volumes}}, 
"${targetsid}:${targetvolname}";

sorry, but where do you check if this is an unused volume,
vs. a used one?

I mean, all local volumes which are either !($self->{running} && $ref eq 'config')
or $ref eq 'generated' land here - or am I overlooking something?

The check is implicit in the check for $override_targetsid,
which can only be defined if $self->{running}.
And if snapshots exist, online migration is also not possible.
So the only possibilities are $ref eq 'storage' and undef,
and I assumed that undef also means it's an unused volume.

I should've at least mentioned this, and checking explicitly is
of course better and more stable against future changes.
I'll send a v2, thanks!


+   }
+
my $opts = $self->{opts};
my $insecure = $opts->{migration_type} eq 'insecure';
my $with_snapshots = $local_volumes->{$volid}->{snapshots};
@@ -958,6 +964,16 @@ sub phase3_cleanup {
}
  }
  
+if ($self->{online_unused_volumes}) {

+   foreach my $conf_key (keys %{$conf}) {
+   delete $conf->{$conf_key} if ($conf_key =~ m/^unused\d+$/);
+   }
+   foreach my $targetvolid (@{$self->{online_unused_volumes}}) {
+   PVE::QemuConfig->add_unused_volume($conf, $targetvolid);
+   }
+   PVE::QemuConfig->write_config($vmid, $conf);
+}
+
  # transfer replication state before move config
  $self->transfer_replication_state() if $self->{replicated_volumes};
  






[pve-devel] applied: [PATCH manager] qemu: update button status without delay

2019-10-30 Thread Thomas Lamprecht
On 10/29/19 3:50 PM, Oguz Bektas wrote:
> as we noticed at the lxc side, we should use diffStore in order to
> update the button status without delay.
> 
> Co-developed-by: Dominik Csapak 
> Signed-off-by: Oguz Bektas 
> ---
>  www/manager6/qemu/HardwareView.js | 4 ++--
>  www/manager6/qemu/Options.js  | 2 +-
>  2 files changed, 3 insertions(+), 3 deletions(-)
> 

applied, with slightly changed/enhanced commit message, thanks!



Re: [pve-devel] [PATCH manager v2 4/5] gui: add tag edit windows for guests

2019-10-30 Thread Dominik Csapak

On 10/30/19 8:51 AM, Thomas Lamprecht wrote:

On 10/3/19 1:50 PM, Dominik Csapak wrote:

so that the user can edit the tags in the gui


IMO the options is not really "correct", it's not a guest option
at all. Rather this fits to the "Notes" field of the Summary panel,
maybe finding there a place would be better..


I see what you mean, but I could not figure out where else to put it...
Options also include meta settings like 'bootorder', 'name' or
'protected' which do not really influence the guest either (the name
only with cloudinit).


But yeah, I'll try to find a place near the notes.





Signed-off-by: Dominik Csapak 
---
  www/manager6/lxc/Options.js  | 13 +
  www/manager6/qemu/Options.js | 13 +
  2 files changed, 26 insertions(+)

diff --git a/www/manager6/lxc/Options.js b/www/manager6/lxc/Options.js
index 23409938..049b35e3 100644
--- a/www/manager6/lxc/Options.js
+++ b/www/manager6/lxc/Options.js
@@ -141,6 +141,19 @@ Ext.define('PVE.lxc.Options', {
editor: Proxmox.UserName === 'root@pam' ?
'PVE.lxc.FeaturesEdit' : undefined
},
+   tags: {
+   header: gettext('Tags'),
+   defaultValue: false,
+   renderer: val => val || Proxmox.Utils.noneText,
+   editor: caps.vms['VM.Config.Options'] ? {
+   xtype: 'proxmoxWindowEdit',
+   subject: gettext('Tags'),
+   items: {
+   xtype: 'pveTagSelector',
+   name: 'tags',
+   }
+   } : undefined
+   },
hookscript: {
header: gettext('Hookscript')
}
diff --git a/www/manager6/qemu/Options.js b/www/manager6/qemu/Options.js
index e1580060..e17d8576 100644
--- a/www/manager6/qemu/Options.js
+++ b/www/manager6/qemu/Options.js
@@ -281,6 +281,19 @@ Ext.define('PVE.qemu.Options', {
}
} : undefined
},
+   tags: {
+   header: gettext('Tags'),
+   defaultValue: false,
+   renderer: val => val || Proxmox.Utils.noneText,
+   editor: caps.vms['VM.Config.Options'] ? {
+   xtype: 'proxmoxWindowEdit',
+   subject: gettext('Tags'),
+   items: {
+   xtype: 'pveTagSelector',
+   name: 'tags',
+   }
+   } : undefined
+   },
hookscript: {
header: gettext('Hookscript')
}








Re: [pve-devel] [PATCH manager v2 4/5] gui: add tag edit windows for guests

2019-10-30 Thread Thomas Lamprecht
On 10/30/19 9:23 AM, Dominik Csapak wrote:
> On 10/30/19 8:51 AM, Thomas Lamprecht wrote:
>> On 10/3/19 1:50 PM, Dominik Csapak wrote:
>>> so that the user can edit the tags in the gui
>>
>> IMO the options is not really "correct", it's not a guest option
>> at all. Rather this fits to the "Notes" field of the Summary panel,
>> maybe finding there a place would be better..
> 
> i see what you mean, but i could not figure out where else to put it...
> options also include meta settings like 'bootorder','name' or 'protected' 
> which do not really influence the guest either (the name only with cloudinit)

That's not true, they all influence the guest!
name -> CT hostname
bootorder -> actively influences when a CT is started
  (or if at all autostarted)
protected -> actively prevents CT destruction

But tags is, as your other patch says, really just meta-information
just like the description/notes, so IMO this is really different than
the other things currently in options.






Re: [pve-devel] [PATCH manager v2 4/5] gui: add tag edit windows for guests

2019-10-30 Thread Dominik Csapak

On 10/30/19 9:30 AM, Thomas Lamprecht wrote:

On 10/30/19 9:23 AM, Dominik Csapak wrote:

On 10/30/19 8:51 AM, Thomas Lamprecht wrote:

On 10/3/19 1:50 PM, Dominik Csapak wrote:

so that the user can edit the tags in the gui


IMO the options is not really "correct", it's not a guest option
at all. Rather this fits to the "Notes" field of the Summary panel,
maybe finding there a place would be better..


i see what you mean, but i could not figure out where else to put it...
options also include meta settings like 'bootorder','name' or 'protected' which 
do not really influence the guest either (the name only with cloudinit)


That's not true, the all influence the guest!
name -> CT hostname
bootorder -> actively influences when a CT is started
   (or if at all autostarted)
protected -> actively prevents CT destruction

But tags is, as your other patch says, really just meta-information
just like the description/notes, so IMO this is really different than
the other things currently in options.



ok, yeah, I see what you mean now,
I thought you meant influence the guest as in 'inside'

but yeah, you're right



[pve-devel] applied: [PATCH v3 manager] gui: add revert button for lxc pending changes

2019-10-30 Thread Dominik Csapak

applied, thanks :)



Re: [pve-devel] [PATCH qemu-server] fix #2390: use fixed order for cloudinits net config

2019-10-30 Thread Stefan Reiter

On 10/30/19 7:45 AM, Thomas Lamprecht wrote:

On 10/22/19 2:48 PM, Dominik Csapak wrote:

otherwise, having multiple ipconfigX entries, can lead to different
instance-ids on different startups, which is not desired

Signed-off-by: Dominik Csapak 
---
2 issues I have with this:
* we have a cyclic dependency between PVE::QemuServer and
   PVE::QemuServer::Cloudinit, and this patch increases that
   we could fix this by refactoring all relevant code out of
   QemuServer.pm, but this seems like much work, since that code
   depends on many parts of QemuServer, not sure how to proceed here...


Stefan, the move to QemuSchema for such things could avoid that,
no? Maybe we want to defer this change until that one has gone through?



PVE::QemuServer::Cloudinit has a lot of different references to 
PVE::QemuServer, none of which are touched by my changes AFAICT (I see 
network stuff, drive stuff, "windows_version", etc...)


It's certainly possible (and would probably be a good idea) to move all 
relevant code to different modules as well, but my current series 
doesn't touch that.



* i am not sure if using a getter for MAX_NETS is the right way here,
   or if we should use constants (or something else?)...


works well for me; a constant or "our"-scoped variable isn't inherently
better, IMO.

So in general, OK, but I'll wait what Stefan thinks regarding the
cyclic dependency.



I'd say apply it now; when we (I?) get to improving that part of the
code, one reference to "get_max_nets" isn't going to make the difference.
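
To make the getter-vs-constant question concrete, the variants under discussion would look roughly like this (the actual limit value is illustrative here):

    # option 1 - getter, what the patch uses; callable from other packages:
    my $MAX_NETS = 32;
    sub get_max_nets { return $MAX_NETS; }

    # option 2 - constant, called as PVE::QemuServer::MAX_NETS():
    # use constant MAX_NETS => 32;

    # option 3 - package variable, read as $PVE::QemuServer::MAX_NETS:
    # our $MAX_NETS = 32;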




Re: [pve-devel] [PATCH qemu-server] fix #2390: use fixed order for cloudinits net config

2019-10-30 Thread Thomas Lamprecht
On 10/30/19 10:08 AM, Stefan Reiter wrote:
> On 10/30/19 7:45 AM, Thomas Lamprecht wrote:
>> On 10/22/19 2:48 PM, Dominik Csapak wrote:
>>> otherwise, having multiple ipconfigX entries, can lead to different
>>> instance-ids on different startups, which is not desired
>>>
>>> Signed-off-by: Dominik Csapak 
>>> ---
>>> 2 issues i have with this:
>>> * we have a cyclic dependency between PVE::QemuServer and
>>>    PVE::QemuServer::Cloudinit, and this patch increases that
>>>    we could fix this by refactoring all relevant code out of
>>>    QemuServer.pm, but this seems like much work, since the that code
>>>    depends on many parts of QemuServer, not sure how to proceed here...
>>
>> Stefan, the move to QemuSchema for such things could avoid that,
>> or? Maybe we want to defer that change until that went through?
>>
> 
> PVE::QemuServer::Cloudinit has a lot of different references to 
> PVE::QemuServer, none of which are touched by my changes AFAICT (I see 
> network stuff, drive stuff, "windows_version", etc...)
> 
> It's certainly possible (and would probably a good idea) to move all relevant 
> code to different modules as well, but my current series doesn't touch that.

No, I did not mean that at all. I meant whether the QEMU MAX_NETS limit
would/could/should be moved to QemuSchema, so that this further
tightening of the cyclic dependency could be avoided?

> 
>>> * i am not sure if using a getter for MAX_NETS is the right way here,
>>>    or if we should use constants (or something else?)...
>>
>> works good for me, a constant or "our" scoped variable isn't inherently
>> better, IMO.
>>
>> So in general, OK, but I'll wait what Stefan thinks regarding the
>> cyclic dependency.
>>
> 
> I'd say apply it now, when we (I?) get to improving that part of the code, 
> one reference to "get_max_nets" isn't going to make the difference.

It's always just one here, and one there :D I'll see, thanks!




[pve-devel] [PATCH qemu-server] QMPClient: add destructor

2019-10-30 Thread Wolfgang Bumiller
Explicitly close leftover connections in the destructor,
otherwise the IO::Multiplex instance can be leaked causing
the qmp connection to never be closed.

This could occur for instance when cancelling vzdump with
ctrl+c with extremely unlucky timing...

Signed-off-by: Wolfgang Bumiller 
---
 PVE/QMPClient.pm | 9 +
 1 file changed, 9 insertions(+)

diff --git a/PVE/QMPClient.pm b/PVE/QMPClient.pm
index 7d75663..96b6a24 100644
--- a/PVE/QMPClient.pm
+++ b/PVE/QMPClient.pm
@@ -509,4 +509,13 @@ sub mux_eof {
 }
 }
 
+sub DESTROY {
+my ($self) = @_;
+
+foreach my $sname (keys %{$self->{queue_info}}) {
+   my $queue_info = $self->{queue_info}->{$sname};
+   $close_connection->($self, $queue_info);
+}
+}
+
 1;
-- 
2.20.1
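
A rough sketch of the scenario this addresses (usage simplified and assumed, not taken verbatim from the callers):

    use PVE::QMPClient;

    eval {
        my $qmp = PVE::QMPClient->new();    # assumed constructor, simplified
        # ... queue commands and run them; an interrupt (e.g. ctrl+c during
        # vzdump) can abort this while queue_info still holds open connections
    };  # <- $qmp is released at the end of the block: DESTROY now walks
        #    queue_info and closes leftover connections instead of leaking
        #    the IO::Multiplex instance (and with it the QMP socket)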




[pve-devel] applied: [PATCH qemu-server] QMPClient: add destructor

2019-10-30 Thread Thomas Lamprecht
On 10/30/19 10:28 AM, Wolfgang Bumiller wrote:
> Explicitly close leftover connections in the destructor,
> otherwise the IO::Multiplex instance can be leaked causing
> the qmp connection to never be closed.
> 
> This could occur for instance when cancelling vzdump with
> ctrl+c with extremely unlucky timing...
> 

applied, but..

> Signed-off-by: Wolfgang Bumiller 
> ---
>  PVE/QMPClient.pm | 9 +
>  1 file changed, 9 insertions(+)
> 
> diff --git a/PVE/QMPClient.pm b/PVE/QMPClient.pm
> index 7d75663..96b6a24 100644
> --- a/PVE/QMPClient.pm
> +++ b/PVE/QMPClient.pm
> @@ -509,4 +509,13 @@ sub mux_eof {
>  }
>  }
>  
> +sub DESTROY {
> +my ($self) = @_;
> +
> +foreach my $sname (keys %{$self->{queue_info}}) {
> + my $queue_info = $self->{queue_info}->{$sname};
> + $close_connection->($self, $queue_info);
> +}

for my $queue_info (values %{$self->{queue_info}}) {
    $close_connection->($self, $queue_info);
}
should have done the trick too ;) or
$close_connection->($self, $_) for values %{$self->{queue_info}};
but maybe your perl is just a little bit rust-y ;-P



[pve-devel] [PATCH v2 qemu-server 3/3] Fix typo

2019-10-30 Thread Fabian Ebner
Signed-off-by: Fabian Ebner 
---
 PVE/API2/Qemu.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index b2c0b0d..a3992c4 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -3361,7 +3361,7 @@ __PACKAGE__->register_method({
if (PVE::QemuServer::check_running($vmid)) {
die "can't migrate running VM without --online\n" if 
!$param->{online};
} else {
-   warn "VM isn't running. Doing offline migration instead\n." if 
$param->{online};
+   warn "VM isn't running. Doing offline migration instead.\n" if 
$param->{online};
$param->{online} = 0;
}
 
-- 
2.20.1




[pve-devel] [PATCH v2 qemu-server 2/3] Avoid collisions of unused disks when doing online migration with --targetstorage

2019-10-30 Thread Fabian Ebner
Doing an online migration with --targetstorage and two unused disks with the
same name on different storages failed, because they would collide on the
target storage. This patch makes sure that we don't use the same name twice.

Signed-off-by: Fabian Ebner 
---
 PVE/QemuMigrate.pm | 8 ++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index 045f3b0..94c5764 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -10,6 +10,7 @@ use PVE::INotify;
 use PVE::Tools;
 use PVE::Cluster;
 use PVE::Storage;
+use PVE::Storage::Plugin;
 use PVE::QemuServer;
 use Time::HiRes qw( usleep );
 use PVE::RPCEnvironment;
@@ -466,9 +467,12 @@ sub sync_disks {
next if $rep_volumes->{$volid};
push @{$self->{volumes}}, $volid;
 
+   my $targetvolname = undef;
if (defined($override_targetsid) && $self->{running} && $ref eq 
'storage') {
-   my (undef, $targetvolname) = 
PVE::Storage::parse_volume_id($volid);
+   my $scfg = PVE::Storage::storage_config($self->{storecfg}, 
$targetsid);
+   $targetvolname = 
PVE::Storage::Plugin::get_next_vm_diskname($self->{online_unused_volumes}, 
$targetsid, $vmid, undef, $scfg, 0);
push @{$self->{online_unused_volumes}}, 
"${targetsid}:${targetvolname}";
+   $self->log('info', "$volid will become 
${targetsid}:${targetvolname} on the target node");
}
 
my $opts = $self->{opts};
@@ -480,7 +484,7 @@ sub sync_disks {
$bwlimit = $bwlimit * 1024 if defined($bwlimit);
 
PVE::Storage::storage_migrate($self->{storecfg}, $volid, 
$self->{ssh_info}, $targetsid,
- undef, undef, undef, $bwlimit, 
$insecure, $with_snapshots);
+ $targetvolname, undef, undef, 
$bwlimit, $insecure, $with_snapshots);
}
}
 };
-- 
2.20.1
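
To illustrate the collision this avoids (storage and volume names are made-up examples):

    # two different source storages can each hold a volume with the same volname:
    my ($sid_a, $volname_a) = PVE::Storage::parse_volume_id('local-lvm:vm-100-disk-1');
    my ($sid_b, $volname_b) = PVE::Storage::parse_volume_id('local-zfs:vm-100-disk-1');
    # $volname_a eq $volname_b, so copying both to one --targetstorage under their
    # source names would clash; get_next_vm_diskname() instead picks a name that is
    # still free, taking the already reserved $self->{online_unused_volumes} into account.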




[pve-devel] [PATCH v2 qemu-server 1/3] Update unused volumes in config when doing online migration with --targetstorage

2019-10-30 Thread Fabian Ebner
When doing an online migration with --targetstorage unused disks get migrated
to the specified target storage as well.
With this patch we keep track of those volumes and update the VM config with
their new locations. Unused volumes of the VM previously not present in the
config are added as well.

Signed-off-by: Fabian Ebner 
---

Changes from v1:
* Check explicitly if it is an online migration and that the volume
  is referenced by the storage rather than something else
* Patch 3 fixes another typo

 PVE/QemuMigrate.pm | 16 
 1 file changed, 16 insertions(+)

diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index 626b837..045f3b0 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -465,6 +465,12 @@ sub sync_disks {
} else {
next if $rep_volumes->{$volid};
push @{$self->{volumes}}, $volid;
+
+   if (defined($override_targetsid) && $self->{running} && $ref eq 
'storage') {
+   my (undef, $targetvolname) = 
PVE::Storage::parse_volume_id($volid);
+   push @{$self->{online_unused_volumes}}, 
"${targetsid}:${targetvolname}";
+   }
+
my $opts = $self->{opts};
my $insecure = $opts->{migration_type} eq 'insecure';
my $with_snapshots = $local_volumes->{$volid}->{snapshots};
@@ -958,6 +964,16 @@ sub phase3_cleanup {
}
 }
 
+if ($self->{online_unused_volumes}) {
+   foreach my $conf_key (keys %{$conf}) {
+   delete $conf->{$conf_key} if ($conf_key =~ m/^unused\d+$/);
+   }
+   foreach my $targetvolid (@{$self->{online_unused_volumes}}) {
+   PVE::QemuConfig->add_unused_volume($conf, $targetvolid);
+   }
+   PVE::QemuConfig->write_config($vmid, $conf);
+}
+
 # transfer replication state before move config
 $self->transfer_replication_state() if $self->{replicated_volumes};
 
-- 
2.20.1




Re: [pve-devel] [PATCH v2 qemu-server 04/11] refactor: create QemuSchema and move file/dir code

2019-10-30 Thread Fabian Grünbichler
On October 28, 2019 12:59 pm, Stefan Reiter wrote:
> Also merge the 'mkdir's from QemuServer and QemuConfig to reduce
> duplication (both modules depend on QemuSchema anyway).
> 
> nodename() is still called in multiple modules, but since it's cached by
> the INotify module it doesn't really matter.
> 
> Signed-off-by: Stefan Reiter 
> ---
> 
> QemuSchema is pretty small right now, but it could hold much more of the 
> static
> setup code from QemuServer.pm (JSONSchema formats and the like). This patch 
> only
> moves the necessary stuff for the rest of the series to not need cyclic 
> depends.
> 
> I want to refactor more into this in the future, but for now I'd like to wait
> for my CPU series, since that also touches some schema stuff.

I get where you are coming from here (and splitting the schema 
definition from all the stuff implemented in QemuServer.pm would be a 
huge win!), but IMHO the three methods you move here are not really 
schema-related - they are just VMID-to-local-path mappings? Is there
some other broader category that we could put them into (except some
awful generic 'helper' module ;))?
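
For context, the moved helpers are essentially mappings along these lines (paths shortened/illustrative, not copied from the tree):

    my $var_run_tmpdir = "/var/run/qemu-server";    # assumed location
    sub vnc_socket {
        my ($vmid) = @_;
        return "${var_run_tmpdir}/$vmid.vnc";
    }
    sub qmp_socket {
        my ($vmid, $qga) = @_;
        return "${var_run_tmpdir}/$vmid." . ($qga ? 'qga' : 'qmp');
    }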

> 
>  PVE/CLI/qm.pm |  3 ++-
>  PVE/Makefile  |  3 ++-
>  PVE/QMPClient.pm  |  5 +++--
>  PVE/QemuConfig.pm | 10 ++
>  PVE/QemuSchema.pm | 35 +++
>  PVE/QemuServer.pm | 41 -
>  6 files changed, 52 insertions(+), 45 deletions(-)
>  create mode 100644 PVE/QemuSchema.pm
> 
> diff --git a/PVE/CLI/qm.pm b/PVE/CLI/qm.pm
> index ea74ad5..44beac9 100755
> --- a/PVE/CLI/qm.pm
> +++ b/PVE/CLI/qm.pm
> @@ -21,6 +21,7 @@ use PVE::RPCEnvironment;
>  use PVE::Exception qw(raise_param_exc);
>  use PVE::Network;
>  use PVE::GuestHelpers;
> +use PVE::QemuSchema;
>  use PVE::QemuServer;
>  use PVE::QemuServer::ImportDisk;
>  use PVE::QemuServer::OVF;
> @@ -209,7 +210,7 @@ __PACKAGE__->register_method ({
>   my ($param) = @_;
>  
>   my $vmid = $param->{vmid};
> - my $vnc_socket = PVE::QemuServer::vnc_socket($vmid);
> + my $vnc_socket = PVE::QemuSchema::vnc_socket($vmid);
>  
>   if (my $ticket = $ENV{LC_PVE_TICKET}) {  # NOTE: ssh on debian only 
> pass LC_* variables
>   PVE::QemuServer::vm_mon_cmd($vmid, "change", device => 'vnc', 
> target => "unix:$vnc_socket,password");
> diff --git a/PVE/Makefile b/PVE/Makefile
> index dc17368..5ec715e 100644
> --- a/PVE/Makefile
> +++ b/PVE/Makefile
> @@ -2,7 +2,8 @@ PERLSOURCE =  \
>   QemuServer.pm   \
>   QemuMigrate.pm  \
>   QMPClient.pm\
> - QemuConfig.pm
> + QemuConfig.pm   \
> + QemuSchema.pm   \
>  
>  .PHONY: install
>  install:
> diff --git a/PVE/QMPClient.pm b/PVE/QMPClient.pm
> index 570dba2..188c6d7 100644
> --- a/PVE/QMPClient.pm
> +++ b/PVE/QMPClient.pm
> @@ -2,6 +2,7 @@ package PVE::QMPClient;
>  
>  use strict;
>  use warnings;
> +use PVE::QemuSchema;
>  use PVE::QemuServer;
>  use IO::Multiplex;
>  use POSIX qw(EINTR EAGAIN);
> @@ -58,7 +59,7 @@ my $push_cmd_to_queue = sub {
>  
>  my $qga = ($execute =~ /^guest\-+/) ? 1 : 0;
>  
> -my $sname = PVE::QemuServer::qmp_socket($vmid, $qga);
> +my $sname = PVE::QemuSchema::qmp_socket($vmid, $qga);
>  
>  $self->{queue_info}->{$sname} = { qga => $qga, vmid => $vmid, sname => 
> $sname, cmds => [] }
>  if !$self->{queue_info}->{$sname};
> @@ -186,7 +187,7 @@ my $open_connection = sub {
>  my $vmid = $queue_info->{vmid};
>  my $qga = $queue_info->{qga};
>  
> -my $sname = PVE::QemuServer::qmp_socket($vmid, $qga);
> +my $sname = PVE::QemuSchema::qmp_socket($vmid, $qga);
>  
>  $timeout = 1 if !$timeout;
>  
> diff --git a/PVE/QemuConfig.pm b/PVE/QemuConfig.pm
> index e9796a3..b63e57c 100644
> --- a/PVE/QemuConfig.pm
> +++ b/PVE/QemuConfig.pm
> @@ -5,6 +5,7 @@ use warnings;
>  
>  use PVE::AbstractConfig;
>  use PVE::INotify;
> +use PVE::QemuSchema;
>  use PVE::QemuServer;
>  use PVE::Storage;
>  use PVE::Tools;
> @@ -13,13 +14,6 @@ use base qw(PVE::AbstractConfig);
>  
>  my $nodename = PVE::INotify::nodename();
>  
> -mkdir "/etc/pve/nodes/$nodename";
> -my $confdir = "/etc/pve/nodes/$nodename/qemu-server";
> -mkdir $confdir;
> -
> -my $lock_dir = "/var/lock/qemu-server";
> -mkdir $lock_dir;
> -
>  my $MAX_UNUSED_DISKS = 256;
>  
>  # BEGIN implemented abstract methods from PVE::AbstractConfig
> @@ -37,7 +31,7 @@ sub __config_max_unused_disks {
>  sub config_file_lock {
>  my ($class, $vmid) = @_;
>  
> -return "$lock_dir/lock-$vmid.conf";
> +return "$PVE::QemuSchema::lock_dir/lock-$vmid.conf";
>  }
>  
>  sub cfs_config_path {
> diff --git a/PVE/QemuSchema.pm b/PVE/QemuSchema.pm
> new file mode 100644
> index 000..446177d
> --- /dev/null
> +++ b/PVE/QemuSchema.pm
> @@ -0,0 +1,35 @@
> +package PVE::QemuSchema;
> +
> +use strict;
> +use warnings;
> +
> +use PVE::INotify;
> +
> +my $nodename = PVE::INotify::nodename();
> +mkdir "/etc/pve/nodes/$nodename";
> +my $

Re: [pve-devel] [PATCH v2 qemu-server 05/11] refactor: Move check_running to QemuConfig

2019-10-30 Thread Fabian Grünbichler
On October 28, 2019 12:59 pm, Stefan Reiter wrote:
> Also move check_cmdline, since check_running is its only user. Changes
> all uses of check_running in QemuServer, including mocking in snapshot
> tests.
> 
> Signed-off-by: Stefan Reiter 
> ---
>  PVE/API2/Qemu.pm | 32 +++---
>  PVE/CLI/qm.pm| 13 +++---
>  PVE/QemuConfig.pm| 65 ++-
>  PVE/QemuMigrate.pm   |  3 +-
>  PVE/QemuServer.pm| 85 ++--
>  PVE/QemuServer/Agent.pm  |  3 +-
>  PVE/QemuServer/ImportDisk.pm |  3 +-
>  PVE/QemuServer/Memory.pm |  3 +-
>  PVE/VZDump/QemuServer.pm |  7 +--
>  test/snapshot-test.pm|  7 +--
>  10 files changed, 116 insertions(+), 105 deletions(-)
> 
> diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
> index b2c0b0d..9912e4d 100644
> --- a/PVE/API2/Qemu.pm
> +++ b/PVE/API2/Qemu.pm
> @@ -556,7 +556,7 @@ __PACKAGE__->register_method({
>  
>   PVE::QemuConfig->check_protection($conf, $emsg);
>  
> - die "$emsg vm is running\n" if 
> PVE::QemuServer::check_running($vmid);
> + die "$emsg vm is running\n" if 
> PVE::QemuConfig::check_running($vmid);
>  
>   my $realcmd = sub {
>   PVE::QemuServer::restore_archive($archive, $vmid, $authuser, {
> @@ -1220,7 +1220,7 @@ my $update_vm_api  = sub {
>  
>   return if !scalar(keys %{$conf->{pending}});
>  
> - my $running = PVE::QemuServer::check_running($vmid);
> + my $running = PVE::QemuConfig::check_running($vmid);
>  
>   # apply pending changes
>  
> @@ -1439,7 +1439,7 @@ __PACKAGE__->register_method({
>  
>   # early tests (repeat after locking)
>   die "VM $vmid is running - destroy failed\n"
> - if PVE::QemuServer::check_running($vmid);
> + if PVE::QemuConfig::check_running($vmid);
>  
>   my $realcmd = sub {
>   my $upid = shift;
> @@ -1447,7 +1447,7 @@ __PACKAGE__->register_method({
>   syslog('info', "destroy VM $vmid: $upid\n");
>   PVE::QemuConfig->lock_config($vmid, sub {
>   die "VM $vmid is running - destroy failed\n"
> - if (PVE::QemuServer::check_running($vmid));
> + if (PVE::QemuConfig::check_running($vmid));
>  
>   PVE::QemuServer::destroy_vm($storecfg, $vmid, 1, $skiplock);
>  
> @@ -2179,7 +2179,7 @@ __PACKAGE__->register_method({
>   raise_param_exc({ skiplock => "Only root may use this option." })
>   if $skiplock && $authuser ne 'root@pam';
>  
> - die "VM $vmid not running\n" if !PVE::QemuServer::check_running($vmid);
> + die "VM $vmid not running\n" if !PVE::QemuConfig::check_running($vmid);
>  
>   my $realcmd = sub {
>   my $upid = shift;
> @@ -2349,7 +2349,7 @@ __PACKAGE__->register_method({
>   die "VM is paused - cannot shutdown\n";
>   }
>  
> - die "VM $vmid not running\n" if !PVE::QemuServer::check_running($vmid);
> + die "VM $vmid not running\n" if !PVE::QemuConfig::check_running($vmid);
>  
>   my $realcmd = sub {
>   my $upid = shift;
> @@ -2413,7 +2413,7 @@ __PACKAGE__->register_method({
>   raise_param_exc({ skiplock => "Only root may use this option." })
>   if $skiplock && $authuser ne 'root@pam';
>  
> - die "VM $vmid not running\n" if !PVE::QemuServer::check_running($vmid);
> + die "VM $vmid not running\n" if !PVE::QemuConfig::check_running($vmid);
>  
>   die "Cannot suspend HA managed VM to disk\n"
>   if $todisk && PVE::HA::Config::vm_is_ha_managed($vmid);
> @@ -2482,7 +2482,7 @@ __PACKAGE__->register_method({
>   };
>  
>   die "VM $vmid not running\n"
> - if !$to_disk_suspended && !PVE::QemuServer::check_running($vmid, 
> $nocheck);
> + if !$to_disk_suspended && !PVE::QemuConfig::check_running($vmid, 
> $nocheck);
>  
>   my $realcmd = sub {
>   my $upid = shift;
> @@ -2592,7 +2592,7 @@ __PACKAGE__->register_method({
>  
>   my $feature = extract_param($param, 'feature');
>  
> - my $running = PVE::QemuServer::check_running($vmid);
> + my $running = PVE::QemuConfig::check_running($vmid);
>  
>   my $conf = PVE::QemuConfig->load_config($vmid);
>  
> @@ -2739,7 +2739,7 @@ __PACKAGE__->register_method({
>  
>  PVE::Cluster::check_cfs_quorum();
>  
> - my $running = PVE::QemuServer::check_running($vmid) || 0;
> + my $running = PVE::QemuConfig::check_running($vmid) || 0;
>  
>   # exclusive lock if VM is running - else shared lock is enough;
>   my $shared_lock = $running ? 0 : 1;
> @@ -2753,7 +2753,7 @@ __PACKAGE__->register_method({
>  
>   PVE::QemuConfig->check_lock($conf);
>  
> - my $verify_running = PVE::QemuServer::check_running($vmid) || 0;
> + my $verify_running = PVE::QemuConfig::check_running($vmid) || 0;
>  
>   die "unexpected state change\n" if $verify_running != $running;
>  
> @@ -3059,7 +3059,7 @@ __PACKA

Re: [pve-devel] [PATCH v2 qemu-server 06/11] refactor: create PVE::QMP for high-level QMP access

2019-10-30 Thread Fabian Grünbichler
On October 28, 2019 12:59 pm, Stefan Reiter wrote:
> ...in addition to PVE::QMPClient for low-level.

I think I'd rather name this PVE::Qemu::Monitor, since it does both HMP
and QMP ;) It probably makes sense to rename some of the methods, since
we need to change all the call sites anyway:

PVE::QemuServer::vm_mon_cmd -> PVE::Qemu::Monitor::mon_cmd
PVE::QemuServer::vm_mon_cmd_nocheck -> PVE::Qemu::Monitor::mon_cmd_nocheck
PVE::QemuServer::vm_qmp_command -> PVE::Qemu::Monitor::qmp_cmd
PVE::QemuServer::vm_human_monitor_command -> PVE::Qemu::Monitor::hmp_cmd

all of which are equal or shorter than their old variants with 
PVE::QemuServer:: prefix ;) for the first two (which are the most used) 
I'd keep the export - the latter two are not used that much?
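
(The export in question is plain Exporter usage, sketched here with the function names from the patch's 'use' lines:)

    package PVE::QMP;
    use strict;
    use warnings;
    use base 'Exporter';
    our @EXPORT_OK = qw(vm_mon_cmd vm_mon_cmd_nocheck vm_qmp_command);
    # callers: use PVE::QMP qw(vm_mon_cmd);  then call vm_mon_cmd(...) unqualified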

> 
> Also move all references (most with exports, the methods are used a lot
> and have unique enough names IMO) and fix tests.
> 
> References in __snapshot_create_vol_snapshots_hook (in QemuConfig) is an
> exception, as using the exported functions breaks tests.
> 
> Signed-off-by: Stefan Reiter 
> ---
>  PVE/API2/Qemu.pm | 13 
>  PVE/API2/Qemu/Agent.pm   |  7 ++--
>  PVE/CLI/qm.pm| 11 +++---
>  PVE/Makefile |  1 +
>  PVE/QMP.pm   | 72 
>  PVE/QemuConfig.pm| 15 +
>  PVE/QemuMigrate.pm   | 21 ++--
>  PVE/QemuServer.pm| 66 
>  PVE/QemuServer/Agent.pm  |  3 +-
>  PVE/QemuServer/Memory.pm |  9 ++---
>  PVE/VZDump/QemuServer.pm | 13 
>  test/snapshot-test.pm| 18 +++---
>  12 files changed, 142 insertions(+), 107 deletions(-)
>  create mode 100644 PVE/QMP.pm
> 
> diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
> index 9912e4d..50a0592 100644
> --- a/PVE/API2/Qemu.pm
> +++ b/PVE/API2/Qemu.pm
> @@ -21,6 +21,7 @@ use PVE::GuestHelpers;
>  use PVE::QemuConfig;
>  use PVE::QemuServer;
>  use PVE::QemuMigrate;
> +use PVE::QMP qw(vm_mon_cmd vm_qmp_command);
>  use PVE::RPCEnvironment;
>  use PVE::AccessControl;
>  use PVE::INotify;
> @@ -1835,8 +1836,8 @@ __PACKAGE__->register_method({
>   my ($ticket, undef, $remote_viewer_config) =
>   PVE::AccessControl::remote_viewer_config($authuser, $vmid, $node, 
> $proxy, $title, $port);
>  
> - PVE::QemuServer::vm_mon_cmd($vmid, "set_password", protocol => 'spice', 
> password => $ticket);
> - PVE::QemuServer::vm_mon_cmd($vmid, "expire_password", protocol => 
> 'spice', time => "+30");
> + vm_mon_cmd($vmid, "set_password", protocol => 'spice', password => 
> $ticket);
> + vm_mon_cmd($vmid, "expire_password", protocol => 'spice', time => 
> "+30");
>  
>   return $remote_viewer_config;
>  }});
> @@ -2261,7 +2262,7 @@ __PACKAGE__->register_method({
>   # checking the qmp status here to get feedback to the gui/cli/api
>   # and the status query should not take too long
>   my $qmpstatus = eval {
> - PVE::QemuServer::vm_qmp_command($vmid, { execute => "query-status" 
> }, 0);
> + vm_qmp_command($vmid, { execute => "query-status" }, 0);
>   };
>   my $err = $@ if $@;
>  
> @@ -2341,7 +2342,7 @@ __PACKAGE__->register_method({
>   my $vmid = extract_param($param, 'vmid');
>  
>   my $qmpstatus = eval {
> - PVE::QemuServer::vm_qmp_command($vmid, { execute => "query-status" 
> }, 0);
> + vm_qmp_command($vmid, { execute => "query-status" }, 0);
>   };
>   my $err = $@ if $@;
>  
> @@ -3093,7 +3094,7 @@ __PACKAGE__->register_method({
>   PVE::QemuConfig->write_config($vmid, $conf);
>  
>   if ($running && 
> PVE::QemuServer::parse_guest_agent($conf)->{fstrim_cloned_disks} && 
> PVE::QemuServer::qga_check_running($vmid)) {
> - eval { PVE::QemuServer::vm_mon_cmd($vmid, 
> "guest-fstrim"); };
> + eval { vm_mon_cmd($vmid, "guest-fstrim"); };
>   }
>  
>   eval {
> @@ -3449,7 +3450,7 @@ __PACKAGE__->register_method({
>  
>   my $res = '';
>   eval {
> - $res = PVE::QemuServer::vm_human_monitor_command($vmid, 
> $param->{command});
> + $res = PVE::QMP::vm_human_monitor_command($vmid, $param->{command});
>   };
>   $res = "ERROR: $@" if $@;
>  
> diff --git a/PVE/API2/Qemu/Agent.pm b/PVE/API2/Qemu/Agent.pm
> index 839146c..da7111e 100644
> --- a/PVE/API2/Qemu/Agent.pm
> +++ b/PVE/API2/Qemu/Agent.pm
> @@ -7,6 +7,7 @@ use PVE::RESTHandler;
>  use PVE::JSONSchema qw(get_standard_option);
>  use PVE::QemuServer;
>  use PVE::QemuServer::Agent qw(agent_available agent_cmd check_agent_error);
> +use PVE::QMP qw(vm_mon_cmd);
>  use MIME::Base64 qw(encode_base64 decode_base64);
>  use JSON;
>  
> @@ -190,7 +191,7 @@ sub register_command {
>   agent_available($vmid, $conf);
>  
>   my $cmd = $param->{command} // $command;
> - my $res = PVE::QemuServer::vm_mon_cmd($vmid, "guest-$cmd");
> + my $res = vm_mon_cmd($vmid, 

[pve-devel] applied: [PATCH manager] ui: factor out pending changes revert button

2019-10-30 Thread Thomas Lamprecht
makes no sense to have the, more or less, exact same 25 line method 5
times..

could be moved to widget TK, but that's for another time.

Signed-off-by: Thomas Lamprecht 
---
 www/manager6/Makefile |  1 +
 www/manager6/button/Revert.js | 38 +++
 www/manager6/lxc/DNS.js   | 32 ++
 www/manager6/lxc/Options.js   | 31 ++---
 www/manager6/lxc/Resources.js | 25 ++--
 www/manager6/qemu/HardwareView.js | 26 +++--
 www/manager6/qemu/Options.js  | 33 +++
 7 files changed, 51 insertions(+), 135 deletions(-)
 create mode 100644 www/manager6/button/Revert.js

diff --git a/www/manager6/Makefile b/www/manager6/Makefile
index aa460c3b..b7ffc44b 100644
--- a/www/manager6/Makefile
+++ b/www/manager6/Makefile
@@ -8,6 +8,7 @@ JSSRC=  \
menu/MenuItem.js\
menu/TemplateMenu.js\
button/ConsoleButton.js \
+   button/Revert.js\
button/Split.js \
controller/StorageEdit.js   \
qemu/CmdMenu.js \
diff --git a/www/manager6/button/Revert.js b/www/manager6/button/Revert.js
new file mode 100644
index ..3d846c6c
--- /dev/null
+++ b/www/manager6/button/Revert.js
@@ -0,0 +1,38 @@
+Ext.define('PVE.button.PendingRevert', {
+extend: 'Proxmox.button.Button',
+alias: 'widget.pvePendingRevertButton',
+
+text: gettext('Revert'),
+disabled: true,
+config: {
+   pendingGrid: null,
+   baseurl: undefined,
+},
+
+handler: function() {
+   let view = this.pendingGrid;
+
+   let rec = view.getSelectionModel().getSelection()[0];
+   if (!rec) return;
+
+   let rowdef = view.rows[rec.data.key] || {};
+   let keys = rowdef.multiKey ||  [ rec.data.key ];
+
+   Proxmox.Utils.API2Request({
+   url: this.baseurl || view.editorConfig.url,
+   waitMsgTarget: view,
+   selModel: view.getSelectionModel(),
+   method: 'PUT',
+   params: {
+   'revert': keys.join(',')
+   },
+   callback: () => view.reload(),
+   failure: (response) => Ext.Msg.alert('Error', response.htmlStatus),
+   });
+},
+
+initComponent: function() {
+   if (!this.pendingGrid) throw "revert button requires a pendingGrid";
+   this.callParent(arguments);
+},
+});
diff --git a/www/manager6/lxc/DNS.js b/www/manager6/lxc/DNS.js
index f5b85db5..bf110f09 100644
--- a/www/manager6/lxc/DNS.js
+++ b/www/manager6/lxc/DNS.js
@@ -213,38 +213,10 @@ Ext.define('PVE.lxc.DNS', {
handler: run_editor
});
 
-   var revert_btn = new Proxmox.button.Button({
-   text: gettext('Revert'),
-   disabled: true,
-   handler: function() {
-   var sm = me.getSelectionModel();
-   var rec = sm.getSelection()[0];
-   if (!rec) {
-   return;
-   }
-
-   var rowdef = me.rows[rec.data.key] || {};
-   var keys = rowdef.multiKey ||  [ rec.data.key ];
-   var revert = keys.join(',');
-
-   Proxmox.Utils.API2Request({
-   url: '/api2/extjs/' + baseurl,
-   waitMsgTarget: me,
-   method: 'PUT',
-   params: {
-   'revert': revert
-   },
-   callback: function() {
-   me.reload();
-   },
-   failure: function (response, opts) {
-   Ext.Msg.alert('Error',response.htmlStatus);
-   }
-   });
-   }
+   var revert_btn = new PVE.button.PendingRevert({
+   pendingGrid: me,
});
 
-
var set_button_status = function() {
var sm = me.getSelectionModel();
var rec = sm.getSelection()[0];
diff --git a/www/manager6/lxc/Options.js b/www/manager6/lxc/Options.js
index 8ed3a5fc..409f8b70 100644
--- a/www/manager6/lxc/Options.js
+++ b/www/manager6/lxc/Options.js
@@ -161,35 +161,8 @@ Ext.define('PVE.lxc.Options', {
handler: function() { me.run_editor(); }
});
 
-   var revert_btn = new Proxmox.button.Button({
-   text: gettext('Revert'),
-   disabled: true,
-   handler: function() {
-   var sm = me.getSelectionModel();
-   var rec = sm.getSelection()[0];
-   if (!rec) {
-   return;
-   }
-
-   var rowdef = me.rows[rec.data.key] || {};
-   var keys = rowdef.multiKey ||  [ rec.data.key ];
-   var revert = keys.join(',');
-
-   Proxmox.Utils.API2Request({
- 

Re: [pve-devel] [PATCH v2 qemu-server 07/11] refactor: extract QEMU machine related helpers to package

2019-10-30 Thread Fabian Grünbichler
On October 28, 2019 12:59 pm, Stefan Reiter wrote:
> ...PVE::QemuServer::Machine.
> 
> qemu_machine_feature_enabled is exported since it has a *lot* of users
> in PVE::QemuServer and a long enough name as it is.
> 
> Signed-off-by: Stefan Reiter 
> ---
> 
> Not sure if PVE::QemuMachine wouldn't be a better package name. I'm fine with
> both (or other suggestions), if someone has preferences.

like the others, I'd suggest

PVE::Qemu::Machine

we'd probably not need the export then either, since
qemu_machine_feature_enabled
PVE::Qemu::Machine::feature_enabled
PVE::Qemu::Machine::has_feature

is not much of a difference? see below/inline as well ;)

> 
>  PVE/QemuConfig.pm |   3 +-
>  PVE/QemuMigrate.pm|   3 +-
>  PVE/QemuServer.pm | 101 +++---
>  PVE/QemuServer/Machine.pm | 100 +
>  PVE/QemuServer/Makefile   |   1 +
>  PVE/VZDump/QemuServer.pm  |   3 +-
>  6 files changed, 115 insertions(+), 96 deletions(-)
>  create mode 100644 PVE/QemuServer/Machine.pm
> 
> diff --git a/PVE/QemuConfig.pm b/PVE/QemuConfig.pm
> index 06ace83..e7af9ad 100644
> --- a/PVE/QemuConfig.pm
> +++ b/PVE/QemuConfig.pm
> @@ -11,6 +11,7 @@ use PVE::INotify;
>  use PVE::ProcFSTools;
>  use PVE::QemuSchema;
>  use PVE::QemuServer;
> +use PVE::QemuServer::Machine;
>  use PVE::QMP qw(vm_mon_cmd vm_mon_cmd_nocheck);
>  use PVE::Storage;
>  use PVE::Tools;
> @@ -150,7 +151,7 @@ sub __snapshot_save_vmstate {
>  $name .= ".raw" if $scfg->{path}; # add filename extension for file base 
> storage
>  
>  my $statefile = PVE::Storage::vdisk_alloc($storecfg, $target, $vmid, 
> 'raw', $name, $size*1024);
> -my $runningmachine = PVE::QemuServer::get_current_qemu_machine($vmid);
> +my $runningmachine = 
> PVE::QemuServer::Machine::get_current_qemu_machine($vmid);
>  
>  if ($suspend) {
>   $conf->{vmstate} = $statefile;
> diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
> index aea7eac..9ac78f8 100644
> --- a/PVE/QemuMigrate.pm
> +++ b/PVE/QemuMigrate.pm
> @@ -12,6 +12,7 @@ use PVE::Cluster;
>  use PVE::Storage;
>  use PVE::QemuConfig;
>  use PVE::QemuServer;
> +use PVE::QemuServer::Machine;
>  use PVE::QMP qw(vm_mon_cmd vm_mon_cmd_nocheck);
>  use Time::HiRes qw( usleep );
>  use PVE::RPCEnvironment;
> @@ -217,7 +218,7 @@ sub prepare {
>   die "can't migrate running VM without --online\n" if !$online;
>   $running = $pid;
>  
> - $self->{forcemachine} = PVE::QemuServer::qemu_machine_pxe($vmid, $conf);
> + $self->{forcemachine} = 
> PVE::QemuServer::Machine::qemu_machine_pxe($vmid, $conf);
>  
>  }
>  my $loc_res = PVE::QemuServer::check_local_resources($conf, 1);
> diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
> index ed137fc..20a6380 100644
> --- a/PVE/QemuServer.pm
> +++ b/PVE/QemuServer.pm
> @@ -42,6 +42,7 @@ use PVE::QMPClient;
>  use PVE::QemuConfig;
>  use PVE::QemuSchema;
>  use PVE::QemuServer::Cloudinit;
> +use PVE::QemuServer::Machine qw(qemu_machine_feature_enabled);
>  use PVE::QemuServer::Memory;
>  use PVE::QemuServer::PCI qw(print_pci_addr print_pcie_addr 
> print_pcie_root_port);
>  use PVE::QemuServer::USB qw(parse_usb_device);
> @@ -1827,20 +1828,14 @@ sub path_is_scsi {
>  return $res;
>  }
>  
> -sub machine_type_is_q35 {
> -my ($conf) = @_;
> -
> -return $conf->{machine} && ($conf->{machine} =~ m/q35/) ? 1 : 0;
> -}
> -
>  sub print_tabletdevice_full {
>  my ($conf, $arch) = @_;
>  
> -my $q35 = machine_type_is_q35($conf);
> +my $q35 = PVE::QemuServer::Machine::machine_type_is_q35($conf);
>  
>  # we use uhci for old VMs because tablet driver was buggy in older qemu
>  my $usbbus;
> -if (machine_type_is_q35($conf) || $arch eq 'aarch64') {
> +if (PVE::QemuServer::Machine::machine_type_is_q35($conf) || $arch eq 
> 'aarch64') {
>   $usbbus = 'ehci';
>  } else {
>   $usbbus = 'uhci';
> @@ -2189,7 +2184,7 @@ sub print_vga_device {
>   $memory = ",ram_size=67108864,vram_size=33554432";
>  }
>  
> -my $q35 = machine_type_is_q35($conf);
> +my $q35 = PVE::QemuServer::Machine::machine_type_is_q35($conf);
>  my $vgaid = "vga" . ($id // '');
>  my $pciaddr;
>  
> @@ -3478,7 +3473,7 @@ sub config_to_command {
>  
>  die "detected old qemu-kvm binary ($kvmver)\n" if $vernum < 15000;
>  
> -my $q35 = machine_type_is_q35($conf);
> +my $q35 = PVE::QemuServer::Machine::machine_type_is_q35($conf);
>  my $hotplug_features = parse_hotplug_features(defined($conf->{hotplug}) 
> ? $conf->{hotplug} : '1');
>  my $use_old_bios_files = undef;
>  ($use_old_bios_files, $machine_type) = 
> qemu_use_old_bios_files($machine_type);
> @@ -4112,7 +4107,7 @@ sub vm_devices_list {
>  sub vm_deviceplug {
>  my ($storecfg, $conf, $vmid, $deviceid, $device, $arch, $machine_type) = 
> @_;
>  
> -my $q35 = machine_type_is_q35($conf);
> +my $q35 = PVE::QemuServer::Machine::machine_type_is_q35($conf);
>  
>  my $d

Re: [pve-devel] [PATCH v3 manager] gui: add revert button for lxc pending changes

2019-10-30 Thread Thomas Lamprecht
On 10/29/19 3:38 PM, Oguz Bektas wrote:
> adds the pending button for Resources, Options and DNS screens.
> 
> Co-developed-by: Dominik Csapak 
> Signed-off-by: Oguz Bektas 
> ---
> 
> v2->v3:
> * use getStore() instead of rstore while checking for datachanged, in
> light of Dominik's debugging (thanks!)
> * add missing startUpdate to DNS.js
> * remove FIXME comment about rstore refresh
> 
> 
>  www/manager6/lxc/DNS.js   | 48 +++--
>  www/manager6/lxc/Options.js   | 58 +--
>  www/manager6/lxc/Resources.js | 31 ++-
>  3 files changed, 132 insertions(+), 5 deletions(-)

adding the revert_btn, a 26-line method, three times in a single patch,
while the exact same method already exists two times in the code base,
should really ring some bells..

I'd have liked to get this cleaned up before it got into the tree,
but as it's already pushed out I made a followup (see pve-devel).

> 
> diff --git a/www/manager6/lxc/DNS.js b/www/manager6/lxc/DNS.js
> index 89e2c694..8bde72f9 100644
> --- a/www/manager6/lxc/DNS.js
> +++ b/www/manager6/lxc/DNS.js
> @@ -213,6 +213,38 @@ Ext.define('PVE.lxc.DNS', {
>   handler: run_editor
>   });
>  
> + var revert_btn = new Proxmox.button.Button({
> + text: gettext('Revert'),
> + disabled: true,
> + handler: function() {
> + var sm = me.getSelectionModel();
> + var rec = sm.getSelection()[0];
> + if (!rec) {
> + return;
> + }
> +
> + var rowdef = me.rows[rec.data.key] || {};
> + var keys = rowdef.multiKey ||  [ rec.data.key ];
> + var revert = keys.join(',');
> +
> + Proxmox.Utils.API2Request({
> + url: '/api2/extjs/' + baseurl,
> + waitMsgTarget: me,
> + method: 'PUT',
> + params: {
> + 'revert': revert
> + },
> + callback: function() {
> + me.reload();
> + },
> + failure: function (response, opts) {
> + Ext.Msg.alert('Error',response.htmlStatus);
> + }
> + });
> + }
> + });
> +
> +
>   var set_button_status = function() {
>   var sm = me.getSelectionModel();
>   var rec = sm.getSelection()[0];
> @@ -221,16 +253,20 @@ Ext.define('PVE.lxc.DNS', {
>   edit_btn.disable();
>   return;
>   }
> - var rowdef = rows[rec.data.key];
> + var key = rec.data.key;
> + var rowdef = rows[key];
> + var pending = rec.data['delete'] || me.hasPendingChanges(key);
>   edit_btn.setDisabled(!rowdef.editor);
> + revert_btn.setDisabled(!pending);
>   };
>  
>   Ext.apply(me, {
>   url: "/api2/json/nodes/" + nodename + "/lxc/" + vmid + "/pending",
>   selModel: sm,
>   cwidth1: 150,
> + interval: 5000,
>   run_editor: run_editor,
> - tbar: [ edit_btn ],
> + tbar: [ edit_btn, revert_btn ],
>   rows: rows,
>   editorConfig: {
>   url: "/api2/extjs/" + baseurl
> @@ -243,5 +279,13 @@ Ext.define('PVE.lxc.DNS', {
>   });
>  
>   me.callParent();
> +
> + me.on('activate', me.rstore.startUpdate);
> + me.on('destroy', me.rstore.stopUpdate);
> + me.on('deactivate', me.rstore.stopUpdate);
> +
> + me.mon(me.getStore(), 'datachanged', function() {
> + set_button_status();
> + });
>  }
>  });
> diff --git a/www/manager6/lxc/Options.js b/www/manager6/lxc/Options.js
> index 5e1e0222..8ed3a5fc 100644
> --- a/www/manager6/lxc/Options.js
> +++ b/www/manager6/lxc/Options.js
> @@ -161,17 +161,67 @@ Ext.define('PVE.lxc.Options', {
>   handler: function() { me.run_editor(); }
>   });
>  
> + var revert_btn = new Proxmox.button.Button({
> + text: gettext('Revert'),
> + disabled: true,
> + handler: function() {
> + var sm = me.getSelectionModel();
> + var rec = sm.getSelection()[0];
> + if (!rec) {
> + return;
> + }
> +
> + var rowdef = me.rows[rec.data.key] || {};
> + var keys = rowdef.multiKey ||  [ rec.data.key ];
> + var revert = keys.join(',');
> +
> + Proxmox.Utils.API2Request({
> + url: '/api2/extjs/' + baseurl,
> + waitMsgTarget: me,
> + method: 'PUT',
> + params: {
> + 'revert': revert
> + },
> + callback: function() {
> + me.reload();
> + },
> + failure: function (response, opts) {
> + Ext.Msg.alert('Error',response.htmlStatus);
> + }
> + });
> + }
> + });
> +
> + var s

Re: [pve-devel] [PATCH v2 00/11] Refactor QemuServer to avoid dependency cycles

2019-10-30 Thread Fabian Grünbichler
On October 28, 2019 12:59 pm, Stefan Reiter wrote:
> First 3 patches are independent refactorings around get_host_arch.
> 
> Rest of the series refactors QemuServer and creates three new packages:
> * 'PVE::QemuSchema' for schema related code and common directory creation
> * 'PVE::QMP' for higher-level QMP functions
> * 'PVE::QemuServer::Machine' for QEMU machine-type related helpers
> 
> This refactoring came along because qemu_machine_feature_enabled needs to be
> used in 'PVE::QemuServer::CPUConfig', a new package that will be introduced 
> with
> my custom CPU series [0]. This would currently require dependency cycles, but 
> by
> extracting the code in this series and splitting it up into multiple helper
> modules, this can be avoided.
> 
> Care was taken not to introduce new dependency cycles, though this required to
> move the 'check_running' function to QemuConfig.pm, where it doesn't *quite* 
> fit
> IMO, but I also didn't want to create a new module just for this one function.
> Open for ideas ofc.

thanks for this big chunk of work! some comments on individual patches, 
and one high-level remark that we already discussed off-list:

I'd use this opportunity to rename PVE/QemuServer to PVE/Qemu, and move 
all/most of the split out files into PVE/Qemu as well.

we could think about moving QemuConfig and QemuMigrate there as well 
(like we have in pve-container), but that would mean touching even more 
files, including pve-firewall (which uses PVE::QemuConfig).
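
Concretely, the dependency cycle the series avoids looks roughly like this (module names from the cover letter; the CPUConfig module is the upcoming one, so this is only illustrative):

    # without the split, the new module would close a use-cycle:
    #   PVE/QemuServer.pm:            use PVE::QemuServer::CPUConfig;
    #   PVE/QemuServer/CPUConfig.pm:  use PVE::QemuServer;   # for qemu_machine_feature_enabled
    #
    # with the split, the shared helper lives in a leaf module both can pull in:
    #   PVE/QemuServer.pm:            use PVE::QemuServer::Machine;
    #   PVE/QemuServer/CPUConfig.pm:  use PVE::QemuServer::Machine;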

> 
> v2:
> * Actually test changes correctly - sorry
> * Fix a few package 'use's I missed to move to new packages
> * Fix tests for pve-manager
> * Fix missing '=' in pve-container
> 
> [0] https://pve.proxmox.com/pipermail/pve-devel/2019-October/039608.html
> 
> (@Thomas: I rebased the series just before sending to work with your cleanups)
> 
> 
> common: Stefan Reiter (1):
>   Make get_host_arch return raw uname entry
> 
>  src/PVE/Tools.pm | 17 +
>  1 file changed, 5 insertions(+), 12 deletions(-)
> 
> container: Stefan Reiter (1):
>   Move LXC-specific architecture translation here
> 
>  src/PVE/LXC/Setup.pm | 9 +
>  1 file changed, 9 insertions(+)
> 
> qemu-server: Stefan Reiter (5):
>   Use get_host_arch from PVE::Tools
>   refactor: create QemuSchema and move file/dir code
>   refactor: Move check_running to QemuConfig
>   refactor: create PVE::QMP for high-level QMP access
>   refactor: extract QEMU machine related helpers to package
> 
>  PVE/API2/Qemu.pm |  45 +++---
>  PVE/API2/Qemu/Agent.pm   |   7 +-
>  PVE/CLI/qm.pm|  27 ++--
>  PVE/Makefile |   4 +-
>  PVE/QMP.pm   |  72 +
>  PVE/QMPClient.pm |   5 +-
>  PVE/QemuConfig.pm|  93 +--
>  PVE/QemuMigrate.pm   |  27 ++--
>  PVE/QemuSchema.pm|  35 +
>  PVE/QemuServer.pm| 295 ---
>  PVE/QemuServer/Agent.pm  |   6 +-
>  PVE/QemuServer/ImportDisk.pm |   3 +-
>  PVE/QemuServer/Machine.pm| 100 
>  PVE/QemuServer/Makefile  |   1 +
>  PVE/QemuServer/Memory.pm |  12 +-
>  PVE/VZDump/QemuServer.pm |  23 +--
>  test/snapshot-test.pm|  21 ++-
>  17 files changed, 421 insertions(+), 355 deletions(-)
>  create mode 100644 PVE/QMP.pm
>  create mode 100644 PVE/QemuSchema.pm
>  create mode 100644 PVE/QemuServer/Machine.pm
> 
> ha-manager: Stefan Reiter (2):
>   refactor: check_running was moved to PVE::QemuConfig
>   refactor: vm_qmp_command was moved to PVE::QMP
> 
>  src/PVE/HA/Resources/PVEVM.pm | 6 --
>  1 file changed, 4 insertions(+), 2 deletions(-)
> 
> manager: Stefan Reiter (2):
>   refactor: check_running was moved to QemuConfig
>   refactor: vm_mon_cmd was moved to PVE::QMP
> 
>  PVE/API2/Nodes.pm  | 6 +++---
>  PVE/Service/pvestatd.pm| 3 ++-
>  test/ReplicationTestEnv.pm | 2 +-
>  3 files changed, 6 insertions(+), 5 deletions(-)
> 
> -- 
> 2.20.1
> 



Re: [pve-devel] [PATCH v2 00/11] Refactor QemuServer to avoid dependency cycles

2019-10-30 Thread Thomas Lamprecht
On 10/30/19 11:49 AM, Fabian Grünbichler wrote:
> On October 28, 2019 12:59 pm, Stefan Reiter wrote:
>> First 3 patches are independant refactorings around get_host_arch.
>>
>> Rest of the series refactors QemuServer and creates three new packages:
>> * 'PVE::QemuSchema' for schema related code and common directory creation
>> * 'PVE::QMP' for higher-level QMP functions
>> * 'PVE::QemuServer::Machine' for QEMU machine-type related helpers
>>
>> This refactoring came along because qemu_machine_feature_enabled needs to be
>> used in 'PVE::QemuServer::CPUConfig', a new package that will be introduced 
>> with
>> my custom CPU series [0]. This would currently require dependency cycles, 
>> but by
>> extracting the code in this series and splitting it up into multiple helper
>> modules, this can be avoided.
>>
>> Care was taken not to introduce new dependecy cycles, though this required to
>> move the 'check_running' function to QemuConfig.pm, where it doesn't *quite* 
>> fit
>> IMO, but I also didn't want to create a new module just for this one 
>> function.
>> Open for ideas ofc.
> 
> thanks for this big chunk of work! some comments on individual patches, 
> and one high-level remark that we already discussed off-list:
> 
> I'd use this opportunity to rename PVE/QemuServer to PVE/Qemu, and move 
> all/most of the split out files into PVE/Qemu as well.
We talked about this off-list, and as said there, this is something I
thought of too, so ACK. But I'd rather not do it over the mailing list
(@Stefan); that's something I, or you (@Fabian), can do directly in
git and push out. I also see it as unrelated to the refactoring, so it's not
something you should pay attention to in a v2, Stefan.

> 
> we could think about moving QemuConfig and QemuMigrate there as well 
> (like we have in pve-container), but that would mean touching even more 
> files, including pve-firewall (which uses PVE::QemuConfig).
> 

If we start doing this, then let's do it fully; one headache more or less is
rather irrelevant - IMO.

