[pve-devel] applied: [PATCH http-server] Revert "tls: make dh to openssl 1.1 compatible"

2019-10-28 Thread Fabian Grünbichler
thanks for noticing!

On October 25, 2019 5:34 pm, Thomas Lamprecht wrote:
> The libanyevent-perl version 7.140-3 included a fix for this.
> It migrated to the then still testing (buster was not yet released)
> on 07.04.2019, and so we can safely revert this workaround again
> here.
> 
> Albeit this was fixed since Buster was officially released, still
> bump the version dependency to libanyevent-perl in debian/control.
> 
> A future libanyevent-perl will use "ffdhe3072" for DH; another good
> reason to revert this, to not keep hardcoded parameters with possible
> (future) security implications here.
> 
> [0]: 
> https://tracker.debian.org/news/1037514/libanyevent-perl-7140-3-migrated-to-testing/
> 
> This reverts commit ea574439f76bb3914b8b8c0be8e40ee826c95afc.
> 
> Signed-off-by: Thomas Lamprecht 
> ---
>  PVE/APIServer/AnyEvent.pm | 3 ---
>  debian/control| 2 +-
>  2 files changed, 1 insertion(+), 4 deletions(-)
> 
> diff --git a/PVE/APIServer/AnyEvent.pm b/PVE/APIServer/AnyEvent.pm
> index 9aba27d..539a156 100644
> --- a/PVE/APIServer/AnyEvent.pm
> +++ b/PVE/APIServer/AnyEvent.pm
> @@ -591,9 +591,6 @@ sub proxy_request {
>   sslv2 => 0,
>   sslv3 => 0,
>   verify => 1,
> - # be compatible with openssl 1.1, fix for debian bug #923615
> - # remove once libanyeven-perl with this fix transitions to buster
> - dh => 'schmorp2048',
>   verify_cb => sub {
>   my (undef, undef, undef, $depth, undef, undef, $cert) = @_;
>   # we don't care about intermediate or root certificates
> diff --git a/debian/control b/debian/control
> index a784039..b1409e4 100644
> --- a/debian/control
> +++ b/debian/control
> @@ -11,7 +11,7 @@ Homepage: https://www.proxmox.com
>  Package: libpve-http-server-perl
>  Architecture: all
>  Depends: libanyevent-http-perl,
> - libanyevent-perl,
> + libanyevent-perl (>= 7.140-3),
>   libcrypt-ssleay-perl,
>   libhtml-parser-perl,
>   libhttp-date-perl,
> -- 
> 2.20.1
> 
> 



[pve-devel] [PATCH qemu-server 1/3] Update unused volumes in config when doing

2019-10-28 Thread Fabian Ebner
When doing an online migration with --targetstorage, unused disks get migrated
to the specified target storage as well.
With this patch we keep track of those volumes and update the VM config with
their new locations. Unused volumes of the VM previously not present in the
config are added as well.
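
For illustration, a hedged sketch of the resulting config change (volume
names are made up):

    # before migration, on the source node:
    #   unused0: sourcestore:vm-100-disk-1
    # after an online migration with --targetstorage targetstore, the
    # entry is rewritten to point at the new location:
    #   unused0: targetstore:vm-100-disk-1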

Signed-off-by: Fabian Ebner 
---
 PVE/QemuMigrate.pm | 16 
 1 file changed, 16 insertions(+)

diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index 65f39b6..0e9fdcf 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -465,6 +465,12 @@ sub sync_disks {
} else {
next if $rep_volumes->{$volid};
push @{$self->{volumes}}, $volid;
+
+   if (defined($override_targetsid)) {
+   my (undef, $targetvolname) = PVE::Storage::parse_volume_id($volid);
+   push @{$self->{online_unused_volumes}}, "${targetsid}:${targetvolname}";
+   }
+
my $opts = $self->{opts};
my $insecure = $opts->{migration_type} eq 'insecure';
my $with_snapshots = $local_volumes->{$volid}->{snapshots};
@@ -958,6 +964,16 @@ sub phase3_cleanup {
}
 }
 
+if ($self->{online_unused_volumes}) {
+   foreach my $conf_key (keys %{$conf}) {
+   delete $conf->{$conf_key} if ($conf_key =~ m/^unused\d+$/);
+   }
+   foreach my $targetvolid (@{$self->{online_unused_volumes}}) {
+   PVE::QemuConfig->add_unused_volume($conf, $targetvolid);
+   }
+   PVE::QemuConfig->write_config($vmid, $conf);
+}
+
 # transfer replication state before move config
 $self->transfer_replication_state() if $self->{replicated_volumes};
 
-- 
2.20.1




[pve-devel] [PATCH 2/3 qemu-server] Avoid collisions of unused disks when doing

2019-10-28 Thread Fabian Ebner
Doing an online migration with --targetstorage and two unused disks with the
same name on different storages failed, because they would collide on the
target storage. This patch makes sure that we don't use the same name twice.
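
A hedged sketch of the collision this prevents (storage and volume names
are hypothetical):

    # two local unused disks sharing a volume name:
    #   unused0: storeA:vm-100-disk-2
    #   unused1: storeB:vm-100-disk-2
    # without this patch, both would be copied to targetstore:vm-100-disk-2;
    # with get_next_vm_diskname() the second one gets a free name instead,
    # e.g. targetstore:vm-100-disk-3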

Signed-off-by: Fabian Ebner 
---
 PVE/QemuMigrate.pm | 8 ++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index 0e9fdcf..a01f0ca 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -10,6 +10,7 @@ use PVE::INotify;
 use PVE::Tools;
 use PVE::Cluster;
 use PVE::Storage;
+use PVE::Storage::Plugin;
 use PVE::QemuServer;
 use Time::HiRes qw( usleep );
 use PVE::RPCEnvironment;
@@ -466,9 +467,12 @@ sub sync_disks {
next if $rep_volumes->{$volid};
push @{$self->{volumes}}, $volid;
 
+   my $targetvolname = undef;
if (defined($override_targetsid)) {
-   my (undef, $targetvolname) = PVE::Storage::parse_volume_id($volid);
+   my $scfg = PVE::Storage::storage_config($self->{storecfg}, $targetsid);
+   $targetvolname = PVE::Storage::Plugin::get_next_vm_diskname($self->{online_unused_volumes}, $targetsid, $vmid, undef, $scfg, 0);
    push @{$self->{online_unused_volumes}}, "${targetsid}:${targetvolname}";
+   $self->log('info', "$volid will become ${targetsid}:${targetvolname} on the target node");
}
 
my $opts = $self->{opts};
@@ -480,7 +484,7 @@ sub sync_disks {
$bwlimit = $bwlimit * 1024 if defined($bwlimit);
 
PVE::Storage::storage_migrate($self->{storecfg}, $volid, $self->{ssh_info}, $targetsid,
- undef, undef, undef, $bwlimit, $insecure, $with_snapshots);
+ $targetvolname, undef, undef, $bwlimit, $insecure, $with_snapshots);
}
}
 };
-- 
2.20.1




[pve-devel] [PATCH 3/3 qemu-server] Fix typo

2019-10-28 Thread Fabian Ebner
Signed-off-by: Fabian Ebner 
---
 PVE/QemuMigrate.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index a01f0ca..448f584 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -958,7 +958,7 @@ sub phase3_cleanup {
if (my $err = $@) {
eval { PVE::QemuServer::qemu_blockjobs_cancel($vmid, $self->{storage_migration_jobs}) };
eval { PVE::QemuMigrate::cleanup_remotedisks($self) };
-   die "Failed to completed storage migration\n";
+   die "Failed to complete storage migration\n";
} else {
foreach my $target_drive (keys %{$self->{target_drive}}) {
my $drive = PVE::QemuServer::parse_drive($target_drive, $self->{target_drive}->{$target_drive}->{volid});
-- 
2.20.1




Re: [pve-devel] [PATCH qemu-server 1/3] Update unused volumes in config when doing

2019-10-28 Thread Fabian Ebner

On 10/28/19 10:57 AM, Fabian Ebner wrote:

When doing an online migration with --targetstorage unused disks get migrated
to the specified target storage as well.
With this patch we keep track of those volumes and update the VM config with
their new locations. Unused volumes of the VM previously not present in the
config are added as well.

Signed-off-by: Fabian Ebner 
---
  PVE/QemuMigrate.pm | 16 
  1 file changed, 16 insertions(+)

diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index 65f39b6..0e9fdcf 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -465,6 +465,12 @@ sub sync_disks {
} else {
next if $rep_volumes->{$volid};
push @{$self->{volumes}}, $volid;
+
+   if (defined($override_targetsid)) {
+   my (undef, $targetvolname) = PVE::Storage::parse_volume_id($volid);
+   push @{$self->{online_unused_volumes}}, "${targetsid}:${targetvolname}";
+   }
+
my $opts = $self->{opts};
my $insecure = $opts->{migration_type} eq 'insecure';
my $with_snapshots = $local_volumes->{$volid}->{snapshots};
@@ -958,6 +964,16 @@ sub phase3_cleanup {
}
  }
  
+if ($self->{online_unused_volumes}) {
+   foreach my $conf_key (keys %{$conf}) {
+   delete $conf->{$conf_key} if ($conf_key =~ m/^unused\d+$/);
+   }
+   foreach my $targetvolid (@{$self->{online_unused_volumes}}) {
+   PVE::QemuConfig->add_unused_volume($conf, $targetvolid);
+   }
+   PVE::QemuConfig->write_config($vmid, $conf);
+}
+
  # transfer replication state before move config
  $self->transfer_replication_state() if $self->{replicated_volumes};
  


The subject line for patches 1+2 should end with "when doing online
migration with --targetstorage". Sorry for the mistake.



[pve-devel] applied: [PATCH 3/3 qemu-server] Fix typo

2019-10-28 Thread Thomas Lamprecht
On 10/28/19 10:57 AM, Fabian Ebner wrote:
> Signed-off-by: Fabian Ebner 
> ---
>  PVE/QemuMigrate.pm | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
> index a01f0ca..448f584 100644
> --- a/PVE/QemuMigrate.pm
> +++ b/PVE/QemuMigrate.pm
> @@ -958,7 +958,7 @@ sub phase3_cleanup {
>   if (my $err = $@) {
>   eval { PVE::QemuServer::qemu_blockjobs_cancel($vmid, $self->{storage_migration_jobs}) };
>   eval { PVE::QemuMigrate::cleanup_remotedisks($self) };
> - die "Failed to completed storage migration\n";
> + die "Failed to complete storage migration\n";
>   } else {
>   foreach my $target_drive (keys %{$self->{target_drive}}) {
>   my $drive = PVE::QemuServer::parse_drive($target_drive, $self->{target_drive}->{$target_drive}->{volid});
> 

applied, thanks!



[pve-devel] applied: [PATCH container] add 'lock' as a fastplug option

2019-10-28 Thread Thomas Lamprecht
On 10/24/19 3:58 PM, Oguz Bektas wrote:
> lock option needs to be fastpluggable when modifying with 'pct set'.
> otherwise it registers as a pending change.
> 
> Signed-off-by: Oguz Bektas 
> ---
>  src/PVE/LXC/Config.pm | 1 +
>  1 file changed, 1 insertion(+)
> 

applied, thanks!

As said, IMO allowing the lock to be set via the API still feels a bit
wrong, but that's orthogonal to your patch, which just improves how
that is handled. For a future PVE 7.0 we should maybe stop exposing
the lock property as writable in the API, for both QEMU and Container.
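
For reference, a hedged illustration of the behavioral difference
(hypothetical session):

    # pct set 100 --lock backup
    # without the fastplug entry: queued in the pending section
    # ([pve:pending]) until changes are applied
    # with it: written to the config immediately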



[pve-devel] [PATCH manager 11/11] refactor: vm_mon_cmd was moved to PVE::QMP

2019-10-28 Thread Stefan Reiter
Signed-off-by: Stefan Reiter 
---
 PVE/Service/pvestatd.pm | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/PVE/Service/pvestatd.pm b/PVE/Service/pvestatd.pm
index bad1b73d..d8c86886 100755
--- a/PVE/Service/pvestatd.pm
+++ b/PVE/Service/pvestatd.pm
@@ -18,6 +18,7 @@ use PVE::Network;
 use PVE::Cluster qw(cfs_read_file);
 use PVE::Storage;
 use PVE::QemuServer;
+use PVE::QMP;
 use PVE::LXC;
 use PVE::LXC::Config;
 use PVE::RPCEnvironment;
@@ -180,7 +181,7 @@ sub auto_balloning {
if ($absdiff > 0) {
&$log("BALLOON $vmid to $res->{$vmid} ($diff)\n");
eval {
-   PVE::QemuServer::vm_mon_cmd($vmid, "balloon", 
+   PVE::QMP::vm_mon_cmd($vmid, "balloon", 
value => int($res->{$vmid}));
};
warn $@ if $@;
-- 
2.20.1




[pve-devel] [PATCH common 01/11] Make get_host_arch return raw uname entry

2019-10-28 Thread Stefan Reiter
The current version had only one user in LXC, so move the LXC-specific
code there to reuse this in QemuServer.

Also cache the result, since the host's architecture can't change during runtime.
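
A hedged usage sketch of the helper after this change:

    # returns the raw uname machine field now, cached after the first call
    my $arch = PVE::Tools::get_host_arch();    # e.g. 'x86_64' or 'aarch64'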

Signed-off-by: Stefan Reiter 
---
 src/PVE/Tools.pm | 17 +
 1 file changed, 5 insertions(+), 12 deletions(-)

diff --git a/src/PVE/Tools.pm b/src/PVE/Tools.pm
index 550da09..c9d37ec 100644
--- a/src/PVE/Tools.pm
+++ b/src/PVE/Tools.pm
@@ -47,6 +47,7 @@ safe_print
 trim
 extract_param
 file_copy
+get_host_arch
 O_PATH
 O_TMPFILE
 );
@@ -1630,18 +1631,10 @@ sub readline_nointr {
 return $line;
 }
 
-sub get_host_arch {
-
-my @uname = POSIX::uname();
-my $machine = $uname[4];
-
-if ($machine eq 'x86_64') {
-   return 'amd64';
-} elsif ($machine eq 'aarch64') {
-   return 'arm64';
-} else {
-   die "unsupported host architecture '$machine'\n";
-}
+my $host_arch;
+sub get_host_arch() {
+$host_arch = (POSIX::uname())[4] if !$host_arch;
+return $host_arch;
 }
 
 # Devices are: [ (12 bits minor) (12 bits major) (8 bits minor) ]
-- 
2.20.1




[pve-devel] [PATCH ha-manager 08/11] refactor: check_running was moved to PVE::QemuConfig

2019-10-28 Thread Stefan Reiter
Signed-off-by: Stefan Reiter 
---
 src/PVE/HA/Resources/PVEVM.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/PVE/HA/Resources/PVEVM.pm b/src/PVE/HA/Resources/PVEVM.pm
index 0a37cf6..3a4c07a 100644
--- a/src/PVE/HA/Resources/PVEVM.pm
+++ b/src/PVE/HA/Resources/PVEVM.pm
@@ -123,7 +123,7 @@ sub check_running {
 
 my $nodename = $haenv->nodename();
 
-if (PVE::QemuServer::check_running($vmid, 1, $nodename)) {
+if (PVE::QemuConfig::check_running($vmid, 1, $nodename)) {
# do not count VMs which are suspended for a backup job as running
my $conf = PVE::QemuConfig->load_config($vmid, $nodename);
if (defined($conf->{lock}) && $conf->{lock} eq 'backup') {
-- 
2.20.1




[pve-devel] [PATCH container 02/11] Move LXC-specific architecture translation here

2019-10-28 Thread Stefan Reiter
This is the only place where we need to do this translation; moving it here
allows reuse of the PVE::Tools function.
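
A minimal corrected sketch of the translation (note that the hunk below is
missing an '=', see the review replies further down in this thread):

    my %lxc_arch = (x86_64 => 'amd64', aarch64 => 'arm64');
    my $host_arch = $lxc_arch{PVE::Tools::get_host_arch()}
        // die "unsupported host architecture\n";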

Signed-off-by: Stefan Reiter 
---
 src/PVE/LXC/Setup.pm | 9 +
 1 file changed, 9 insertions(+)

diff --git a/src/PVE/LXC/Setup.pm b/src/PVE/LXC/Setup.pm
index 845aced..ae42a10 100644
--- a/src/PVE/LXC/Setup.pm
+++ b/src/PVE/LXC/Setup.pm
@@ -293,6 +293,15 @@ sub pre_start_hook {
 
 my $host_arch = PVE::Tools::get_host_arch();
 
+# containers use different architecture names
+if ($host_arch eq 'x86_64') {
+   $host_arch = 'amd64';
+} elsif ($host_arch eq 'aarch64') {
+   $host_arch 'arm64';
+} else {
+   die "unsupported host architecture '$host_arch'\n";
+}
+
 my $container_arch = $self->{conf}->{arch};
 
 $container_arch = 'amd64' if $container_arch eq 'i386'; # always use 64 bit version
-- 
2.20.1




[pve-devel] [PATCH qemu-server 03/11] Use get_host_arch from PVE::Tools

2019-10-28 Thread Stefan Reiter
...now that it no longer does LXC-specific stuff. Removes a FIXME.

Signed-off-by: Stefan Reiter 
---
 PVE/QemuServer.pm | 8 +---
 1 file changed, 1 insertion(+), 7 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index b635760..9af690a 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -36,7 +36,7 @@ use PVE::SafeSyslog;
 use PVE::Storage;
 use PVE::SysFSTools;
 use PVE::Systemd;
-use PVE::Tools qw(run_command lock_file lock_file_full file_read_firstline dir_glob_foreach $IPV6RE);
+use PVE::Tools qw(run_command lock_file lock_file_full file_read_firstline dir_glob_foreach get_host_arch $IPV6RE);
 
 use PVE::QMPClient;
 use PVE::QemuConfig;
@@ -3417,12 +3417,6 @@ sub vga_conf_has_spice {
 return $1 || 1;
 }
 
-my $host_arch; # FIXME: fix PVE::Tools::get_host_arch
-sub get_host_arch() {
-$host_arch = (POSIX::uname())[4] if !$host_arch;
-return $host_arch;
-}
-
 sub is_native($) {
 my ($arch) = @_;
 return get_host_arch() eq $arch;
-- 
2.20.1




[pve-devel] [PATCH qemu-server 07/11] refactor: extract QEMU machine related helpers to package

2019-10-28 Thread Stefan Reiter
...PVE::QemuServer::Machine.

qemu_machine_feature_enabled is exported since it has a *lot* of users
in PVE::QemuServer and a long enough name as it is.

Signed-off-by: Stefan Reiter 
---

Not sure if PVE::QemuMachine wouldn't be a better package name. I'm fine with
both (or other suggestions), if someone has preferences.
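
A hedged usage sketch of the exported helper after the move (signature as
currently used in QemuServer):

    use PVE::QemuServer::Machine qw(qemu_machine_feature_enabled);

    # true if the effective machine type is at least pc-*-2.12
    if (qemu_machine_feature_enabled($machine_type, $kvmver, 2, 12)) {
        # enable the newer behavior
    }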

 PVE/QemuConfig.pm |   3 +-
 PVE/QemuMigrate.pm|   3 +-
 PVE/QemuServer.pm | 101 +++---
 PVE/QemuServer/Machine.pm | 100 +
 PVE/QemuServer/Makefile   |   1 +
 PVE/VZDump/QemuServer.pm  |   3 +-
 6 files changed, 115 insertions(+), 96 deletions(-)
 create mode 100644 PVE/QemuServer/Machine.pm

diff --git a/PVE/QemuConfig.pm b/PVE/QemuConfig.pm
index bcad9c8..0a7f1ab 100644
--- a/PVE/QemuConfig.pm
+++ b/PVE/QemuConfig.pm
@@ -10,6 +10,7 @@ use PVE::INotify;
 use PVE::ProcFSTools;
 use PVE::QemuSchema;
 use PVE::QemuServer;
+use PVE::QemuServer::Machine;
 use PVE::QMP qw(vm_mon_cmd vm_mon_cmd_nocheck);
 use PVE::Storage;
 use PVE::Tools;
@@ -149,7 +150,7 @@ sub __snapshot_save_vmstate {
 $name .= ".raw" if $scfg->{path}; # add filename extension for file base storage
 
 my $statefile = PVE::Storage::vdisk_alloc($storecfg, $target, $vmid, 'raw', $name, $size*1024);
-my $runningmachine = PVE::QemuServer::get_current_qemu_machine($vmid);
+my $runningmachine = PVE::QemuServer::Machine::get_current_qemu_machine($vmid);
 
 if ($suspend) {
$conf->{vmstate} = $statefile;
diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index aea7eac..9ac78f8 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -12,6 +12,7 @@ use PVE::Cluster;
 use PVE::Storage;
 use PVE::QemuConfig;
 use PVE::QemuServer;
+use PVE::QemuServer::Machine;
 use PVE::QMP qw(vm_mon_cmd vm_mon_cmd_nocheck);
 use Time::HiRes qw( usleep );
 use PVE::RPCEnvironment;
@@ -217,7 +218,7 @@ sub prepare {
die "can't migrate running VM without --online\n" if !$online;
$running = $pid;
 
-   $self->{forcemachine} = PVE::QemuServer::qemu_machine_pxe($vmid, $conf);
+   $self->{forcemachine} = PVE::QemuServer::Machine::qemu_machine_pxe($vmid, $conf);
 
 }
 my $loc_res = PVE::QemuServer::check_local_resources($conf, 1);
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index bf696a7..e026a10 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -43,6 +43,7 @@ use PVE::QMPClient;
 use PVE::QemuConfig;
 use PVE::QemuSchema;
 use PVE::QemuServer::Cloudinit;
+use PVE::QemuServer::Machine qw(qemu_machine_feature_enabled);
 use PVE::QemuServer::Memory;
 use PVE::QemuServer::PCI qw(print_pci_addr print_pcie_addr print_pcie_root_port);
 use PVE::QemuServer::USB qw(parse_usb_device);
@@ -1828,20 +1829,14 @@ sub path_is_scsi {
 return $res;
 }
 
-sub machine_type_is_q35 {
-my ($conf) = @_;
-
-return $conf->{machine} && ($conf->{machine} =~ m/q35/) ? 1 : 0;
-}
-
 sub print_tabletdevice_full {
 my ($conf, $arch) = @_;
 
-my $q35 = machine_type_is_q35($conf);
+my $q35 = PVE::QemuServer::Machine::machine_type_is_q35($conf);
 
 # we use uhci for old VMs because tablet driver was buggy in older qemu
 my $usbbus;
-if (machine_type_is_q35($conf) || $arch eq 'aarch64') {
+if (PVE::QemuServer::Machine::machine_type_is_q35($conf) || $arch eq 'aarch64') {
$usbbus = 'ehci';
 } else {
$usbbus = 'uhci';
@@ -2190,7 +2185,7 @@ sub print_vga_device {
$memory = ",ram_size=67108864,vram_size=33554432";
 }
 
-my $q35 = machine_type_is_q35($conf);
+my $q35 = PVE::QemuServer::Machine::machine_type_is_q35($conf);
 my $vgaid = "vga" . ($id // '');
 my $pciaddr;
 
@@ -3479,7 +3474,7 @@ sub config_to_command {
 
 die "detected old qemu-kvm binary ($kvmver)\n" if $vernum < 15000;
 
-my $q35 = machine_type_is_q35($conf);
+my $q35 = PVE::QemuServer::Machine::machine_type_is_q35($conf);
 my $hotplug_features = parse_hotplug_features(defined($conf->{hotplug}) ? $conf->{hotplug} : '1');
 my $use_old_bios_files = undef;
 ($use_old_bios_files, $machine_type) = qemu_use_old_bios_files($machine_type);
@@ -4113,7 +4108,7 @@ sub vm_devices_list {
 sub vm_deviceplug {
 my ($storecfg, $conf, $vmid, $deviceid, $device, $arch, $machine_type) = @_;
 
-my $q35 = machine_type_is_q35($conf);
+my $q35 = PVE::QemuServer::Machine::machine_type_is_q35($conf);
 
 my $devices_list = vm_devices_list($vmid);
 return 1 if defined($devices_list->{$deviceid});
@@ -4189,7 +4184,7 @@ sub vm_deviceplug {
 
    return undef if !qemu_netdevadd($vmid, $conf, $arch, $device, $deviceid);
 
-   my $machine_type = PVE::QemuServer::qemu_machine_pxe($vmid, $conf);
+   my $machine_type = PVE::QemuServer::Machine::qemu_machine_pxe($vmid, $conf);
    my $use_old_bios_files = undef;
    ($use_old_bios_files, $machine_type) = qemu_use_old_bios_files($machine_type);
 
@@ -4503,7 +4498,7 @@ sub qemu_usb_hotplug {
 

[pve-devel] [PATCH 00/11] Refactor QemuServer to avoid dependency cycles

2019-10-28 Thread Stefan Reiter
First 3 patches are independent refactorings around get_host_arch.

Rest of the series refactors QemuServer and creates three new packages:
* 'PVE::QemuSchema' for schema related code and common directory creation
* 'PVE::QMP' for higher-level QMP functions
* 'PVE::QemuServer::Machine' for QEMU machine-type related helpers

This refactoring came along because qemu_machine_feature_enabled needs to be
used in 'PVE::QemuServer::CPUConfig', a new package that will be introduced with
my custom CPU series [0]. This would currently require dependency cycles, but by
extracting the code in this series and splitting it up into multiple helper
modules, this can be avoided.

Care was taken not to introduce new dependency cycles, though this required
moving the 'check_running' function to QemuConfig.pm, where it doesn't *quite*
fit IMO, but I also didn't want to create a new module just for this one
function. Open for ideas ofc.

[0] https://pve.proxmox.com/pipermail/pve-devel/2019-October/039608.html
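
Roughly, the module layering after the series looks like this (a sketch,
lowest layer first):

    # PVE::QemuSchema          - paths and schema helpers, no qemu-server deps
    # PVE::QMP                 - high-level QMP access (uses QemuSchema)
    # PVE::QemuServer::Machine - machine type helpers (uses PVE::QMP)
    # PVE::QemuServer          - uses all of the above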

(@Thomas: I rebased the series just before sending to work with your cleanups)


common: Stefan Reiter (1):
  Make get_host_arch return raw uname entry

 src/PVE/Tools.pm | 17 +
 1 file changed, 5 insertions(+), 12 deletions(-)

container: Stefan Reiter (1):
  Move LXC-specific architecture translation here

 src/PVE/LXC/Setup.pm | 9 +
 1 file changed, 9 insertions(+)

qemu-server: Stefan Reiter (5):
  Use get_host_arch from PVE::Tools
  refactor: create QemuSchema and move file/dir code
  refactor: Move check_running to QemuConfig
  refactor: create PVE::QMP for high-level QMP access
  refactor: extract QEMU machine related helpers to package

 PVE/API2/Qemu.pm |  45 +++---
 PVE/API2/Qemu/Agent.pm   |   7 +-
 PVE/CLI/qm.pm|  27 ++--
 PVE/Makefile |   4 +-
 PVE/QMP.pm   |  71 +
 PVE/QMPClient.pm |   5 +-
 PVE/QemuConfig.pm|  92 +--
 PVE/QemuMigrate.pm   |  27 ++--
 PVE/QemuSchema.pm|  35 +
 PVE/QemuServer.pm| 294 ---
 PVE/QemuServer/Agent.pm  |   6 +-
 PVE/QemuServer/ImportDisk.pm |   3 +-
 PVE/QemuServer/Machine.pm| 100 
 PVE/QemuServer/Makefile  |   1 +
 PVE/QemuServer/Memory.pm |  12 +-
 PVE/VZDump/QemuServer.pm |  23 +--
 test/snapshot-test.pm|  21 ++-
 17 files changed, 419 insertions(+), 354 deletions(-)
 create mode 100644 PVE/QMP.pm
 create mode 100644 PVE/QemuSchema.pm
 create mode 100644 PVE/QemuServer/Machine.pm

ha-manager: Stefan Reiter (2):
  refactor: check_running was moved to PVE::QemuConfig
  refactor: vm_qmp_command was moved to PVE::QMP

 src/PVE/HA/Resources/PVEVM.pm | 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)

manager: Stefan Reiter (2):
  refactor: check_running was moved to QemuConfig
  refactor: vm_mon_cmd was moved to PVE::QMP

 PVE/API2/Nodes.pm   | 6 +++---
 PVE/Service/pvestatd.pm | 3 ++-
 2 files changed, 5 insertions(+), 4 deletions(-)

-- 
2.20.1



[pve-devel] [PATCH ha-manager 09/11] refactor: vm_qmp_command was moved to PVE::QMP

2019-10-28 Thread Stefan Reiter
Signed-off-by: Stefan Reiter 
---
 src/PVE/HA/Resources/PVEVM.pm | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/src/PVE/HA/Resources/PVEVM.pm b/src/PVE/HA/Resources/PVEVM.pm
index 3a4c07a..84c23be 100644
--- a/src/PVE/HA/Resources/PVEVM.pm
+++ b/src/PVE/HA/Resources/PVEVM.pm
@@ -11,6 +11,8 @@ BEGIN {
import  PVE::QemuConfig;
require PVE::QemuServer;
import  PVE::QemuServer;
+   require PVE::QMP;
+   import  PVE::QMP;
require PVE::API2::Qemu;
import  PVE::API2::Qemu;
 }
@@ -128,7 +130,7 @@ sub check_running {
my $conf = PVE::QemuConfig->load_config($vmid, $nodename);
if (defined($conf->{lock}) && $conf->{lock} eq 'backup') {
my $qmpstatus = eval {
-   PVE::QemuServer::vm_qmp_command($vmid, { execute => 'query-status' })
+   PVE::QMP::vm_qmp_command($vmid, { execute => 'query-status' })
};
warn "$@\n" if $@;
 
-- 
2.20.1




[pve-devel] [PATCH qemu-server 06/11] refactor: create PVE::QMP for high-level QMP access

2019-10-28 Thread Stefan Reiter
...in addition to PVE::QMPClient for low-level.

Also move all references (most with exports, the methods are used a lot
and have unique enough names IMO) and fix tests.

The references in __snapshot_create_vol_snapshots_hook (in QemuConfig) are an
exception, as using the exported functions there breaks tests.
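
A hedged usage sketch of the new module:

    use PVE::QMP qw(vm_mon_cmd);

    # run a QMP command against the VM's monitor socket
    my $status = vm_mon_cmd($vmid, 'query-status');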

Signed-off-by: Stefan Reiter 
---
 PVE/API2/Qemu.pm | 13 
 PVE/API2/Qemu/Agent.pm   |  7 ++--
 PVE/CLI/qm.pm| 11 ---
 PVE/Makefile |  1 +
 PVE/QMP.pm   | 71 
 PVE/QemuConfig.pm| 15 +
 PVE/QemuMigrate.pm   | 21 ++--
 PVE/QemuServer.pm| 65 
 PVE/QemuServer/Agent.pm  |  3 +-
 PVE/QemuServer/Memory.pm |  9 ++---
 PVE/VZDump/QemuServer.pm | 13 
 test/snapshot-test.pm| 18 +++---
 12 files changed, 141 insertions(+), 106 deletions(-)
 create mode 100644 PVE/QMP.pm

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 9912e4d..50a0592 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -21,6 +21,7 @@ use PVE::GuestHelpers;
 use PVE::QemuConfig;
 use PVE::QemuServer;
 use PVE::QemuMigrate;
+use PVE::QMP qw(vm_mon_cmd vm_qmp_command);
 use PVE::RPCEnvironment;
 use PVE::AccessControl;
 use PVE::INotify;
@@ -1835,8 +1836,8 @@ __PACKAGE__->register_method({
my ($ticket, undef, $remote_viewer_config) =
PVE::AccessControl::remote_viewer_config($authuser, $vmid, $node, 
$proxy, $title, $port);
 
-   PVE::QemuServer::vm_mon_cmd($vmid, "set_password", protocol => 'spice', 
password => $ticket);
-   PVE::QemuServer::vm_mon_cmd($vmid, "expire_password", protocol => 
'spice', time => "+30");
+   vm_mon_cmd($vmid, "set_password", protocol => 'spice', password => 
$ticket);
+   vm_mon_cmd($vmid, "expire_password", protocol => 'spice', time => 
"+30");
 
return $remote_viewer_config;
 }});
@@ -2261,7 +2262,7 @@ __PACKAGE__->register_method({
# checking the qmp status here to get feedback to the gui/cli/api
# and the status query should not take too long
my $qmpstatus = eval {
-   PVE::QemuServer::vm_qmp_command($vmid, { execute => "query-status" }, 0);
+   vm_qmp_command($vmid, { execute => "query-status" }, 0);
};
my $err = $@ if $@;
 
@@ -2341,7 +2342,7 @@ __PACKAGE__->register_method({
my $vmid = extract_param($param, 'vmid');
 
my $qmpstatus = eval {
-   PVE::QemuServer::vm_qmp_command($vmid, { execute => "query-status" }, 0);
+   vm_qmp_command($vmid, { execute => "query-status" }, 0);
};
my $err = $@ if $@;
 
@@ -3093,7 +3094,7 @@ __PACKAGE__->register_method({
PVE::QemuConfig->write_config($vmid, $conf);
 
    if ($running && PVE::QemuServer::parse_guest_agent($conf)->{fstrim_cloned_disks} && PVE::QemuServer::qga_check_running($vmid)) {
-       eval { PVE::QemuServer::vm_mon_cmd($vmid, "guest-fstrim"); };
+   eval { vm_mon_cmd($vmid, "guest-fstrim"); };
}
 
eval {
@@ -3449,7 +3450,7 @@ __PACKAGE__->register_method({
 
my $res = '';
eval {
-   $res = PVE::QemuServer::vm_human_monitor_command($vmid, $param->{command});
+   $res = PVE::QMP::vm_human_monitor_command($vmid, $param->{command});
};
$res = "ERROR: $@" if $@;
 
diff --git a/PVE/API2/Qemu/Agent.pm b/PVE/API2/Qemu/Agent.pm
index 839146c..da7111e 100644
--- a/PVE/API2/Qemu/Agent.pm
+++ b/PVE/API2/Qemu/Agent.pm
@@ -7,6 +7,7 @@ use PVE::RESTHandler;
 use PVE::JSONSchema qw(get_standard_option);
 use PVE::QemuServer;
 use PVE::QemuServer::Agent qw(agent_available agent_cmd check_agent_error);
+use PVE::QMP qw(vm_mon_cmd);
 use MIME::Base64 qw(encode_base64 decode_base64);
 use JSON;
 
@@ -190,7 +191,7 @@ sub register_command {
agent_available($vmid, $conf);
 
my $cmd = $param->{command} // $command;
-   my $res = PVE::QemuServer::vm_mon_cmd($vmid, "guest-$cmd");
+   my $res = vm_mon_cmd($vmid, "guest-$cmd");
 
return { result => $res };
}});
@@ -415,7 +416,7 @@ __PACKAGE__->register_method({
my $content = "";
 
while ($bytes_left > 0 && !$eof) {
-   my $read = PVE::QemuServer::vm_mon_cmd($vmid, "guest-file-read", handle => $qgafh, count => int($read_size));
+   my $read = vm_mon_cmd($vmid, "guest-file-read", handle => $qgafh, count => int($read_size));
check_agent_error($read, "can't read from file");
 
$content .= decode_base64($read->{'buf-b64'});
@@ -423,7 +424,7 @@ __PACKAGE__->register_method({
$eof = $read->{eof} // 0;
}
 
-   my $res = PVE::QemuServer::vm_mon_cmd($vmid, "guest-file-close", handle => $qgafh);
+   my $res = vm_mon_cmd($vmid, "guest-file-close", handle => $qgafh);
check_agent_error($res, "can't close fi

[pve-devel] [PATCH manager 10/11] refactor: check_running was moved to QemuConfig

2019-10-28 Thread Stefan Reiter
Signed-off-by: Stefan Reiter 
---
 PVE/API2/Nodes.pm | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/PVE/API2/Nodes.pm b/PVE/API2/Nodes.pm
index 9e731e05..0f30a518 100644
--- a/PVE/API2/Nodes.pm
+++ b/PVE/API2/Nodes.pm
@@ -1729,7 +1729,7 @@ __PACKAGE__->register_method ({
} elsif ($d->{type} eq 'qemu') {
$typeText = 'VM';
$default_delay = 3; # to reduce load
-   return if PVE::QemuServer::check_running($vmid, 1);
+   return if PVE::QemuConfig::check_running($vmid, 1);
print STDERR "Starting VM $vmid\n";
$upid = PVE::API2::Qemu->vm_start({node => $nodename, vmid => $vmid });
} else {
@@ -1775,7 +1775,7 @@ my $create_stop_worker = sub {
$upid = PVE::API2::LXC::Status->vm_shutdown({node => $nodename, vmid => $vmid,
                                             timeout => $timeout, forceStop => 1 });
 } elsif ($type eq 'qemu') {
-   return if !PVE::QemuServer::check_running($vmid, 1);
+   return if !PVE::QemuConfig::check_running($vmid, 1);
my $timeout =  defined($down_timeout) ? int($down_timeout) : 60*3;
print STDERR "Stopping VM $vmid (timeout = $timeout seconds)\n";
$upid = PVE::API2::Qemu->vm_shutdown({node => $nodename, vmid => $vmid,
@@ -1894,7 +1894,7 @@ my $create_migrate_worker = sub {
$upid = PVE::API2::LXC->migrate_vm({node => $nodename, vmid => $vmid, target => $target,
                                    restart => $online });
 } elsif ($type eq 'qemu') {
-   my $online = PVE::QemuServer::check_running($vmid, 1) ? 1 : 0;
+   my $online = PVE::QemuConfig::check_running($vmid, 1) ? 1 : 0;
print STDERR "Migrating VM $vmid\n";
$upid = PVE::API2::Qemu->migrate_vm({node => $nodename, vmid => $vmid, target => $target,
                                     online => $online });
-- 
2.20.1




[pve-devel] [PATCH qemu-server 04/11] refactor: create QemuSchema and move file/dir code

2019-10-28 Thread Stefan Reiter
Also merge the 'mkdir's from QemuServer and QemuConfig to reduce
duplication (both modules depend on QemuSchema anyway).

nodename() is still called in multiple modules, but since it's cached by
the INotify module it doesn't really matter.
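
A hedged usage sketch of the moved path helpers:

    my $qmp_sock = PVE::QemuSchema::qmp_socket($vmid);   # /var/run/qemu-server/<vmid>.qmp
    my $pidfile  = PVE::QemuSchema::pidfile_name($vmid); # /var/run/qemu-server/<vmid>.pid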

Signed-off-by: Stefan Reiter 
---

QemuSchema is pretty small right now, but it could hold much more of the static
setup code from QemuServer.pm (JSONSchema formats and the like). This patch only
moves the stuff necessary for the rest of the series to not need cyclic dependencies.

I want to refactor more into this in the future, but for now I'd like to wait
for my CPU series, since that also touches some schema stuff.

 PVE/CLI/qm.pm |  3 ++-
 PVE/Makefile  |  3 ++-
 PVE/QMPClient.pm  |  5 +++--
 PVE/QemuConfig.pm | 10 ++
 PVE/QemuSchema.pm | 35 +++
 PVE/QemuServer.pm | 41 -
 6 files changed, 52 insertions(+), 45 deletions(-)
 create mode 100644 PVE/QemuSchema.pm

diff --git a/PVE/CLI/qm.pm b/PVE/CLI/qm.pm
index ea74ad5..44beac9 100755
--- a/PVE/CLI/qm.pm
+++ b/PVE/CLI/qm.pm
@@ -21,6 +21,7 @@ use PVE::RPCEnvironment;
 use PVE::Exception qw(raise_param_exc);
 use PVE::Network;
 use PVE::GuestHelpers;
+use PVE::QemuSchema;
 use PVE::QemuServer;
 use PVE::QemuServer::ImportDisk;
 use PVE::QemuServer::OVF;
@@ -209,7 +210,7 @@ __PACKAGE__->register_method ({
my ($param) = @_;
 
my $vmid = $param->{vmid};
-   my $vnc_socket = PVE::QemuServer::vnc_socket($vmid);
+   my $vnc_socket = PVE::QemuSchema::vnc_socket($vmid);
 
	if (my $ticket = $ENV{LC_PVE_TICKET}) {  # NOTE: ssh on debian only pass LC_* variables
	    PVE::QemuServer::vm_mon_cmd($vmid, "change", device => 'vnc', target => "unix:$vnc_socket,password");
diff --git a/PVE/Makefile b/PVE/Makefile
index dc17368..5ec715e 100644
--- a/PVE/Makefile
+++ b/PVE/Makefile
@@ -2,7 +2,8 @@ PERLSOURCE =\
QemuServer.pm   \
QemuMigrate.pm  \
QMPClient.pm\
-   QemuConfig.pm
+   QemuConfig.pm   \
+   QemuSchema.pm   \
 
 .PHONY: install
 install:
diff --git a/PVE/QMPClient.pm b/PVE/QMPClient.pm
index 570dba2..188c6d7 100644
--- a/PVE/QMPClient.pm
+++ b/PVE/QMPClient.pm
@@ -2,6 +2,7 @@ package PVE::QMPClient;
 
 use strict;
 use warnings;
+use PVE::QemuSchema;
 use PVE::QemuServer;
 use IO::Multiplex;
 use POSIX qw(EINTR EAGAIN);
@@ -58,7 +59,7 @@ my $push_cmd_to_queue = sub {
 
 my $qga = ($execute =~ /^guest\-+/) ? 1 : 0;
 
-my $sname = PVE::QemuServer::qmp_socket($vmid, $qga);
+my $sname = PVE::QemuSchema::qmp_socket($vmid, $qga);
 
 $self->{queue_info}->{$sname} = { qga => $qga, vmid => $vmid, sname => $sname, cmds => [] }
 if !$self->{queue_info}->{$sname};
@@ -186,7 +187,7 @@ my $open_connection = sub {
 my $vmid = $queue_info->{vmid};
 my $qga = $queue_info->{qga};
 
-my $sname = PVE::QemuServer::qmp_socket($vmid, $qga);
+my $sname = PVE::QemuSchema::qmp_socket($vmid, $qga);
 
 $timeout = 1 if !$timeout;
 
diff --git a/PVE/QemuConfig.pm b/PVE/QemuConfig.pm
index e9796a3..b63e57c 100644
--- a/PVE/QemuConfig.pm
+++ b/PVE/QemuConfig.pm
@@ -5,6 +5,7 @@ use warnings;
 
 use PVE::AbstractConfig;
 use PVE::INotify;
+use PVE::QemuSchema;
 use PVE::QemuServer;
 use PVE::Storage;
 use PVE::Tools;
@@ -13,13 +14,6 @@ use base qw(PVE::AbstractConfig);
 
 my $nodename = PVE::INotify::nodename();
 
-mkdir "/etc/pve/nodes/$nodename";
-my $confdir = "/etc/pve/nodes/$nodename/qemu-server";
-mkdir $confdir;
-
-my $lock_dir = "/var/lock/qemu-server";
-mkdir $lock_dir;
-
 my $MAX_UNUSED_DISKS = 256;
 
 # BEGIN implemented abstract methods from PVE::AbstractConfig
@@ -37,7 +31,7 @@ sub __config_max_unused_disks {
 sub config_file_lock {
 my ($class, $vmid) = @_;
 
-return "$lock_dir/lock-$vmid.conf";
+return "$PVE::QemuSchema::lock_dir/lock-$vmid.conf";
 }
 
 sub cfs_config_path {
diff --git a/PVE/QemuSchema.pm b/PVE/QemuSchema.pm
new file mode 100644
index 000..446177d
--- /dev/null
+++ b/PVE/QemuSchema.pm
@@ -0,0 +1,35 @@
+package PVE::QemuSchema;
+
+use strict;
+use warnings;
+
+use PVE::INotify;
+
+my $nodename = PVE::INotify::nodename();
+mkdir "/etc/pve/nodes/$nodename";
+my $confdir = "/etc/pve/nodes/$nodename/qemu-server";
+mkdir $confdir;
+
+our $var_run_tmpdir = "/var/run/qemu-server";
+mkdir $var_run_tmpdir;
+
+our $lock_dir = "/var/lock/qemu-server";
+mkdir $lock_dir;
+
+sub qmp_socket {
+my ($vmid, $qga) = @_;
+my $sockettype = $qga ? 'qga' : 'qmp';
+return "${var_run_tmpdir}/$vmid.$sockettype";
+}
+
+sub pidfile_name {
+my ($vmid) = @_;
+return "${var_run_tmpdir}/$vmid.pid";
+}
+
+sub vnc_socket {
+my ($vmid) = @_;
+return "${var_run_tmpdir}/$vmid.vnc";
+}
+
+1;
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 9af690a..817394e 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -40,6 +40,7 @@ use PVE::Tools qw(run_com

[pve-devel] [PATCH qemu-server 05/11] refactor: Move check_running to QemuConfig

2019-10-28 Thread Stefan Reiter
Also move check_cmdline, since check_running is its only user. Changes
all uses of check_running in QemuServer, including mocking in snapshot
tests.

Signed-off-by: Stefan Reiter 
---
 PVE/API2/Qemu.pm | 32 +++---
 PVE/CLI/qm.pm| 13 +++---
 PVE/QemuConfig.pm| 64 ++-
 PVE/QemuMigrate.pm   |  3 +-
 PVE/QemuServer.pm| 85 ++--
 PVE/QemuServer/Agent.pm  |  3 +-
 PVE/QemuServer/ImportDisk.pm |  3 +-
 PVE/QemuServer/Memory.pm |  3 +-
 PVE/VZDump/QemuServer.pm |  7 +--
 test/snapshot-test.pm|  7 +--
 10 files changed, 115 insertions(+), 105 deletions(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index b2c0b0d..9912e4d 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -556,7 +556,7 @@ __PACKAGE__->register_method({
 
PVE::QemuConfig->check_protection($conf, $emsg);
 
-   die "$emsg vm is running\n" if PVE::QemuServer::check_running($vmid);
+   die "$emsg vm is running\n" if PVE::QemuConfig::check_running($vmid);
 
my $realcmd = sub {
PVE::QemuServer::restore_archive($archive, $vmid, $authuser, {
@@ -1220,7 +1220,7 @@ my $update_vm_api  = sub {
 
return if !scalar(keys %{$conf->{pending}});
 
-   my $running = PVE::QemuServer::check_running($vmid);
+   my $running = PVE::QemuConfig::check_running($vmid);
 
# apply pending changes
 
@@ -1439,7 +1439,7 @@ __PACKAGE__->register_method({
 
# early tests (repeat after locking)
die "VM $vmid is running - destroy failed\n"
-   if PVE::QemuServer::check_running($vmid);
+   if PVE::QemuConfig::check_running($vmid);
 
my $realcmd = sub {
my $upid = shift;
@@ -1447,7 +1447,7 @@ __PACKAGE__->register_method({
syslog('info', "destroy VM $vmid: $upid\n");
PVE::QemuConfig->lock_config($vmid, sub {
die "VM $vmid is running - destroy failed\n"
-   if (PVE::QemuServer::check_running($vmid));
+   if (PVE::QemuConfig::check_running($vmid));
 
PVE::QemuServer::destroy_vm($storecfg, $vmid, 1, $skiplock);
 
@@ -2179,7 +2179,7 @@ __PACKAGE__->register_method({
raise_param_exc({ skiplock => "Only root may use this option." })
if $skiplock && $authuser ne 'root@pam';
 
-   die "VM $vmid not running\n" if !PVE::QemuServer::check_running($vmid);
+   die "VM $vmid not running\n" if !PVE::QemuConfig::check_running($vmid);
 
my $realcmd = sub {
my $upid = shift;
@@ -2349,7 +2349,7 @@ __PACKAGE__->register_method({
die "VM is paused - cannot shutdown\n";
}
 
-   die "VM $vmid not running\n" if !PVE::QemuServer::check_running($vmid);
+   die "VM $vmid not running\n" if !PVE::QemuConfig::check_running($vmid);
 
my $realcmd = sub {
my $upid = shift;
@@ -2413,7 +2413,7 @@ __PACKAGE__->register_method({
raise_param_exc({ skiplock => "Only root may use this option." })
if $skiplock && $authuser ne 'root@pam';
 
-   die "VM $vmid not running\n" if !PVE::QemuServer::check_running($vmid);
+   die "VM $vmid not running\n" if !PVE::QemuConfig::check_running($vmid);
 
die "Cannot suspend HA managed VM to disk\n"
if $todisk && PVE::HA::Config::vm_is_ha_managed($vmid);
@@ -2482,7 +2482,7 @@ __PACKAGE__->register_method({
};
 
die "VM $vmid not running\n"
-   if !$to_disk_suspended && !PVE::QemuServer::check_running($vmid, $nocheck);
+   if !$to_disk_suspended && !PVE::QemuConfig::check_running($vmid, $nocheck);
 
my $realcmd = sub {
my $upid = shift;
@@ -2592,7 +2592,7 @@ __PACKAGE__->register_method({
 
my $feature = extract_param($param, 'feature');
 
-   my $running = PVE::QemuServer::check_running($vmid);
+   my $running = PVE::QemuConfig::check_running($vmid);
 
my $conf = PVE::QemuConfig->load_config($vmid);
 
@@ -2739,7 +2739,7 @@ __PACKAGE__->register_method({
 
 PVE::Cluster::check_cfs_quorum();
 
-   my $running = PVE::QemuServer::check_running($vmid) || 0;
+   my $running = PVE::QemuConfig::check_running($vmid) || 0;
 
# exclusive lock if VM is running - else shared lock is enough;
my $shared_lock = $running ? 0 : 1;
@@ -2753,7 +2753,7 @@ __PACKAGE__->register_method({
 
PVE::QemuConfig->check_lock($conf);
 
-   my $verify_running = PVE::QemuServer::check_running($vmid) || 0;
+   my $verify_running = PVE::QemuConfig::check_running($vmid) || 0;
 
die "unexpected state change\n" if $verify_running != $running;
 
@@ -3059,7 +3059,7 @@ __PACKAGE__->register_method({
 
PVE::Cluster::log_msg('info', $authuser, "move disk VM $vmid: move --disk $disk --storage $storeid");
 
-   my $running = PVE::QemuServer::che

[pve-devel] [PATCH cluster 1/1] change certificate lifetime to two years

2019-10-28 Thread Dominik Csapak
instead of 10 years, to avoid issues with browsers/OSes that reject
certificates which have a longer lifetime
(e.g. macOS Catalina only accepts a maximum of 825 days if issued after July 2019)
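
For reference, one way to inspect a node certificate's validity period
(standard openssl usage, path as used by PVE):

    openssl x509 -in /etc/pve/nodes/<node>/pve-ssl.pem -noout -dates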

Signed-off-by: Dominik Csapak 
---
 data/PVE/Cluster.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/data/PVE/Cluster.pm b/data/PVE/Cluster.pm
index 9cb68d8..2b26ff5 100644
--- a/data/PVE/Cluster.pm
+++ b/data/PVE/Cluster.pm
@@ -320,7 +320,7 @@ __EOD
 eval {
# wrap openssl with faketime to prevent bug #904
run_silent_cmd(['faketime', 'yesterday', 'openssl', 'x509', '-req',
-   '-in', $reqfn, '-days', '3650', '-out', $pvessl_cert_fn,
+   '-in', $reqfn, '-days', '730', '-out', $pvessl_cert_fn,
'-CAkey', $pveca_key_fn, '-CA', $pveca_cert_fn,
'-CAserial', $pveca_srl_fn, '-extfile', $cfgfn]);
 };
-- 
2.20.1




[pve-devel] [PATCH manager/cluster] improve handling of issued certificates

2019-10-28 Thread Dominik Csapak
this series enables auto-renewal of our self-issued certificates
by checking the expiry time daily with 'pveupdate' and
renewing them if they expire in less than 2 weeks

it also reduces the initial lifetime of the certificates to two years

this fixes an issue where some OSes/browsers (macOS Catalina) would
reject the certificate with the error 'REVOKED', since
they now have stricter rules for certificates

since other OSes/browsers will probably also make their rules stricter,
it makes sense to shorten the lifetime
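
condensed from the manager patch below, the core of the daily check (sketch):

    # renew if the certificate expires within the next two weeks
    if (PVE::Certificate::check_expiry($certpath, time() + 14*24*60*60)) {
        my $ip = PVE::Cluster::remote_node_ip($nodename);
        PVE::Cluster::gen_pve_ssl_cert(1, $nodename, $ip);
    }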

pve-manager:

Dominik Csapak (1):
  renew pve-ssl.pem when it nearly expires

 PVE/CertHelpers.pm |  6 ++
 bin/pveupdate  | 33 +
 2 files changed, 39 insertions(+)

pve-cluster:

Dominik Csapak (1):
  change certificate lifetime to two years

 data/PVE/Cluster.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

-- 
2.20.1




[pve-devel] [PATCH manager 1/1] renew pve-ssl.pem when it nearly expires

2019-10-28 Thread Dominik Csapak
but only if the CA is ours and the cert is issued by our CA
(checked via the issuer field and openssl verify)

this way we can reduce the lifetime of the certs without having
to worry that they run out

Signed-off-by: Dominik Csapak 
---
 PVE/CertHelpers.pm |  6 ++
 bin/pveupdate  | 33 +
 2 files changed, 39 insertions(+)

diff --git a/PVE/CertHelpers.pm b/PVE/CertHelpers.pm
index 52316aa0..7e088cb9 100644
--- a/PVE/CertHelpers.pm
+++ b/PVE/CertHelpers.pm
@@ -38,6 +38,12 @@ sub cert_path_prefix {
 return "/etc/pve/nodes/${node}/pveproxy-ssl";
 }
 
+sub default_cert_path_prefix {
+my ($node) = @_;
+
+return "/etc/pve/nodes/${node}/pve-ssl";
+}
+
 sub cert_lock {
 my ($timeout, $code, @param) = @_;
 
diff --git a/bin/pveupdate b/bin/pveupdate
index 5a42ce73..10b5c8f0 100755
--- a/bin/pveupdate
+++ b/bin/pveupdate
@@ -15,6 +15,7 @@ use PVE::Cluster;
 use PVE::APLInfo;
 use PVE::SafeSyslog;
 use PVE::RPCEnvironment;
+use PVE::Tools;
 use PVE::API2::Subscription;
 use PVE::API2::APT;
 use PVE::API2::ACME;
@@ -72,6 +73,38 @@ eval {
 };
 syslog ('err', "Renewing ACME certificate failed: $@") if $@;
 
+eval {
+# get CA and check issuer
+my $capath = "/etc/pve/pve-root-ca.pem";
+my $cainfo = PVE::Certificate::get_certificate_info($capath);
+if ($cainfo->{subject} !~ m|/CN=Proxmox Virtual Environment/.*/O=PVE Cluster Manager CA|) {
+   die "Root CA is not issued by Proxmox VE";
+}
+
+# get cert and check issuer and chain
+my $certpath = PVE::CertHelpers::default_cert_path_prefix($nodename).".pem";
+my $certinfo = PVE::Certificate::get_certificate_info($certpath);
+if ($certinfo->{issuer} ne $cainfo->{subject}) {
+   die "SSL Certificate is not issued by Proxmox VE root CA";
+}
+
+# check if signed by our ca
+
+# TODO
+# replace by low level interface in ssleay if version 1.86 is available
+PVE::Tools::run_command(['/usr/bin/openssl', 'verify', '-CAfile', $capath, $certpath]);
+
+# check if expiry is < 2W
+if (PVE::Certificate::check_expiry($certpath, time() + 14*24*60*60)) {
+   # create new certificate
+   my $ip = PVE::Cluster::remote_node_ip($nodename);
+   PVE::Cluster::gen_pve_ssl_cert(1, $nodename, $ip);
+   print "Restarting pveproxy\n";
+   PVE::Tools::run_command(['systemctl', 'reload-or-restart', 'pveproxy']);
+}
+};
+syslog ('err', "Checking/Renewing SSL certificate failed: $@") if $@;
+
 sub cleanup_tasks {
 
 my $taskdir = "/var/log/pve/tasks";
-- 
2.20.1




Re: [pve-devel] [PATCH container 02/11] Move LXC-specific architecture translation here

2019-10-28 Thread Fabian Grünbichler
On October 28, 2019 11:36 am, Stefan Reiter wrote:
> This is the only time we need to do this translation, moving it here
> allows reuse of the PVE::Tools function.
> 
> Signed-off-by: Stefan Reiter 
> ---
>  src/PVE/LXC/Setup.pm | 9 +
>  1 file changed, 9 insertions(+)
> 
> diff --git a/src/PVE/LXC/Setup.pm b/src/PVE/LXC/Setup.pm
> index 845aced..ae42a10 100644
> --- a/src/PVE/LXC/Setup.pm
> +++ b/src/PVE/LXC/Setup.pm
> @@ -293,6 +293,15 @@ sub pre_start_hook {
>  
>  my $host_arch = PVE::Tools::get_host_arch();
>  
> +# containers use different architecture names
> +if ($host_arch eq 'x86_64') {
> + $host_arch = 'amd64';
> +} elsif ($host_arch eq 'aarch64') {
> + $host_arch 'arm64';

missing '='

> +} else {
> + die "unsupported host architecture '$host_arch'\n";
> +}
> +
>  my $container_arch = $self->{conf}->{arch};
>  
>  $container_arch = 'amd64' if $container_arch eq 'i386'; # always use 64 
> bit version
> -- 
> 2.20.1
> 
> 



Re: [pve-devel] [PATCH container 02/11] Move LXC-specific architecture translation here

2019-10-28 Thread Stefan Reiter

On 10/28/19 11:54 AM, Fabian Grünbichler wrote:

On October 28, 2019 11:36 am, Stefan Reiter wrote:

This is the only time we need to do this translation, moving it here
allows reuse of the PVE::Tools function.

Signed-off-by: Stefan Reiter 
---
  src/PVE/LXC/Setup.pm | 9 +
  1 file changed, 9 insertions(+)

diff --git a/src/PVE/LXC/Setup.pm b/src/PVE/LXC/Setup.pm
index 845aced..ae42a10 100644
--- a/src/PVE/LXC/Setup.pm
+++ b/src/PVE/LXC/Setup.pm
@@ -293,6 +293,15 @@ sub pre_start_hook {
  
  my $host_arch = PVE::Tools::get_host_arch();
  
+# containers use different architecture names

+if ($host_arch eq 'x86_64') {
+   $host_arch = 'amd64';
+} elsif ($host_arch eq 'aarch64') {
+   $host_arch 'arm64';


missing '='



Yeah I just realized I did my testing on the wrong VM...
I'll fix some things and send a non-broken v2 ASAP.


+} else {
+   die "unsupported host architecture '$host_arch'\n";
+}
+
  my $container_arch = $self->{conf}->{arch};
  
  $container_arch = 'amd64' if $container_arch eq 'i386'; # always use 64 bit version

--
2.20.1




[pve-devel] [PATCH manager 2/2] ui: TFA: default to a 160 bit secret

2019-10-28 Thread Wolfgang Bumiller
Signed-off-by: Wolfgang Bumiller 
---
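
A sketch of the arithmetic behind the subject line, assuming each random
byte below is mapped to one base32 character (5 bits) in the loop whose
body is cut off here:

    32 bytes -> 32 base32 chars * 5 bits/char = 160 bit secret
    (previously: 16 bytes -> 16 chars * 5 bits = 80 bits)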
 www/manager6/dc/TFAEdit.js | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/www/manager6/dc/TFAEdit.js b/www/manager6/dc/TFAEdit.js
index 7d19127d..8f3017f6 100644
--- a/www/manager6/dc/TFAEdit.js
+++ b/www/manager6/dc/TFAEdit.js
@@ -289,7 +289,7 @@ Ext.define('PVE.window.TFAEdit', {
 
randomizeSecret: function() {
var me = this;
-   var rnd = new Uint8Array(16);
+   var rnd = new Uint8Array(32);
window.crypto.getRandomValues(rnd);
var data = '';
rnd.forEach(function(b) {
-- 
2.20.1




[pve-devel] [PATCHSET] less restrictive TFA keys

2019-10-28 Thread Wolfgang Bumiller
This series adds a new format of how we store TFA keys. The reason is
documented in the new format verifier:

# The old format used 16 base32 chars or 40 hex digits. Since they have a common subset it's
# hard to distinguish them without our previous length constraints, so add a 'v2' of the
# format to support arbitrary lengths properly:

New secrets are now prefixed with 'v2-'. Hexadecimal secrets are still
supported by prefixing the secret itself with '0x' (since '0x' is not
actually valid in base32), e.g. 'v2-0xbeef00d'; otherwise the secret is
treated as base32: 'v2-ASDF2345'
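
A hedged summary of the accepted forms (example values, not real secrets):

    v2-0xAB12CD34EF567890    # v2, hexadecimal ('0x' prefix)
    v2-MFRGGZDFMZTWQ2LK      # v2, base32
    MFRGGZDFMZTWQ2LK         # legacy: exactly 16 base32 chars
    0123456789abcdef0123456789abcdef01234567    # legacy: exactly 40 hex digits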

Both old and new formats work, so existing configurations stay intact, and
still-cached JS GUIs will keep working fine.

Tested with AndOTP, FreeOTP & Google Authenticator.



[pve-devel] [PATCH access-control] api: tfa: use the new 'pve-tfa-secret' format

2019-10-28 Thread Wolfgang Bumiller
Signed-off-by: Wolfgang Bumiller 
---
Introduces a pve-common dependency bump.

 PVE/API2/AccessControl.pm | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/PVE/API2/AccessControl.pm b/PVE/API2/AccessControl.pm
index 9d2da8d..6d0ea82 100644
--- a/PVE/API2/AccessControl.pm
+++ b/PVE/API2/AccessControl.pm
@@ -509,9 +509,7 @@ __PACKAGE__->register_method ({
optional => 1,
description => 'When adding TOTP, the shared secret value.',
type => 'string',
-   # This is what pve-common's PVE::OTP::oath_verify_otp accepts.
-   # Should we move this to pve-common's JSONSchema as a named format?
-   pattern => qr/[A-Z2-7=]{16}|[A-Fa-f0-9]{40}/,
+   format => 'pve-tfa-secret',
},
config => {
optional => 1,
-- 
2.20.1




[pve-devel] [PATCH common 2/2] OTP: support v2 secret format

2019-10-28 Thread Wolfgang Bumiller
Signed-off-by: Wolfgang Bumiller 
---
 src/PVE/OTP.pm | 8 +++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/src/PVE/OTP.pm b/src/PVE/OTP.pm
index 019076b..070ab59 100644
--- a/src/PVE/OTP.pm
+++ b/src/PVE/OTP.pm
@@ -137,7 +137,13 @@ sub oath_verify_otp {
 foreach my $k (PVE::Tools::split_list($keys)) {
# Note: we generate 3 values to allow small time drift
my $binkey;
-   if ($k =~ /^[A-Z2-7=]{16}$/) {
+   if ($k =~ /^v2-0x([0-9a-fA-F]+)$/) {
+   # v2, hex
+   $binkey = pack('H*', $1);
+   } elsif ($k =~ /^v2-([A-Z2-7=]+)$/) {
+   # v2, base32
+   $binkey = MIME::Base32::decode_rfc3548($1);
+   } elsif ($k =~ /^[A-Z2-7=]{16}$/) {
$binkey = MIME::Base32::decode_rfc3548($k);
} elsif ($k =~ /^[A-Fa-f0-9]{40}$/) {
$binkey = pack('H*', $k);
-- 
2.20.1




[pve-devel] [PATCH manager 1/2] ui: TFAEdit: use 'v2' secret format

2019-10-28 Thread Wolfgang Bumiller
Signed-off-by: Wolfgang Bumiller 
---
Introduces a pve-access-control dependency bump.

 www/manager6/dc/TFAEdit.js | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/www/manager6/dc/TFAEdit.js b/www/manager6/dc/TFAEdit.js
index e1c3b658..7d19127d 100644
--- a/www/manager6/dc/TFAEdit.js
+++ b/www/manager6/dc/TFAEdit.js
@@ -233,7 +233,7 @@ Ext.define('PVE.window.TFAEdit', {
var params = {
userid: me.getView().userid,
action: 'new',
-   key: values.secret,
+   key: 'v2-' + values.secret,
config: PVE.Parser.printPropertyString({
type: 'oath',
digits: values.digits,
-- 
2.20.1




[pve-devel] [PATCH common 1/2] JSONSchema: add pve-tfa-secret option and format

2019-10-28 Thread Wolfgang Bumiller
Signed-off-by: Wolfgang Bumiller 
---
 src/PVE/JSONSchema.pm | 24 
 1 file changed, 24 insertions(+)

diff --git a/src/PVE/JSONSchema.pm b/src/PVE/JSONSchema.pm
index db38d44..3712872 100644
--- a/src/PVE/JSONSchema.pm
+++ b/src/PVE/JSONSchema.pm
@@ -530,6 +530,30 @@ PVE::JSONSchema::register_standard_option('pve-startup-order', {
 typetext => '[[order=]\d+] [,up=\d+] [,down=\d+] ',
 });
 
+register_format('pve-tfa-secret', \&pve_verify_tfa_secret);
+sub pve_verify_tfa_secret {
+my ($key, $noerr) = @_;
+
+# The old format used 16 base32 chars or 40 hex digits. Since they have a common subset it's
+# hard to distinguish them without the our previous length constraints, so add a 'v2' of the
+# format to support arbitrary lengths properly:
+if ($key =~ /^v2-0x[0-9a-fA-F]{16,128}$/ || # hex
+    $key =~ /^v2-[A-Z2-7=]{16,128}$/ || # base32
+    $key =~ /^(?:[A-Z2-7=]{16}|[A-Fa-f0-9]{40})$/) # and the old pattern copy&pasted
+{
+   return $key;
+}
+
+return undef if $noerr;
+
+die "unable to decode TFA secret\n";
+}
+
+register_standard_option('pve-tfa-secret', {
+description => "A TFA secret, base32 encoded or hexadecimal.",
+type => 'string', format => 'pve-tfa-secret',
+});
+
 sub check_format {
 my ($format, $value, $path) = @_;
 
-- 
2.20.1




[pve-devel] [PATCH qemu-server v2 1/3] Remove vm_destroy

2019-10-28 Thread Dominic Jäger
This function was used in only one place, into which we inlined its
functionality. Removing it avoids confusion between vm_destroy and destroy_vm.

The whole $importfn is executed in a lock_config_full.
As a consequence, for the inlined code:
1. lock_config is redundant
2. it is not possible that the VM has been started (check_running) in the
meanwhile
Additionally, it is not possible that the "lock" property has been written into
the VM's config file (check_lock) in the meanwhile

Add warning after eval so that it does not go unnoticed if it ever comes into
action.

Signed-off-by: Dominic Jäger 
---
v1->v2:
- Adapt commit message
- Keep $skiplock for readability
- Squash 3/7 "Remove redundant locks" into here
- Squash 5/7 "Remove useless eval" into here:
 Actually the eval is not removed anymore but I added a warning instead

 PVE/CLI/qm.pm |  5 +++--
 PVE/QemuServer.pm | 15 ---
 2 files changed, 3 insertions(+), 17 deletions(-)

diff --git a/PVE/CLI/qm.pm b/PVE/CLI/qm.pm
index ea74ad5..acafdc0 100755
--- a/PVE/CLI/qm.pm
+++ b/PVE/CLI/qm.pm
@@ -645,7 +645,6 @@ __PACKAGE__->register_method ({
 
# reload after disks entries have been created
$conf = PVE::QemuConfig->load_config($vmid);
-   PVE::QemuConfig->check_lock($conf);
my $firstdisk = PVE::QemuServer::resolve_first_disk($conf);
$conf->{bootdisk} = $firstdisk if $firstdisk;
PVE::QemuConfig->write_config($vmid, $conf);
@@ -654,7 +653,9 @@ __PACKAGE__->register_method ({
my $err = $@;
if ($err) {
my $skiplock = 1;
-   eval { PVE::QemuServer::vm_destroy($storecfg, $vmid, $skiplock); };
+   # eval for additional safety in error path
+   eval { PVE::QemuServer::destroy_vm($storecfg, $vmid, undef, $skiplock) };
+   warn "Could not destroy VM $vmid: $@" if "$@";
die "import failed - $err";
}
};
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index b635760..af0e15a 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -5943,21 +5943,6 @@ sub vm_sendkey {
 });
 }
 
-sub vm_destroy {
-my ($storecfg, $vmid, $skiplock) = @_;
-
-PVE::QemuConfig->lock_config($vmid, sub {
-
-   my $conf = PVE::QemuConfig->load_config($vmid);
-
-   if (!check_running($vmid)) {
-   destroy_vm($storecfg, $vmid, undef, $skiplock);
-   } else {
-   die "VM $vmid is running - destroy failed\n";
-   }
-});
-}
-
 # vzdump restore implementaion
 
 sub tar_archive_read_firstfile {
-- 
2.20.1



[pve-devel] [PATCH qemu-server v2 3/3] Import OVF: Lock config with "lock" property

2019-10-28 Thread Dominic Jäger
Previously a VMID conflict was possible when creating a VM on another node
between locking the config with lock_config_full and writing to it for the
first time with write_config.

Using create_and_lock_config eliminates this possibility. This means that now
the "lock" property is set in the config instead of using flock only.

$param was empty when it was assigned the three values "name", "memory" and
"cores" before being assigned to $conf later on. Assigning those values
directly to $conf avoids confusion about what the two variables contain.
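
A sketch of the race being closed (simplified from the code below):

    # before: only a node-local flock plus an in-memory check guarded the VMID,
    # so a second node could create the same $vmid before the first write
    PVE::QemuConfig->lock_config_full($vmid, 1, sub {
        PVE::Cluster::check_vmid_unused($vmid);
        # ... disks are imported here, which can take a while ...
        PVE::QemuConfig->write_config($vmid, $conf);  # first cluster-wide write
    });

    # after: the config file is created cluster-wide right away, with the
    # "lock" property set to 'create', so the VMID is reserved from the start
    PVE::QemuConfig->create_and_lock_config($vmid, 0);
    # ... import ...
    PVE::QemuConfig->remove_lock($vmid, 'create');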

Signed-off-by: Dominic Jäger 
---
v1->v2:
- Add note about $param in commit message
- Improve commit message, especially replacing "parameter lock" with "lock
  config"
- Remove unnecessary semicolon in one-liner
- Adapted error message
- Use return early pattern

 PVE/CLI/qm.pm | 66 +--
 1 file changed, 32 insertions(+), 34 deletions(-)

diff --git a/PVE/CLI/qm.pm b/PVE/CLI/qm.pm
index 3bf5f97..a441ac1 100755
--- a/PVE/CLI/qm.pm
+++ b/PVE/CLI/qm.pm
@@ -621,47 +621,45 @@ __PACKAGE__->register_method ({
return;
}
 
-   $param->{name} = $parsed->{qm}->{name} if defined($parsed->{qm}->{name});
-   $param->{memory} = $parsed->{qm}->{memory} if defined($parsed->{qm}->{memory});
-   $param->{cores} = $parsed->{qm}->{cores} if defined($parsed->{qm}->{cores});
+   eval { PVE::QemuConfig->create_and_lock_config($vmid, 0) };
+   die "Reserving empty config for OVF import failed: $@" if $@;
 
-   my $importfn = sub {
+   my $conf = PVE::QemuConfig->load_config($vmid);
+   die "Internal error: Expected 'create' lock in config of VM $vmid!"
+   if !PVE::QemuConfig->has_lock($conf, "create");
 
-   PVE::Cluster::check_vmid_unused($vmid);
+   $conf->{name} = $parsed->{qm}->{name} if defined($parsed->{qm}->{name});
+   $conf->{memory} = $parsed->{qm}->{memory} if defined($parsed->{qm}->{memory});
+   $conf->{cores} = $parsed->{qm}->{cores} if defined($parsed->{qm}->{cores});
 
-   my $conf = $param;
-
-   eval {
-   # order matters, as do_import() will load_config() internally
-   $conf->{vmgenid} = PVE::QemuServer::generate_uuid();
-   $conf->{smbios1} = PVE::QemuServer::generate_smbios1_uuid();
-   PVE::QemuConfig->write_config($vmid, $conf);
-
-   foreach my $disk (@{ $parsed->{disks} }) {
-   my ($file, $drive) = ($disk->{backing_file}, $disk->{disk_address});
-   PVE::QemuServer::ImportDisk::do_import($file, $vmid, $storeid,
-   0, { drive_name => $drive, format => $format });
-   }
-
-   # reload after disks entries have been created
-   $conf = PVE::QemuConfig->load_config($vmid);
-   my $firstdisk = PVE::QemuServer::resolve_first_disk($conf);
-   $conf->{bootdisk} = $firstdisk if $firstdisk;
-   PVE::QemuConfig->write_config($vmid, $conf);
-   };
+   eval {
+   # order matters, as do_import() will load_config() internally
+   $conf->{vmgenid} = PVE::QemuServer::generate_uuid();
+   $conf->{smbios1} = PVE::QemuServer::generate_smbios1_uuid();
+   PVE::QemuConfig->write_config($vmid, $conf);
 
-   my $err = $@;
-   if ($err) {
-   my $skiplock = 1;
-   # eval for additional safety in error path
-   eval { PVE::QemuServer::destroy_vm($storecfg, $vmid, undef, $skiplock) };
-   warn "Could not destroy VM $vmid: $@" if "$@";
-   die "import failed - $err";
+   foreach my $disk (@{ $parsed->{disks} }) {
+   my ($file, $drive) = ($disk->{backing_file}, $disk->{disk_address});
+   PVE::QemuServer::ImportDisk::do_import($file, $vmid, $storeid,
+   1, { drive_name => $drive, format => $format });
}
+
+   # reload after disks entries have been created
+   $conf = PVE::QemuConfig->load_config($vmid);
+   my $firstdisk = PVE::QemuServer::resolve_first_disk($conf);
+   $conf->{bootdisk} = $firstdisk if $firstdisk;
+   PVE::QemuConfig->write_config($vmid, $conf);
};
 
-   my $wait_for_lock = 1;
-   PVE::QemuConfig->lock_config_full($vmid, $wait_for_lock, $importfn);
+   my $err = $@;
+   if ($err) {
+   my $skiplock = 1;
+   # eval for additional safety in error path
+   eval { PVE::QemuServer::destroy_vm($storecfg, $vmid, undef, $skiplock) };
+   warn "Could not destroy VM $vmid: $@" if "$@";
+   die "import failed - $err";
+   }
+   PVE::QemuConfig->remove_lock ($vmid, "create");
 
return undef;
 
-- 
2.20.1

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel

[pve-devel] [PATCH qemu-server v2 0/3] Improve locks and functions for imports

2019-10-28 Thread Dominic Jäger
This series cleans up some redundant locks and functions and sets more
appropriate locks instead when importing .ovf and disks.

Patch 1/7 of the old series has already been applied.
Dropped 4/7 "Remove variable from lock", as we plan to apply the patch with
create_and_lock_config, which makes it obsolete.

Dominic Jäger (3):
  Remove vm_destroy
  Add skiplock to do_import
  Import OVF: Lock config with "lock" property

 PVE/CLI/qm.pm| 67 ++--
 PVE/QemuServer.pm| 15 
 PVE/QemuServer/ImportDisk.pm |  6 ++--
 3 files changed, 37 insertions(+), 51 deletions(-)

-- 
2.20.1

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH qemu-server v2 2/3] Add skiplock to do_import

2019-10-28 Thread Dominic Jäger
Functions like qm importovf can now set the "lock" property in a config file
before calling do_import.
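
A short sketch of the intended call pattern (arguments as in the diff below):

    # a caller that has already set the "lock" property (e.g. 'create')
    # skips the config lock check:
    PVE::QemuServer::ImportDisk::do_import($file, $vmid, $storeid,
        1, { drive_name => $drive, format => $format });

    # plain 'qm importdisk' keeps the old behavior and still honors the lock:
    PVE::QemuServer::ImportDisk::do_import($source, $vmid, $storeid,
        0, { format => $format });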

Signed-off-by: Dominic Jäger 
---
v1->v2: Edited only the commit message ("parameter lock" -> "lock property")

 PVE/CLI/qm.pm| 4 ++--
 PVE/QemuServer/ImportDisk.pm | 6 --
 2 files changed, 6 insertions(+), 4 deletions(-)

diff --git a/PVE/CLI/qm.pm b/PVE/CLI/qm.pm
index acafdc0..3bf5f97 100755
--- a/PVE/CLI/qm.pm
+++ b/PVE/CLI/qm.pm
@@ -488,7 +488,7 @@ __PACKAGE__->register_method ({
die "storage $storeid does not support vm images\n"
if !$target_storage_config->{content}->{images};
 
-   PVE::QemuServer::ImportDisk::do_import($source, $vmid, $storeid, { format => $format });
+   PVE::QemuServer::ImportDisk::do_import($source, $vmid, $storeid, 0, { format => $format });
 
return undef;
 }});
@@ -640,7 +640,7 @@ __PACKAGE__->register_method ({
foreach my $disk (@{ $parsed->{disks} }) {
my ($file, $drive) = ($disk->{backing_file}, $disk->{disk_address});
PVE::QemuServer::ImportDisk::do_import($file, $vmid, $storeid,
-   { drive_name => $drive, format => $format });
+   0, { drive_name => $drive, format => $format });
}
 
# reload after disks entries have been created
diff --git a/PVE/QemuServer/ImportDisk.pm b/PVE/QemuServer/ImportDisk.pm
index 5d391e6..9cae461 100755
--- a/PVE/QemuServer/ImportDisk.pm
+++ b/PVE/QemuServer/ImportDisk.pm
@@ -12,7 +12,7 @@ use PVE::Tools qw(run_command extract_param);
 # $optional->{drive_name} may be used to specify ide0, scsi1, etc ...
 # $optional->{format} may be used to specify qcow2, raw, etc ...
 sub do_import {
-my ($src_path, $vmid, $storage_id, $optional) = @_;
+my ($src_path, $vmid, $storage_id, $skiplock, $optional) = @_;
 
 my $drive_name = extract_param($optional, 'drive_name');
 my $format = extract_param($optional, 'format');
@@ -41,7 +41,9 @@ sub do_import {
 
 my $create_drive = sub {
my $vm_conf = PVE::QemuConfig->load_config($vmid);
-   PVE::QemuConfig->check_lock($vm_conf);
+   if (!$skiplock) {
+   PVE::QemuConfig->check_lock($vm_conf);
+   }
 
if ($drive_name) {
# should never happen as setting $drive_name is not exposed to public interface
-- 
2.20.1

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH v2 ha-manager 08/11] refactor: check_running was moved to PVE::QemuConfig

2019-10-28 Thread Stefan Reiter
Signed-off-by: Stefan Reiter 
---
 src/PVE/HA/Resources/PVEVM.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/PVE/HA/Resources/PVEVM.pm b/src/PVE/HA/Resources/PVEVM.pm
index 0a37cf6..3a4c07a 100644
--- a/src/PVE/HA/Resources/PVEVM.pm
+++ b/src/PVE/HA/Resources/PVEVM.pm
@@ -123,7 +123,7 @@ sub check_running {
 
 my $nodename = $haenv->nodename();
 
-if (PVE::QemuServer::check_running($vmid, 1, $nodename)) {
+if (PVE::QemuConfig::check_running($vmid, 1, $nodename)) {
# do not count VMs which are suspended for a backup job as running
my $conf = PVE::QemuConfig->load_config($vmid, $nodename);
if (defined($conf->{lock}) && $conf->{lock} eq 'backup') {
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH v2 common 01/11] Make get_host_arch return raw uname entry

2019-10-28 Thread Stefan Reiter
The current version had only one user in LXC, so move the LXC-specific
code there to reuse this in QemuServer.

Also cache, since the host's architecture can't change during runtime.

Signed-off-by: Stefan Reiter 
---
 src/PVE/Tools.pm | 17 +
 1 file changed, 5 insertions(+), 12 deletions(-)

diff --git a/src/PVE/Tools.pm b/src/PVE/Tools.pm
index 550da09..c9d37ec 100644
--- a/src/PVE/Tools.pm
+++ b/src/PVE/Tools.pm
@@ -47,6 +47,7 @@ safe_print
 trim
 extract_param
 file_copy
+get_host_arch
 O_PATH
 O_TMPFILE
 );
@@ -1630,18 +1631,10 @@ sub readline_nointr {
 return $line;
 }
 
-sub get_host_arch {
-
-my @uname = POSIX::uname();
-my $machine = $uname[4];
-
-if ($machine eq 'x86_64') {
-   return 'amd64';
-} elsif ($machine eq 'aarch64') {
-   return 'arm64';
-} else {
-   die "unsupported host architecture '$machine'\n";
-}
+my $host_arch;
+sub get_host_arch() {
+$host_arch = (POSIX::uname())[4] if !$host_arch;
+return $host_arch;
 }
 
 # Devices are: [ (12 bits minor) (12 bits major) (8 bits minor) ]
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH v2 ha-manager 09/11] refactor: vm_qmp_command was moved to PVE::QMP

2019-10-28 Thread Stefan Reiter
Signed-off-by: Stefan Reiter 
---
 src/PVE/HA/Resources/PVEVM.pm | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/src/PVE/HA/Resources/PVEVM.pm b/src/PVE/HA/Resources/PVEVM.pm
index 3a4c07a..84c23be 100644
--- a/src/PVE/HA/Resources/PVEVM.pm
+++ b/src/PVE/HA/Resources/PVEVM.pm
@@ -11,6 +11,8 @@ BEGIN {
import  PVE::QemuConfig;
require PVE::QemuServer;
import  PVE::QemuServer;
+   require PVE::QMP;
+   import  PVE::QMP;
require PVE::API2::Qemu;
import  PVE::API2::Qemu;
 }
@@ -128,7 +130,7 @@ sub check_running {
my $conf = PVE::QemuConfig->load_config($vmid, $nodename);
if (defined($conf->{lock}) && $conf->{lock} eq 'backup') {
my $qmpstatus = eval {
-   PVE::QemuServer::vm_qmp_command($vmid, { execute => 'query-status' })
+   PVE::QMP::vm_qmp_command($vmid, { execute => 'query-status' })
};
warn "$@\n" if $@;
 
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH v2 container 02/11] Move LXC-specific architecture translation here

2019-10-28 Thread Stefan Reiter
This is the only place we need to do this translation; moving it here
allows reuse of the PVE::Tools function.

Signed-off-by: Stefan Reiter 
---
 src/PVE/LXC/Setup.pm | 9 +
 1 file changed, 9 insertions(+)

diff --git a/src/PVE/LXC/Setup.pm b/src/PVE/LXC/Setup.pm
index 845aced..ca6fc4f 100644
--- a/src/PVE/LXC/Setup.pm
+++ b/src/PVE/LXC/Setup.pm
@@ -293,6 +293,15 @@ sub pre_start_hook {
 
 my $host_arch = PVE::Tools::get_host_arch();
 
+# containers use different architecture names
+if ($host_arch eq 'x86_64') {
+   $host_arch = 'amd64';
+} elsif ($host_arch eq 'aarch64') {
+   $host_arch = 'arm64';
+} else {
+   die "unsupported host architecture '$host_arch'\n";
+}
+
 my $container_arch = $self->{conf}->{arch};
 
$container_arch = 'amd64' if $container_arch eq 'i386'; # always use 64 bit version
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH v2 qemu-server 04/11] refactor: create QemuSchema and move file/dir code

2019-10-28 Thread Stefan Reiter
Also merge the 'mkdir's from QemuServer and QemuConfig to reduce
duplication (both modules depend on QemuSchema anyway).

nodename() is still called in multiple modules, but since it's cached by
the INotify module it doesn't really matter.

Signed-off-by: Stefan Reiter 
---

QemuSchema is pretty small right now, but it could hold much more of the static
setup code from QemuServer.pm (JSONSchema formats and the like). This patch only
moves the stuff necessary for the rest of the series to avoid cyclic dependencies.

I want to refactor more into this in the future, but for now I'd like to wait
for my CPU series, since that also touches some schema stuff.

 PVE/CLI/qm.pm |  3 ++-
 PVE/Makefile  |  3 ++-
 PVE/QMPClient.pm  |  5 +++--
 PVE/QemuConfig.pm | 10 ++
 PVE/QemuSchema.pm | 35 +++
 PVE/QemuServer.pm | 41 -
 6 files changed, 52 insertions(+), 45 deletions(-)
 create mode 100644 PVE/QemuSchema.pm

diff --git a/PVE/CLI/qm.pm b/PVE/CLI/qm.pm
index ea74ad5..44beac9 100755
--- a/PVE/CLI/qm.pm
+++ b/PVE/CLI/qm.pm
@@ -21,6 +21,7 @@ use PVE::RPCEnvironment;
 use PVE::Exception qw(raise_param_exc);
 use PVE::Network;
 use PVE::GuestHelpers;
+use PVE::QemuSchema;
 use PVE::QemuServer;
 use PVE::QemuServer::ImportDisk;
 use PVE::QemuServer::OVF;
@@ -209,7 +210,7 @@ __PACKAGE__->register_method ({
my ($param) = @_;
 
my $vmid = $param->{vmid};
-   my $vnc_socket = PVE::QemuServer::vnc_socket($vmid);
+   my $vnc_socket = PVE::QemuSchema::vnc_socket($vmid);
 
if (my $ticket = $ENV{LC_PVE_TICKET}) {  # NOTE: ssh on debian only pass LC_* variables
PVE::QemuServer::vm_mon_cmd($vmid, "change", device => 'vnc', target => "unix:$vnc_socket,password");
diff --git a/PVE/Makefile b/PVE/Makefile
index dc17368..5ec715e 100644
--- a/PVE/Makefile
+++ b/PVE/Makefile
@@ -2,7 +2,8 @@ PERLSOURCE =\
QemuServer.pm   \
QemuMigrate.pm  \
QMPClient.pm\
-   QemuConfig.pm
+   QemuConfig.pm   \
+   QemuSchema.pm   \
 
 .PHONY: install
 install:
diff --git a/PVE/QMPClient.pm b/PVE/QMPClient.pm
index 570dba2..188c6d7 100644
--- a/PVE/QMPClient.pm
+++ b/PVE/QMPClient.pm
@@ -2,6 +2,7 @@ package PVE::QMPClient;
 
 use strict;
 use warnings;
+use PVE::QemuSchema;
 use PVE::QemuServer;
 use IO::Multiplex;
 use POSIX qw(EINTR EAGAIN);
@@ -58,7 +59,7 @@ my $push_cmd_to_queue = sub {
 
 my $qga = ($execute =~ /^guest\-+/) ? 1 : 0;
 
-my $sname = PVE::QemuServer::qmp_socket($vmid, $qga);
+my $sname = PVE::QemuSchema::qmp_socket($vmid, $qga);
 
$self->{queue_info}->{$sname} = { qga => $qga, vmid => $vmid, sname => $sname, cmds => [] }
 if !$self->{queue_info}->{$sname};
@@ -186,7 +187,7 @@ my $open_connection = sub {
 my $vmid = $queue_info->{vmid};
 my $qga = $queue_info->{qga};
 
-my $sname = PVE::QemuServer::qmp_socket($vmid, $qga);
+my $sname = PVE::QemuSchema::qmp_socket($vmid, $qga);
 
 $timeout = 1 if !$timeout;
 
diff --git a/PVE/QemuConfig.pm b/PVE/QemuConfig.pm
index e9796a3..b63e57c 100644
--- a/PVE/QemuConfig.pm
+++ b/PVE/QemuConfig.pm
@@ -5,6 +5,7 @@ use warnings;
 
 use PVE::AbstractConfig;
 use PVE::INotify;
+use PVE::QemuSchema;
 use PVE::QemuServer;
 use PVE::Storage;
 use PVE::Tools;
@@ -13,13 +14,6 @@ use base qw(PVE::AbstractConfig);
 
 my $nodename = PVE::INotify::nodename();
 
-mkdir "/etc/pve/nodes/$nodename";
-my $confdir = "/etc/pve/nodes/$nodename/qemu-server";
-mkdir $confdir;
-
-my $lock_dir = "/var/lock/qemu-server";
-mkdir $lock_dir;
-
 my $MAX_UNUSED_DISKS = 256;
 
 # BEGIN implemented abstract methods from PVE::AbstractConfig
@@ -37,7 +31,7 @@ sub __config_max_unused_disks {
 sub config_file_lock {
 my ($class, $vmid) = @_;
 
-return "$lock_dir/lock-$vmid.conf";
+return "$PVE::QemuSchema::lock_dir/lock-$vmid.conf";
 }
 
 sub cfs_config_path {
diff --git a/PVE/QemuSchema.pm b/PVE/QemuSchema.pm
new file mode 100644
index 0000000..446177d
--- /dev/null
+++ b/PVE/QemuSchema.pm
@@ -0,0 +1,35 @@
+package PVE::QemuSchema;
+
+use strict;
+use warnings;
+
+use PVE::INotify;
+
+my $nodename = PVE::INotify::nodename();
+mkdir "/etc/pve/nodes/$nodename";
+my $confdir = "/etc/pve/nodes/$nodename/qemu-server";
+mkdir $confdir;
+
+our $var_run_tmpdir = "/var/run/qemu-server";
+mkdir $var_run_tmpdir;
+
+our $lock_dir = "/var/lock/qemu-server";
+mkdir $lock_dir;
+
+sub qmp_socket {
+my ($vmid, $qga) = @_;
+my $sockettype = $qga ? 'qga' : 'qmp';
+return "${var_run_tmpdir}/$vmid.$sockettype";
+}
+
+sub pidfile_name {
+my ($vmid) = @_;
+return "${var_run_tmpdir}/$vmid.pid";
+}
+
+sub vnc_socket {
+my ($vmid) = @_;
+return "${var_run_tmpdir}/$vmid.vnc";
+}
+
+1;
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 9af690a..817394e 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -40,6 +40,7 @@ use PVE::Tools qw(run_com

[pve-devel] [PATCH v2 qemu-server 07/11] refactor: extract QEMU machine related helpers to package

2019-10-28 Thread Stefan Reiter
...PVE::QemuServer::Machine.

qemu_machine_feature_enabled is exported since it has a *lot* of users
in PVE::QemuServer and a long enough name as it is.
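
A minimal usage sketch (signature as in the pre-existing helper, which takes a
machine type, the running QEMU version string, and a major/minor version pair):

    use PVE::QemuServer::Machine qw(qemu_machine_feature_enabled);

    # true if the (possibly versioned) machine type, or the running QEMU,
    # is at least version 2.12
    if (qemu_machine_feature_enabled($machine_type, $kvmver, 2, 12)) {
        # ... enable the feature ...
    }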

Signed-off-by: Stefan Reiter 
---

Not sure if PVE::QemuMachine wouldn't be a better package name. I'm fine with
both (or other suggestions), if someone has preferences.

 PVE/QemuConfig.pm |   3 +-
 PVE/QemuMigrate.pm|   3 +-
 PVE/QemuServer.pm | 101 +++---
 PVE/QemuServer/Machine.pm | 100 +
 PVE/QemuServer/Makefile   |   1 +
 PVE/VZDump/QemuServer.pm  |   3 +-
 6 files changed, 115 insertions(+), 96 deletions(-)
 create mode 100644 PVE/QemuServer/Machine.pm

diff --git a/PVE/QemuConfig.pm b/PVE/QemuConfig.pm
index 06ace83..e7af9ad 100644
--- a/PVE/QemuConfig.pm
+++ b/PVE/QemuConfig.pm
@@ -11,6 +11,7 @@ use PVE::INotify;
 use PVE::ProcFSTools;
 use PVE::QemuSchema;
 use PVE::QemuServer;
+use PVE::QemuServer::Machine;
 use PVE::QMP qw(vm_mon_cmd vm_mon_cmd_nocheck);
 use PVE::Storage;
 use PVE::Tools;
@@ -150,7 +151,7 @@ sub __snapshot_save_vmstate {
 $name .= ".raw" if $scfg->{path}; # add filename extension for file base 
storage
 
my $statefile = PVE::Storage::vdisk_alloc($storecfg, $target, $vmid, 'raw', $name, $size*1024);
-my $runningmachine = PVE::QemuServer::get_current_qemu_machine($vmid);
+my $runningmachine = PVE::QemuServer::Machine::get_current_qemu_machine($vmid);
 
 if ($suspend) {
$conf->{vmstate} = $statefile;
diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index aea7eac..9ac78f8 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -12,6 +12,7 @@ use PVE::Cluster;
 use PVE::Storage;
 use PVE::QemuConfig;
 use PVE::QemuServer;
+use PVE::QemuServer::Machine;
 use PVE::QMP qw(vm_mon_cmd vm_mon_cmd_nocheck);
 use Time::HiRes qw( usleep );
 use PVE::RPCEnvironment;
@@ -217,7 +218,7 @@ sub prepare {
die "can't migrate running VM without --online\n" if !$online;
$running = $pid;
 
-   $self->{forcemachine} = PVE::QemuServer::qemu_machine_pxe($vmid, $conf);
+   $self->{forcemachine} = PVE::QemuServer::Machine::qemu_machine_pxe($vmid, $conf);
 
 }
 my $loc_res = PVE::QemuServer::check_local_resources($conf, 1);
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index ed137fc..20a6380 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -42,6 +42,7 @@ use PVE::QMPClient;
 use PVE::QemuConfig;
 use PVE::QemuSchema;
 use PVE::QemuServer::Cloudinit;
+use PVE::QemuServer::Machine qw(qemu_machine_feature_enabled);
 use PVE::QemuServer::Memory;
use PVE::QemuServer::PCI qw(print_pci_addr print_pcie_addr print_pcie_root_port);
 use PVE::QemuServer::USB qw(parse_usb_device);
@@ -1827,20 +1828,14 @@ sub path_is_scsi {
 return $res;
 }
 
-sub machine_type_is_q35 {
-my ($conf) = @_;
-
-return $conf->{machine} && ($conf->{machine} =~ m/q35/) ? 1 : 0;
-}
-
 sub print_tabletdevice_full {
 my ($conf, $arch) = @_;
 
-my $q35 = machine_type_is_q35($conf);
+my $q35 = PVE::QemuServer::Machine::machine_type_is_q35($conf);
 
 # we use uhci for old VMs because tablet driver was buggy in older qemu
 my $usbbus;
-if (machine_type_is_q35($conf) || $arch eq 'aarch64') {
+if (PVE::QemuServer::Machine::machine_type_is_q35($conf) || $arch eq 'aarch64') {
$usbbus = 'ehci';
 } else {
$usbbus = 'uhci';
@@ -2189,7 +2184,7 @@ sub print_vga_device {
$memory = ",ram_size=67108864,vram_size=33554432";
 }
 
-my $q35 = machine_type_is_q35($conf);
+my $q35 = PVE::QemuServer::Machine::machine_type_is_q35($conf);
 my $vgaid = "vga" . ($id // '');
 my $pciaddr;
 
@@ -3478,7 +3473,7 @@ sub config_to_command {
 
 die "detected old qemu-kvm binary ($kvmver)\n" if $vernum < 15000;
 
-my $q35 = machine_type_is_q35($conf);
+my $q35 = PVE::QemuServer::Machine::machine_type_is_q35($conf);
my $hotplug_features = parse_hotplug_features(defined($conf->{hotplug}) ? $conf->{hotplug} : '1');
my $use_old_bios_files = undef;
($use_old_bios_files, $machine_type) = qemu_use_old_bios_files($machine_type);
@@ -4112,7 +4107,7 @@ sub vm_devices_list {
 sub vm_deviceplug {
my ($storecfg, $conf, $vmid, $deviceid, $device, $arch, $machine_type) = @_;
 
-my $q35 = machine_type_is_q35($conf);
+my $q35 = PVE::QemuServer::Machine::machine_type_is_q35($conf);
 
 my $devices_list = vm_devices_list($vmid);
 return 1 if defined($devices_list->{$deviceid});
@@ -4188,7 +4183,7 @@ sub vm_deviceplug {
 
-   return undef if !qemu_netdevadd($vmid, $conf, $arch, $device, $deviceid);
 
-   my $machine_type = PVE::QemuServer::qemu_machine_pxe($vmid, $conf);
+   my $machine_type = PVE::QemuServer::Machine::qemu_machine_pxe($vmid, $conf);
my $use_old_bios_files = undef;
($use_old_bios_files, $machine_type) = qemu_use_old_bios_files($machine_type);
 
@@ -4502,7 +4497,7 @@ sub qemu_usb_hotplug {
 

[pve-devel] [PATCH v2 qemu-server 05/11] refactor: Move check_running to QemuConfig

2019-10-28 Thread Stefan Reiter
Also move check_cmdline, since check_running is its only user. Changes
all uses of check_running in QemuServer, including mocking in snapshot
tests.

Signed-off-by: Stefan Reiter 
---
 PVE/API2/Qemu.pm | 32 +++---
 PVE/CLI/qm.pm| 13 +++---
 PVE/QemuConfig.pm| 65 ++-
 PVE/QemuMigrate.pm   |  3 +-
 PVE/QemuServer.pm| 85 ++--
 PVE/QemuServer/Agent.pm  |  3 +-
 PVE/QemuServer/ImportDisk.pm |  3 +-
 PVE/QemuServer/Memory.pm |  3 +-
 PVE/VZDump/QemuServer.pm |  7 +--
 test/snapshot-test.pm|  7 +--
 10 files changed, 116 insertions(+), 105 deletions(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index b2c0b0d..9912e4d 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -556,7 +556,7 @@ __PACKAGE__->register_method({
 
PVE::QemuConfig->check_protection($conf, $emsg);
 
-   die "$emsg vm is running\n" if 
PVE::QemuServer::check_running($vmid);
+   die "$emsg vm is running\n" if 
PVE::QemuConfig::check_running($vmid);
 
my $realcmd = sub {
PVE::QemuServer::restore_archive($archive, $vmid, $authuser, {
@@ -1220,7 +1220,7 @@ my $update_vm_api  = sub {
 
return if !scalar(keys %{$conf->{pending}});
 
-   my $running = PVE::QemuServer::check_running($vmid);
+   my $running = PVE::QemuConfig::check_running($vmid);
 
# apply pending changes
 
@@ -1439,7 +1439,7 @@ __PACKAGE__->register_method({
 
# early tests (repeat after locking)
die "VM $vmid is running - destroy failed\n"
-   if PVE::QemuServer::check_running($vmid);
+   if PVE::QemuConfig::check_running($vmid);
 
my $realcmd = sub {
my $upid = shift;
@@ -1447,7 +1447,7 @@ __PACKAGE__->register_method({
syslog('info', "destroy VM $vmid: $upid\n");
PVE::QemuConfig->lock_config($vmid, sub {
die "VM $vmid is running - destroy failed\n"
-   if (PVE::QemuServer::check_running($vmid));
+   if (PVE::QemuConfig::check_running($vmid));
 
PVE::QemuServer::destroy_vm($storecfg, $vmid, 1, $skiplock);
 
@@ -2179,7 +2179,7 @@ __PACKAGE__->register_method({
raise_param_exc({ skiplock => "Only root may use this option." })
if $skiplock && $authuser ne 'root@pam';
 
-   die "VM $vmid not running\n" if !PVE::QemuServer::check_running($vmid);
+   die "VM $vmid not running\n" if !PVE::QemuConfig::check_running($vmid);
 
my $realcmd = sub {
my $upid = shift;
@@ -2349,7 +2349,7 @@ __PACKAGE__->register_method({
die "VM is paused - cannot shutdown\n";
}
 
-   die "VM $vmid not running\n" if !PVE::QemuServer::check_running($vmid);
+   die "VM $vmid not running\n" if !PVE::QemuConfig::check_running($vmid);
 
my $realcmd = sub {
my $upid = shift;
@@ -2413,7 +2413,7 @@ __PACKAGE__->register_method({
raise_param_exc({ skiplock => "Only root may use this option." })
if $skiplock && $authuser ne 'root@pam';
 
-   die "VM $vmid not running\n" if !PVE::QemuServer::check_running($vmid);
+   die "VM $vmid not running\n" if !PVE::QemuConfig::check_running($vmid);
 
die "Cannot suspend HA managed VM to disk\n"
if $todisk && PVE::HA::Config::vm_is_ha_managed($vmid);
@@ -2482,7 +2482,7 @@ __PACKAGE__->register_method({
};
 
die "VM $vmid not running\n"
-   if !$to_disk_suspended && !PVE::QemuServer::check_running($vmid, $nocheck);
+   if !$to_disk_suspended && !PVE::QemuConfig::check_running($vmid, $nocheck);
 
my $realcmd = sub {
my $upid = shift;
@@ -2592,7 +2592,7 @@ __PACKAGE__->register_method({
 
my $feature = extract_param($param, 'feature');
 
-   my $running = PVE::QemuServer::check_running($vmid);
+   my $running = PVE::QemuConfig::check_running($vmid);
 
my $conf = PVE::QemuConfig->load_config($vmid);
 
@@ -2739,7 +2739,7 @@ __PACKAGE__->register_method({
 
 PVE::Cluster::check_cfs_quorum();
 
-   my $running = PVE::QemuServer::check_running($vmid) || 0;
+   my $running = PVE::QemuConfig::check_running($vmid) || 0;
 
# exclusive lock if VM is running - else shared lock is enough;
my $shared_lock = $running ? 0 : 1;
@@ -2753,7 +2753,7 @@ __PACKAGE__->register_method({
 
PVE::QemuConfig->check_lock($conf);
 
-   my $verify_running = PVE::QemuServer::check_running($vmid) || 0;
+   my $verify_running = PVE::QemuConfig::check_running($vmid) || 0;
 
die "unexpected state change\n" if $verify_running != $running;
 
@@ -3059,7 +3059,7 @@ __PACKAGE__->register_method({
 
PVE::Cluster::log_msg('info', $authuser, "move disk VM $vmid: move --disk $disk --storage $storeid");
 
-   my $running = PVE::QemuServer::che

[pve-devel] [PATCH v2 00/11] Refactor QemuServer to avoid dependency cycles

2019-10-28 Thread Stefan Reiter
First 3 patches are independent refactorings around get_host_arch.

Rest of the series refactors QemuServer and creates three new packages:
* 'PVE::QemuSchema' for schema related code and common directory creation
* 'PVE::QMP' for higher-level QMP functions
* 'PVE::QemuServer::Machine' for QEMU machine-type related helpers

This refactoring came along because qemu_machine_feature_enabled needs to be
used in 'PVE::QemuServer::CPUConfig', a new package that will be introduced with
my custom CPU series [0]. This would currently require dependency cycles, but by
extracting the code in this series and splitting it up into multiple helper
modules, this can be avoided.

Care was taken not to introduce new dependency cycles, though this required
moving the 'check_running' function to QemuConfig.pm, where it doesn't *quite*
fit IMO, but I also didn't want to create a new module just for this one
function. Open for ideas ofc.
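
As a rough sketch, my reading of the intended layering after the series (from
the 'use' lines in the diffs; lower modules must not use higher ones):

    # PVE::QemuSchema           - paths/dirs and schema bits, no other qemu-server modules
    # PVE::QMPClient            - low-level QMP, uses QemuSchema
    # PVE::QMP                  - high-level QMP wrappers
    # PVE::QemuServer::Machine  - QEMU machine-type helpers
    # PVE::QemuConfig           - config handling, now also check_running
    # PVE::QemuServer           - top level, may use all of the above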

v2:
* Actually test changes correctly - sorry
* Fix a few package 'use's I missed moving to new packages
* Fix tests for pve-manager
* Fix missing '=' in pve-container

[0] https://pve.proxmox.com/pipermail/pve-devel/2019-October/039608.html

(@Thomas: I rebased the series just before sending to work with your cleanups)


common: Stefan Reiter (1):
  Make get_host_arch return raw uname entry

 src/PVE/Tools.pm | 17 +
 1 file changed, 5 insertions(+), 12 deletions(-)

container: Stefan Reiter (1):
  Move LXC-specific architecture translation here

 src/PVE/LXC/Setup.pm | 9 +
 1 file changed, 9 insertions(+)

qemu-server: Stefan Reiter (5):
  Use get_host_arch from PVE::Tools
  refactor: create QemuSchema and move file/dir code
  refactor: Move check_running to QemuConfig
  refactor: create PVE::QMP for high-level QMP access
  refactor: extract QEMU machine related helpers to package

 PVE/API2/Qemu.pm |  45 +++---
 PVE/API2/Qemu/Agent.pm   |   7 +-
 PVE/CLI/qm.pm|  27 ++--
 PVE/Makefile |   4 +-
 PVE/QMP.pm   |  72 +
 PVE/QMPClient.pm |   5 +-
 PVE/QemuConfig.pm|  93 +--
 PVE/QemuMigrate.pm   |  27 ++--
 PVE/QemuSchema.pm|  35 +
 PVE/QemuServer.pm| 295 ---
 PVE/QemuServer/Agent.pm  |   6 +-
 PVE/QemuServer/ImportDisk.pm |   3 +-
 PVE/QemuServer/Machine.pm| 100 
 PVE/QemuServer/Makefile  |   1 +
 PVE/QemuServer/Memory.pm |  12 +-
 PVE/VZDump/QemuServer.pm |  23 +--
 test/snapshot-test.pm|  21 ++-
 17 files changed, 421 insertions(+), 355 deletions(-)
 create mode 100644 PVE/QMP.pm
 create mode 100644 PVE/QemuSchema.pm
 create mode 100644 PVE/QemuServer/Machine.pm

ha-manager: Stefan Reiter (2):
  refactor: check_running was moved to PVE::QemuConfig
  refactor: vm_qmp_command was moved to PVE::QMP

 src/PVE/HA/Resources/PVEVM.pm | 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)

manager: Stefan Reiter (2):
  refactor: check_running was moved to QemuConfig
  refactor: vm_mon_cmd was moved to PVE::QMP

 PVE/API2/Nodes.pm  | 6 +++---
 PVE/Service/pvestatd.pm| 3 ++-
 test/ReplicationTestEnv.pm | 2 +-
 3 files changed, 6 insertions(+), 5 deletions(-)

-- 
2.20.1

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH v2 manager 10/11] refactor: check_running was moved to QemuConfig

2019-10-28 Thread Stefan Reiter
Signed-off-by: Stefan Reiter 
---
 PVE/API2/Nodes.pm  | 6 +++---
 test/ReplicationTestEnv.pm | 2 +-
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/PVE/API2/Nodes.pm b/PVE/API2/Nodes.pm
index 9e731e05..0f30a518 100644
--- a/PVE/API2/Nodes.pm
+++ b/PVE/API2/Nodes.pm
@@ -1729,7 +1729,7 @@ __PACKAGE__->register_method ({
} elsif ($d->{type} eq 'qemu') {
$typeText = 'VM';
$default_delay = 3; # to reduce load
-   return if PVE::QemuServer::check_running($vmid, 1);
+   return if PVE::QemuConfig::check_running($vmid, 1);
print STDERR "Starting VM $vmid\n";
$upid = PVE::API2::Qemu->vm_start({node => $nodename, vmid => $vmid });
} else {
@@ -1775,7 +1775,7 @@ my $create_stop_worker = sub {
$upid = PVE::API2::LXC::Status->vm_shutdown({node => $nodename, vmid => $vmid,
 timeout => $timeout, forceStop => 1 });
 } elsif ($type eq 'qemu') {
-   return if !PVE::QemuServer::check_running($vmid, 1);
+   return if !PVE::QemuConfig::check_running($vmid, 1);
my $timeout =  defined($down_timeout) ? int($down_timeout) : 60*3;
print STDERR "Stopping VM $vmid (timeout = $timeout seconds)\n";
$upid = PVE::API2::Qemu->vm_shutdown({node => $nodename, vmid => $vmid,
@@ -1894,7 +1894,7 @@ my $create_migrate_worker = sub {
$upid = PVE::API2::LXC->migrate_vm({node => $nodename, vmid => $vmid, target => $target,
restart => $online });
 } elsif ($type eq 'qemu') {
-   my $online = PVE::QemuServer::check_running($vmid, 1) ? 1 : 0;
+   my $online = PVE::QemuConfig::check_running($vmid, 1) ? 1 : 0;
print STDERR "Migrating VM $vmid\n";
$upid = PVE::API2::Qemu->migrate_vm({node => $nodename, vmid => $vmid, target => $target,
 online => $online });
diff --git a/test/ReplicationTestEnv.pm b/test/ReplicationTestEnv.pm
index fa106037..242e3842 100755
--- a/test/ReplicationTestEnv.pm
+++ b/test/ReplicationTestEnv.pm
@@ -249,7 +249,7 @@ sub setup {
lock => sub { $mocked_cfs_lock_file->('replication.cfg', undef, $_[0]); },
write => sub { $mocked_cfs_write_file->('replication.cfg', $_[0]); },
 );
-$pve_qemuserver_module->mock(check_running => sub { return 0; });
+$pve_qemuconfig_module->mock(check_running => sub { return 0; });
 $pve_qemuconfig_module->mock(load_config => $mocked_qemu_load_conf);
 
 $pve_lxc_config_module->mock(load_config => $mocked_lxc_load_conf);
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH v2 qemu-server 06/11] refactor: create PVE::QMP for high-level QMP access

2019-10-28 Thread Stefan Reiter
...in addition to PVE::QMPClient for low-level.

Also move all references (most with exports, the methods are used a lot
and have unique enough names IMO) and fix tests.

The references in __snapshot_create_vol_snapshots_hook (in QemuConfig) are an
exception, as using the exported functions breaks tests.
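
A minimal sketch of the resulting interface (calls as used in the diff below):

    use PVE::QMP qw(vm_mon_cmd vm_qmp_command);

    # high-level: monitor command name plus arguments
    vm_mon_cmd($vmid, 'set_password', protocol => 'spice', password => $ticket);

    # or with an explicit QMP structure and a timeout, as in the API handlers
    my $res = vm_qmp_command($vmid, { execute => 'query-status' }, 0);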

Signed-off-by: Stefan Reiter 
---
 PVE/API2/Qemu.pm | 13 
 PVE/API2/Qemu/Agent.pm   |  7 ++--
 PVE/CLI/qm.pm| 11 +++---
 PVE/Makefile |  1 +
 PVE/QMP.pm   | 72 
 PVE/QemuConfig.pm| 15 +
 PVE/QemuMigrate.pm   | 21 ++--
 PVE/QemuServer.pm| 66 
 PVE/QemuServer/Agent.pm  |  3 +-
 PVE/QemuServer/Memory.pm |  9 ++---
 PVE/VZDump/QemuServer.pm | 13 
 test/snapshot-test.pm| 18 +++---
 12 files changed, 142 insertions(+), 107 deletions(-)
 create mode 100644 PVE/QMP.pm

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 9912e4d..50a0592 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -21,6 +21,7 @@ use PVE::GuestHelpers;
 use PVE::QemuConfig;
 use PVE::QemuServer;
 use PVE::QemuMigrate;
+use PVE::QMP qw(vm_mon_cmd vm_qmp_command);
 use PVE::RPCEnvironment;
 use PVE::AccessControl;
 use PVE::INotify;
@@ -1835,8 +1836,8 @@ __PACKAGE__->register_method({
my ($ticket, undef, $remote_viewer_config) =
PVE::AccessControl::remote_viewer_config($authuser, $vmid, $node, $proxy, $title, $port);

-   PVE::QemuServer::vm_mon_cmd($vmid, "set_password", protocol => 'spice', password => $ticket);
-   PVE::QemuServer::vm_mon_cmd($vmid, "expire_password", protocol => 'spice', time => "+30");
+   vm_mon_cmd($vmid, "set_password", protocol => 'spice', password => $ticket);
+   vm_mon_cmd($vmid, "expire_password", protocol => 'spice', time => "+30");
 
return $remote_viewer_config;
 }});
@@ -2261,7 +2262,7 @@ __PACKAGE__->register_method({
# checking the qmp status here to get feedback to the gui/cli/api
# and the status query should not take too long
my $qmpstatus = eval {
-   PVE::QemuServer::vm_qmp_command($vmid, { execute => "query-status" }, 0);
+   vm_qmp_command($vmid, { execute => "query-status" }, 0);
};
my $err = $@ if $@;
 
@@ -2341,7 +2342,7 @@ __PACKAGE__->register_method({
my $vmid = extract_param($param, 'vmid');
 
my $qmpstatus = eval {
-   PVE::QemuServer::vm_qmp_command($vmid, { execute => "query-status" }, 0);
+   vm_qmp_command($vmid, { execute => "query-status" }, 0);
};
my $err = $@ if $@;
 
@@ -3093,7 +3094,7 @@ __PACKAGE__->register_method({
PVE::QemuConfig->write_config($vmid, $conf);
 
if ($running && PVE::QemuServer::parse_guest_agent($conf)->{fstrim_cloned_disks} && PVE::QemuServer::qga_check_running($vmid)) {
-   eval { PVE::QemuServer::vm_mon_cmd($vmid, "guest-fstrim"); };
+   eval { vm_mon_cmd($vmid, "guest-fstrim"); };
}
 
eval {
@@ -3449,7 +3450,7 @@ __PACKAGE__->register_method({
 
my $res = '';
eval {
-   $res = PVE::QemuServer::vm_human_monitor_command($vmid, $param->{command});
+   $res = PVE::QMP::vm_human_monitor_command($vmid, $param->{command});
};
$res = "ERROR: $@" if $@;
 
diff --git a/PVE/API2/Qemu/Agent.pm b/PVE/API2/Qemu/Agent.pm
index 839146c..da7111e 100644
--- a/PVE/API2/Qemu/Agent.pm
+++ b/PVE/API2/Qemu/Agent.pm
@@ -7,6 +7,7 @@ use PVE::RESTHandler;
 use PVE::JSONSchema qw(get_standard_option);
 use PVE::QemuServer;
 use PVE::QemuServer::Agent qw(agent_available agent_cmd check_agent_error);
+use PVE::QMP qw(vm_mon_cmd);
 use MIME::Base64 qw(encode_base64 decode_base64);
 use JSON;
 
@@ -190,7 +191,7 @@ sub register_command {
agent_available($vmid, $conf);
 
my $cmd = $param->{command} // $command;
-   my $res = PVE::QemuServer::vm_mon_cmd($vmid, "guest-$cmd");
+   my $res = vm_mon_cmd($vmid, "guest-$cmd");
 
return { result => $res };
}});
@@ -415,7 +416,7 @@ __PACKAGE__->register_method({
my $content = "";
 
while ($bytes_left > 0 && !$eof) {
-   my $read = PVE::QemuServer::vm_mon_cmd($vmid, "guest-file-read", handle => $qgafh, count => int($read_size));
+   my $read = vm_mon_cmd($vmid, "guest-file-read", handle => $qgafh, count => int($read_size));
check_agent_error($read, "can't read from file");
 
$content .= decode_base64($read->{'buf-b64'});
@@ -423,7 +424,7 @@ __PACKAGE__->register_method({
$eof = $read->{eof} // 0;
}
 
-   my $res = PVE::QemuServer::vm_mon_cmd($vmid, "guest-file-close", handle => $qgafh);
+   my $res = vm_mon_cmd($vmid, "guest-file-close", handle => $qgafh);
	check_agent_error($res, "can't close file");

[pve-devel] [PATCH v2 qemu-server 03/11] Use get_host_arch from PVE::Tools

2019-10-28 Thread Stefan Reiter
...now that it no longer does LXC-specific stuff. Removes a FIXME.

Signed-off-by: Stefan Reiter 
---
 PVE/QemuServer.pm | 8 +---
 1 file changed, 1 insertion(+), 7 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index b635760..9af690a 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -36,7 +36,7 @@ use PVE::SafeSyslog;
 use PVE::Storage;
 use PVE::SysFSTools;
 use PVE::Systemd;
-use PVE::Tools qw(run_command lock_file lock_file_full file_read_firstline dir_glob_foreach $IPV6RE);
+use PVE::Tools qw(run_command lock_file lock_file_full file_read_firstline dir_glob_foreach get_host_arch $IPV6RE);
 
 use PVE::QMPClient;
 use PVE::QemuConfig;
@@ -3417,12 +3417,6 @@ sub vga_conf_has_spice {
 return $1 || 1;
 }
 
-my $host_arch; # FIXME: fix PVE::Tools::get_host_arch
-sub get_host_arch() {
-$host_arch = (POSIX::uname())[4] if !$host_arch;
-return $host_arch;
-}
-
 sub is_native($) {
 my ($arch) = @_;
 return get_host_arch() eq $arch;
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH v2 manager 11/11] refactor: vm_mon_cmd was moved to PVE::QMP

2019-10-28 Thread Stefan Reiter
Signed-off-by: Stefan Reiter 
---
 PVE/Service/pvestatd.pm | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/PVE/Service/pvestatd.pm b/PVE/Service/pvestatd.pm
index bad1b73d..d8c86886 100755
--- a/PVE/Service/pvestatd.pm
+++ b/PVE/Service/pvestatd.pm
@@ -18,6 +18,7 @@ use PVE::Network;
 use PVE::Cluster qw(cfs_read_file);
 use PVE::Storage;
 use PVE::QemuServer;
+use PVE::QMP;
 use PVE::LXC;
 use PVE::LXC::Config;
 use PVE::RPCEnvironment;
@@ -180,7 +181,7 @@ sub auto_balloning {
if ($absdiff > 0) {
&$log("BALLOON $vmid to $res->{$vmid} ($diff)\n");
eval {
-   PVE::QemuServer::vm_mon_cmd($vmid, "balloon", 
+   PVE::QMP::vm_mon_cmd($vmid, "balloon", 
value => int($res->{$vmid}));
};
warn $@ if $@;
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH common 1/2] JSONSchema: add pve-tfa-secret option and format

2019-10-28 Thread Thomas Lamprecht
On 10/28/19 12:20 PM, Wolfgang Bumiller wrote:
> Signed-off-by: Wolfgang Bumiller 
> ---
>  src/PVE/JSONSchema.pm | 24 
>  1 file changed, 24 insertions(+)
> 
> diff --git a/src/PVE/JSONSchema.pm b/src/PVE/JSONSchema.pm
> index db38d44..3712872 100644
> --- a/src/PVE/JSONSchema.pm
> +++ b/src/PVE/JSONSchema.pm
> @@ -530,6 +530,30 @@ PVE::JSONSchema::register_standard_option('pve-startup-order', {
>  typetext => '[[order=]\d+] [,up=\d+] [,down=\d+] ',
>  });
>  
> +register_format('pve-tfa-secret', \&pve_verify_tfa_secret);
> +sub pve_verify_tfa_secret {
> +my ($key, $noerr) = @_;
> +
> +# The old format used 16 base32 chars or 40 hex digits. Since they have a common subset it's
> +# hard to distinguish them without the our previous length constraints, so add a 'v2' of the
> +# format to support arbitrary lengths properly:
> +if ($key =~ /^v2-0x[0-9a-fA-F]{16,128}$/ || # hex
> +$key =~ /^v2-[A-Z2-7=]{16,128}$/ || # base32
> +$key =~ /^(?:[A-Z2-7=]{16}|[A-Fa-f0-9]{40})$/) # and the old pattern copy&pasted
> +{
> + return $key;
> +}
> +
> +return undef if $noerr;
> +
> +die "unable to decode TFA secret\n";
> +}
> +
> +register_standard_option('pve-tfa-secret', {
> +description => "A TFA secret, base32 encoded or hexadecimal.",
> +type => 'string', format => 'pve-tfa-secret',
> +});
> +

Why do you register a standard option but then do not use it? 
But actually, I like using the format more, IMO this is essential to
PVE/MG, and thus should not be a standard-option at all, so I'd rather
just remove the registering here, and keep the access-control API patch
as is.
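
For illustration, the difference for a hypothetical 'key' property (a sketch
only, not taken from the patch):

    # using the format directly in a schema property:
    key => {
	description => "A TFA secret, base32 encoded or hexadecimal.",
	type => 'string', format => 'pve-tfa-secret', optional => 1,
    },

    # versus pulling in the registered standard option:
    key => get_standard_option('pve-tfa-secret'),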

>  sub check_format {
>  my ($format, $value, $path) = @_;
>  
> 


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH qemu-server] hugepages: fix memory size checking

2019-10-28 Thread Stefan Reiter
The codepath for "any" hugepages did not check if the memory size was even,
leading to the code below trying to allocate half a hugepage (e.g. a VM
with 2049 MiB of RAM would lead to 1024.5 2 MiB hugepages).

Also improve the error message for systems with only 1GB hugepages enabled.
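
A worked example of the failure mode (a sketch, numbers in MiB):

    my $size = 2049;                      # VM memory
    my $hugepage_size = 2;                # from hugepages-2048kB
    my $pages = $size / $hugepage_size;   # 1024.5 -- not an integer!
    # the allocation code would then try to reserve half a hugepage,
    # hence the new "memory size must be even" check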

Signed-off-by: Stefan Reiter 
---
 PVE/QemuServer/Memory.pm | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/PVE/QemuServer/Memory.pm b/PVE/QemuServer/Memory.pm
index b4c9129..ab7d2c3 100644
--- a/PVE/QemuServer/Memory.pm
+++ b/PVE/QemuServer/Memory.pm
@@ -404,10 +404,11 @@ sub hugepages_size {
if ($gb_exists && ($size % 1024 == 0)) {
return 1024;
} elsif (-d "/sys/kernel/mm/hugepages/hugepages-2048kB") {
+   die "memory size must be even to use hugepages\n" if $size % 2 != 0;
return 2;
}
 
-   die "your system doesn't support hugepages for memory size $size\n"
+   die "your system doesn't support hugepages for memory size $size (1GB 
hugepages would be supported)\n"
if $gb_exists;
 
die "your system doesn't support hugepages\n";
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH common 1/2] JSONSchema: add pve-tfa-secret option and format

2019-10-28 Thread Wolfgang Bumiller
On Mon, Oct 28, 2019 at 02:26:28PM +0100, Thomas Lamprecht wrote:
> On 10/28/19 12:20 PM, Wolfgang Bumiller wrote:
> > Signed-off-by: Wolfgang Bumiller 
> > ---
> >  src/PVE/JSONSchema.pm | 24 
> >  1 file changed, 24 insertions(+)
> > 
> > diff --git a/src/PVE/JSONSchema.pm b/src/PVE/JSONSchema.pm
> > index db38d44..3712872 100644
> > --- a/src/PVE/JSONSchema.pm
> > +++ b/src/PVE/JSONSchema.pm
> > @@ -530,6 +530,30 @@ PVE::JSONSchema::register_standard_option('pve-startup-order', {
> >  typetext => '[[order=]\d+] [,up=\d+] [,down=\d+] ',
> >  });
> >  
> > +register_format('pve-tfa-secret', \&pve_verify_tfa_secret);
> > +sub pve_verify_tfa_secret {
> > +my ($key, $noerr) = @_;
> > +
> > +# The old format used 16 base32 chars or 40 hex digits. Since they have a common subset it's
> > +# hard to distinguish them without the our previous length constraints, so add a 'v2' of the
> > +# format to support arbitrary lengths properly:
> > +if ($key =~ /^v2-0x[0-9a-fA-F]{16,128}$/ || # hex
> > +$key =~ /^v2-[A-Z2-7=]{16,128}$/ || # base32
> > +$key =~ /^(?:[A-Z2-7=]{16}|[A-Fa-f0-9]{40})$/) # and the old pattern copy&pasted
> > +{
> > +   return $key;
> > +}
> > +
> > +return undef if $noerr;
> > +
> > +die "unable to decode TFA secret\n";
> > +}
> > +
> > +register_standard_option('pve-tfa-secret', {
> > +description => "A TFA secret, base32 encoded or hexadecimal.",
> > +type => 'string', format => 'pve-tfa-secret',
> > +});
> > +
> 
> Why do you register a standard option but then do not use it? 
> But actually, I like using the format more, IMO this is essential to
> PVE/MG, and thus should not be a standard-option at all, so I'd rather
> just remove the registering here, and keep the access-control API patch
> as is.

Right, I did the pve-common change first and thought it would make sense
as an option (as the description would be used in pve & pmg), but then
in pve-access-control thought the meaning of the actual API parameter
might change in the future with updates/changes to second factors and
then did not remove it afterwards, sorry.

Should I resend or will you fix it up when applying?

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH v2 manager] gui: add revert button for lxc pending changes

2019-10-28 Thread Oguz Bektas
adds the revert button for pending changes to the Resources, Options and DNS screens.

Signed-off-by: Oguz Bektas 
---

v1 -> v2:
* fix typo
* use 'datachanged' to track the status of the buttons; however, for some
reason it takes a while to refresh the status of the button. The same happens
on the qemu side, so this is likely a problem somewhere else.
It also doesn't refresh at all on the DNS.js file, but I can't figure out
why; some feedback would be appreciated.



 www/manager6/lxc/DNS.js   | 43 ++--
 www/manager6/lxc/Options.js   | 62 +--
 www/manager6/lxc/Resources.js | 31 +-
 3 files changed, 131 insertions(+), 5 deletions(-)

diff --git a/www/manager6/lxc/DNS.js b/www/manager6/lxc/DNS.js
index 89e2c694..d7f29209 100644
--- a/www/manager6/lxc/DNS.js
+++ b/www/manager6/lxc/DNS.js
@@ -213,6 +213,38 @@ Ext.define('PVE.lxc.DNS', {
handler: run_editor
});
 
+   var revert_btn = new Proxmox.button.Button({
+   text: gettext('Revert'),
+   disabled: true,
+   handler: function() {
+   var sm = me.getSelectionModel();
+   var rec = sm.getSelection()[0];
+   if (!rec) {
+   return;
+   }
+
+   var rowdef = me.rows[rec.data.key] || {};
+   var keys = rowdef.multiKey ||  [ rec.data.key ];
+   var revert = keys.join(',');
+
+   Proxmox.Utils.API2Request({
+   url: '/api2/extjs/' + baseurl,
+   waitMsgTarget: me,
+   method: 'PUT',
+   params: {
+   'revert': revert
+   },
+   callback: function() {
+   me.reload();
+   },
+   failure: function (response, opts) {
+   Ext.Msg.alert('Error',response.htmlStatus);
+   }
+   });
+   }
+   });
+
+
var set_button_status = function() {
var sm = me.getSelectionModel();
var rec = sm.getSelection()[0];
@@ -221,8 +253,11 @@ Ext.define('PVE.lxc.DNS', {
edit_btn.disable();
return;
}
-   var rowdef = rows[rec.data.key];
+   var key = rec.data.key;
+   var rowdef = rows[key];
+   var pending = rec.data['delete'] || me.hasPendingChanges(key);
edit_btn.setDisabled(!rowdef.editor);
+   revert_btn.setDisabled(!pending);
};
 
Ext.apply(me, {
@@ -230,7 +265,7 @@ Ext.define('PVE.lxc.DNS', {
selModel: sm,
cwidth1: 150,
run_editor: run_editor,
-   tbar: [ edit_btn ],
+   tbar: [ edit_btn, revert_btn ],
rows: rows,
editorConfig: {
url: "/api2/extjs/" + baseurl
@@ -243,5 +278,9 @@ Ext.define('PVE.lxc.DNS', {
});
 
me.callParent();
+
+   me.mon(me.rstore, 'datachanged', function() {
+   set_button_status();
+   });
 }
 });
diff --git a/www/manager6/lxc/Options.js b/www/manager6/lxc/Options.js
index 5e1e0222..f1a82902 100644
--- a/www/manager6/lxc/Options.js
+++ b/www/manager6/lxc/Options.js
@@ -161,17 +161,67 @@ Ext.define('PVE.lxc.Options', {
handler: function() { me.run_editor(); }
});
 
+   var revert_btn = new Proxmox.button.Button({
+   text: gettext('Revert'),
+   disabled: true,
+   handler: function() {
+   var sm = me.getSelectionModel();
+   var rec = sm.getSelection()[0];
+   if (!rec) {
+   return;
+   }
+
+   var rowdef = me.rows[rec.data.key] || {};
+   var keys = rowdef.multiKey ||  [ rec.data.key ];
+   var revert = keys.join(',');
+
+   Proxmox.Utils.API2Request({
+   url: '/api2/extjs/' + baseurl,
+   waitMsgTarget: me,
+   method: 'PUT',
+   params: {
+   'revert': revert
+   },
+   callback: function() {
+   me.reload();
+   },
+   failure: function (response, opts) {
+   Ext.Msg.alert('Error',response.htmlStatus);
+   }
+   });
+   }
+   });
+
+   var set_button_status = function() {
+   var sm = me.getSelectionModel();
+   var rec = sm.getSelection()[0];
+
+   if (!rec) {
+   edit_btn.disable();
+   return;
+   }
+
+   var key = rec.data.key;
+   var pending = rec.data['delete'] || me.hasPendingChanges(key);
+   var rowdef = rows[key];
+
+   edit_btn.setDisabled(!rowdef.editor);
+   revert_btn.setDisabled(!pending);
+   };
+
+
Ext.apply(me, {
url: "/api

[pve-devel] applied: [PATCH common 1/2] JSONSchema: add pve-tfa-secret option and format

2019-10-28 Thread Thomas Lamprecht
On 10/28/19 3:13 PM, Wolfgang Bumiller wrote:
> On Mon, Oct 28, 2019 at 02:26:28PM +0100, Thomas Lamprecht wrote:
>> On 10/28/19 12:20 PM, Wolfgang Bumiller wrote:
>>> +register_standard_option('pve-tfa-secret', {
>>> +description => "A TFA secret, base32 encoded or hexadecimal.",
>>> +type => 'string', format => 'pve-tfa-secret',
>>> +});
>>> +
>>
>> Why do you register a standard option but then do not use it? 
>> But actually, I like using the format more, IMO this is essential to
>> PVE/MG, and thus should not be a standard-option at all, so I'd rather
>> just remove the registering here, and keep the access-control API patch
>> as is.
> 
> Right, I did the pve-common change first and thought it would make sense
> as an option (as the description would be used in pve & pmg), but then
> in pve-access-control thought the meaning of the actual API parameter
> might change in the future with updates/changes to second factors and
> then did not remove it afterwards, sorry.
> 
> Should I resend or will you fix it up when applying?
> 

I've fixed this patch up and applied it, thanks!

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] applied: [PATCH common 2/2] OTP: support v2 secret format

2019-10-28 Thread Thomas Lamprecht
On 10/28/19 12:20 PM, Wolfgang Bumiller wrote:
> Signed-off-by: Wolfgang Bumiller 
> ---
>  src/PVE/OTP.pm | 8 +++-
>  1 file changed, 7 insertions(+), 1 deletion(-)
> 

applied, thanks!

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] applied: [PATCH container] iterate pending config changes sorted

2019-10-28 Thread Thomas Lamprecht
On 10/23/19 6:48 PM, Oguz Bektas wrote:
> since we sort them while going through the delete hash, we can do it for
> the other loops for consistency.
> 
> Signed-off-by: Oguz Bektas 
> ---
>  src/PVE/LXC/Config.pm | 8 
>  1 file changed, 4 insertions(+), 4 deletions(-)
> 

applied, thanks!

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] applied: [PATCH v2 container 02/11] Move LXC-specific architecture translation here

2019-10-28 Thread Thomas Lamprecht
On 10/28/19 12:59 PM, Stefan Reiter wrote:
> This is the only place we need to do this translation; moving it here
> allows reuse of the PVE::Tools function.
> 
> Signed-off-by: Stefan Reiter 
> ---
>  src/PVE/LXC/Setup.pm | 9 +
>  1 file changed, 9 insertions(+)
> 
> diff --git a/src/PVE/LXC/Setup.pm b/src/PVE/LXC/Setup.pm
> index 845aced..ca6fc4f 100644
> --- a/src/PVE/LXC/Setup.pm
> +++ b/src/PVE/LXC/Setup.pm
> @@ -293,6 +293,15 @@ sub pre_start_hook {
>  
>  my $host_arch = PVE::Tools::get_host_arch();
>  
> +# containers use different architecture names
> +if ($host_arch eq 'x86_64') {
> + $host_arch = 'amd64';
> +} elsif ($host_arch eq 'aarch64') {
> + $host_arch = 'arm64';
> +} else {
> + die "unsupported host architecture '$host_arch'\n";
> +}
> +
>  my $container_arch = $self->{conf}->{arch};
>  
>  $container_arch = 'amd64' if $container_arch eq 'i386'; # always use 64 
> bit version
> 

applied, at least this one does not need the common change, else we'd
have a full breaks-depends again ^^ A single breaks is way easier
to deal with :)

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH v2 common 01/11] Make get_host_arch return raw uname entry

2019-10-28 Thread Thomas Lamprecht
On 10/28/19 12:59 PM, Stefan Reiter wrote:
> The current version had only one user in LXC, so move the LXC-specific
> code there to reuse this in QemuServer.
> 
> Also cache, since the host's architecture can't change during runtime.
> 
> Signed-off-by: Stefan Reiter 
> ---
>  src/PVE/Tools.pm | 17 +
>  1 file changed, 5 insertions(+), 12 deletions(-)
> 
> diff --git a/src/PVE/Tools.pm b/src/PVE/Tools.pm
> index 550da09..c9d37ec 100644
> --- a/src/PVE/Tools.pm
> +++ b/src/PVE/Tools.pm
> @@ -47,6 +47,7 @@ safe_print
>  trim
>  extract_param
>  file_copy
> +get_host_arch
>  O_PATH
>  O_TMPFILE
>  );
> @@ -1630,18 +1631,10 @@ sub readline_nointr {
>  return $line;
>  }
>  
> -sub get_host_arch {
> -
> -my @uname = POSIX::uname();
> -my $machine = $uname[4];
> -
> -if ($machine eq 'x86_64') {
> - return 'amd64';
> -} elsif ($machine eq 'aarch64') {
> - return 'arm64';
> -} else {
> - die "unsupported host architecture '$machine'\n";
> -}
> +my $host_arch;
> +sub get_host_arch() {

was the perl prototype wanted or was it by mistake? ^^

For your and/or others' information: empty prototypes suggest to perl that it
inline that method, as it's seen as a constant method [0].
But here, the explicit return renders that behavior void.
Point is, perl prototypes are confusing for most people, and are mostly
useful for making a sub usable like a built-in method.

[0]: https://perldoc.perl.org/perlsub.html#Constant-Functions
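
A quick illustration of what [0] describes (a sketch):

    sub PI() { 3.14159 }    # empty prototype + constant body: may be inlined
    print PI * 2;           # perl can substitute 3.14159 at compile time

    # get_host_arch() keeps state and has an explicit return, so despite the
    # empty prototype it is not treated as an inlinable constant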

> +$host_arch = (POSIX::uname())[4] if !$host_arch;
> +return $host_arch;
>  }
>  
>  # Devices are: [ (12 bits minor) (12 bits major) (8 bits minor) ]
> 


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] applied: [PATCH v2 common 01/11] Make get_host_arch return raw uname entry

2019-10-28 Thread Thomas Lamprecht
On 10/28/19 12:59 PM, Stefan Reiter wrote:
> The current version had only one user in LXC, so move the LXC-specific
> code there to reuse this in QemuServer.
> 
> Also cache, since the host's architecture can't change during runtime.
> 
> Signed-off-by: Stefan Reiter 
> ---
>  src/PVE/Tools.pm | 17 +
>  1 file changed, 5 insertions(+), 12 deletions(-)
> 


applied, but dropped the prototype for now. Even if it /could/ make sense
(or at least not hurt), it feels like it was added by mistake, and as your
commit message nowhere points out why this was done in this patch, I'd
rather be on the safe side (and need to bump common ;))

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH v2 common 01/11] Make get_host_arch return raw uname entry

2019-10-28 Thread Thomas Lamprecht
On 10/28/19 12:59 PM, Stefan Reiter wrote:
> The current version had only one user in LXC, so move the LXC-specific
> code there to reuse this in QemuServer.
> 
> Also cache, since the host's architecture can't change during runtime.
> 
> Signed-off-by: Stefan Reiter 
> ---
>  src/PVE/Tools.pm | 17 +
>  1 file changed, 5 insertions(+), 12 deletions(-)
> 
> diff --git a/src/PVE/Tools.pm b/src/PVE/Tools.pm
> index 550da09..c9d37ec 100644
> --- a/src/PVE/Tools.pm
> +++ b/src/PVE/Tools.pm
> @@ -47,6 +47,7 @@ safe_print
>  trim
>  extract_param
>  file_copy
> +get_host_arch
>  O_PATH
>  O_TMPFILE
>  );

Oh, and you also never mention the export anywhere. If it was only used
like once previously, I'd guess that the usage of this does not explode
in the near future ^^ I'll let this be, but I'd like to not add everything
to the exporter; especially low-use methods should only be added with good
reasons (documented, e.g., in the commit message ;) )

> @@ -1630,18 +1631,10 @@ sub readline_nointr {
>  return $line;
>  }
>  
> -sub get_host_arch {
> -
> -my @uname = POSIX::uname();
> -my $machine = $uname[4];
> -
> -if ($machine eq 'x86_64') {
> - return 'amd64';
> -} elsif ($machine eq 'aarch64') {
> - return 'arm64';
> -} else {
> - die "unsupported host architecture '$machine'\n";
> -}
> +my $host_arch;
> +sub get_host_arch() {
> +$host_arch = (POSIX::uname())[4] if !$host_arch;
> +return $host_arch;
>  }
>  
>  # Devices are: [ (12 bits minor) (12 bits major) (8 bits minor) ]
> 


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel