Re: [pve-devel] [PATCH v2 ha-manager 07/12] Add stop command to the API
On 9/30/19 5:13 PM, Thomas Lamprecht wrote:
> On 9/30/19 9:22 AM, Fabian Ebner wrote:
>> Signed-off-by: Fabian Ebner
>> ---
>>  src/PVE/API2/HA/Resources.pm | 37
>>  src/PVE/CLI/ha_manager.pm    |  2 ++
>>  2 files changed, 39 insertions(+)
>
> besides the nit^Wfact that the commit subject should read "Add stop
> command to the API and CLI", not sure if this is required? Couldn't we
> just have a common helper somewhere and then the qm stop or pct stop
> just calls that (indirectly?). Just need to rethink concepts for this
> one, code itself looks OK.

Isn't the separation between the Qemu/LXC code and HA manager better
like this? It seems like the calls from inside Qemu/LXC code that
handle services use the 'ha-manager' command, and so I thought it would
be natural to use it here as well. Where would a good place to put the
helper be?

diff --git a/src/PVE/API2/HA/Resources.pm b/src/PVE/API2/HA/Resources.pm
index 2b62ee8..ecc5f0c 100644
--- a/src/PVE/API2/HA/Resources.pm
+++ b/src/PVE/API2/HA/Resources.pm
@@ -353,4 +353,41 @@ __PACKAGE__->register_method ({
     return undef;
 }});
 
+__PACKAGE__->register_method ({
+    name => 'stop',
+    protected => 1,
+    path => '{sid}/stop',
+    method => 'POST',
+    description => "Request the service to be stopped.",
+    permissions => {
+        check => ['perm', '/', [ 'Sys.Console' ]],
+    },
+    parameters => {
+        additionalProperties => 0,
+        properties => {
+            sid => get_standard_option('pve-ha-resource-or-vm-id',
+                { completion => \&PVE::HA::Tools::complete_sid }),
+            timeout => {
+                description => "Timeout in seconds. If set to 0 a hard stop will be performed.",
+                type => 'integer',
+                minimum => 0,
+                optional => 1,
+            },
+        },
+    },
+    returns => { type => 'null' },
+    code => sub {
+        my ($param) = @_;
+
+        my ($sid, $type, $name) = PVE::HA::Config::parse_sid(extract_param($param, 'sid'));
+
+        PVE::HA::Config::service_is_ha_managed($sid);
+
+        check_service_state($sid);
+
+        PVE::HA::Config::queue_crm_commands("stop $sid $param->{timeout}");
+
+        return undef;
+    }});
+
 1;
diff --git a/src/PVE/CLI/ha_manager.pm b/src/PVE/CLI/ha_manager.pm
index 5ce4c30..c9d4e7f 100644
--- a/src/PVE/CLI/ha_manager.pm
+++ b/src/PVE/CLI/ha_manager.pm
@@ -114,6 +114,8 @@ our $cmddef = {
     migrate => [ "PVE::API2::HA::Resources", 'migrate', ['sid', 'node'] ],
     relocate => [ "PVE::API2::HA::Resources", 'relocate', ['sid', 'node'] ],
 
+    stop => [ "PVE::API2::HA::Resources", 'stop', ['sid', 'timeout'] ],
+
 };
 
 1;
_______________________________________________
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
[pve-devel] [PATCH container 3/3] use snapshot_tree guest helper for pct listsnapshot
adds feature parity between qm/pct 'listsnapshot' w.r.t. showing
snapshot tree ordered by date.

Signed-off-by: Oguz Bektas
---
 src/PVE/CLI/pct.pm | 11 +--
 1 file changed, 1 insertion(+), 10 deletions(-)

diff --git a/src/PVE/CLI/pct.pm b/src/PVE/CLI/pct.pm
index 35ad72f..0dd694b 100755
--- a/src/PVE/CLI/pct.pm
+++ b/src/PVE/CLI/pct.pm
@@ -861,16 +861,7 @@ our $cmddef = {
 
     delsnapshot => [ "PVE::API2::LXC::Snapshot", 'delsnapshot', ['vmid', 'snapname'], { node => $nodename } , $upid_exit ],
 
-    listsnapshot => [ "PVE::API2::LXC::Snapshot", 'list', ['vmid'], { node => $nodename },
-        sub {
-            my $res = shift;
-            foreach my $e (@$res) {
-                my $headline = $e->{description} || 'no-description';
-                $headline =~ s/\n.*//sg;
-                my $parent = $e->{parent} // 'no-parent';
-                printf("%-20s %-20s %s\n", $e->{name}, $parent, $headline);
-            }
-        }],
+    listsnapshot => [ "PVE::API2::LXC::Snapshot", 'list', ['vmid'], { node => $nodename },\&PVE::GuestHelpers::snapshot_tree ],
 
     rollback => [ "PVE::API2::LXC::Snapshot", 'rollback', ['vmid', 'snapname'], { node => $nodename } , $upid_exit ],
-- 
2.20.1
[pve-devel] [PATCH qemu-server 2/3] use snapshot_tree guest helper for qm listsnapshot
Signed-off-by: Oguz Bektas
---
 PVE/CLI/qm.pm | 53 +--
 1 file changed, 1 insertion(+), 52 deletions(-)

diff --git a/PVE/CLI/qm.pm b/PVE/CLI/qm.pm
index 17935d0..40571ce 100755
--- a/PVE/CLI/qm.pm
+++ b/PVE/CLI/qm.pm
@@ -933,58 +933,7 @@ our $cmddef = {
 
     delsnapshot => [ "PVE::API2::Qemu", 'delsnapshot', ['vmid', 'snapname'], { node => $nodename } , $upid_exit ],
 
-    listsnapshot => [ "PVE::API2::Qemu", 'snapshot_list', ['vmid'], { node => $nodename },
-        sub {
-            my $res = shift;
-
-            my $snapshots = { map { $_->{name} => $_ } @$res };
-
-            my @roots;
-            foreach my $e (@$res) {
-                my $parent;
-                if (($parent = $e->{parent}) && defined $snapshots->{$parent}) {
-                    push @{$snapshots->{$parent}->{children}}, $e->{name};
-                } else {
-                    push @roots, $e->{name};
-                }
-            }
-
-            # sort the elements by snaptime - with "current" (no snaptime) highest
-            my $snaptimesort = sub {
-                return +1 if !defined $snapshots->{$a}->{snaptime};
-                return -1 if !defined $snapshots->{$b}->{snaptime};
-                return $snapshots->{$a}->{snaptime} <=> $snapshots->{$b}->{snaptime};
-            };
-
-            # recursion function for displaying the tree
-            my $snapshottree;
-            $snapshottree = sub {
-                my ($prefix, $root, $snapshots) = @_;
-                my $e = $snapshots->{$root};
-
-                my $description = $e->{description} || 'no-description';
-                ($description) = $description =~ m/(.*)$/m;
-
-                my $timestring = "";
-                if (defined $e->{snaptime}) {
-                    $timestring = strftime("%F %H:%M:%S", localtime($e->{snaptime}));
-                }
-
-                my $len = 30 - length($prefix); # for aligning the description
-                printf("%s %-${len}s %-23s %s\n", $prefix, $root, $timestring, $description);
-
-                if ($e->{children}) {
-                    $prefix = "    $prefix";
-                    foreach my $child (sort $snaptimesort @{$e->{children}}) {
-                        $snapshottree->($prefix, $child, $snapshots);
-                    }
-                }
-            };
-
-            foreach my $root (sort $snaptimesort @roots) {
-                $snapshottree->('`->', $root, $snapshots);
-            }
-        }],
+    listsnapshot => [ "PVE::API2::Qemu", 'snapshot_list', ['vmid'], { node => $nodename }, \&PVE::GuestHelpers::snapshot_tree],
 
     rollback => [ "PVE::API2::Qemu", 'rollback', ['vmid', 'snapname'], { node => $nodename } , $upid_exit ],
-- 
2.20.1
[pve-devel] [PATCH guest-common 1/3] reformat code for snapshot tree into GuestHelpers
qm/pct listsnapshot lack feature parity w.r.t. showing snapshots in a
tree ordered by the date. by moving this code into GuestHelpers, it can
be shared.

Signed-off-by: Oguz Bektas
---
 PVE/GuestHelpers.pm | 54 +
 1 file changed, 54 insertions(+)

diff --git a/PVE/GuestHelpers.pm b/PVE/GuestHelpers.pm
index ebe2781..64e26c9 100644
--- a/PVE/GuestHelpers.pm
+++ b/PVE/GuestHelpers.pm
@@ -6,6 +6,8 @@ use warnings;
 use PVE::Tools;
 use PVE::Storage;
 
+use POSIX qw(strftime);
+
 # We use a separate lock to block migration while a replication job
 # is running.
 
@@ -60,4 +62,56 @@ sub exec_hookscript {
     }
 }
 
+sub snapshot_tree {
+    my ($res) = @_;
+
+    my $snapshots = { map { $_->{name} => $_ } @$res };
+
+    my @roots;
+    foreach my $e (@$res) {
+        my $parent;
+        if (($parent = $e->{parent}) && defined $snapshots->{$parent}) {
+            push @{$snapshots->{$parent}->{children}}, $e->{name};
+        } else {
+            push @roots, $e->{name};
+        }
+    }
+
+    # sort the elements by snaptime - with "current" (no snaptime) highest
+    my $snaptimesort = sub {
+        return +1 if !defined $snapshots->{$a}->{snaptime};
+        return -1 if !defined $snapshots->{$b}->{snaptime};
+        return $snapshots->{$a}->{snaptime} <=> $snapshots->{$b}->{snaptime};
+    };
+
+    # recursion function for displaying the tree
+    my $snapshottree;
+    $snapshottree = sub {
+        my ($prefix, $root, $snapshots) = @_;
+        my $e = $snapshots->{$root};
+
+        my $description = $e->{description} || 'no-description';
+        ($description) = $description =~ m/(.*)$/m;
+
+        my $timestring = "";
+        if (defined $e->{snaptime}) {
+            $timestring = strftime("%F %H:%M:%S", localtime($e->{snaptime}));
+        }
+
+        my $len = 30 - length($prefix); # for aligning the description
+        printf("%s %-${len}s %-23s %s\n", $prefix, $root, $timestring, $description);
+
+        if ($e->{children}) {
+            $prefix = "    $prefix";
+            foreach my $child (sort $snaptimesort @{$e->{children}}) {
+                $snapshottree->($prefix, $child, $snapshots);
+            }
+        }
+    };
+
+    foreach my $root (sort $snaptimesort @roots) {
+        $snapshottree->('`->', $root, $snapshots);
+    }
+}
+
 1;
-- 
2.20.1
Re: [pve-devel] [PATCH guest-common 1/3] reformat code for snapshot tree into GuestHelpers
correction: s/reformat/refactor

On Tue, Oct 01, 2019 at 09:32:45AM +0200, Oguz Bektas wrote:
> qm/pct listsnapshot lack feature parity w.r.t. showing snapshots in a
> tree ordered by the date. by moving this code into GuestHelpers, it can
> be shared.
>
> Signed-off-by: Oguz Bektas
> ---
>  PVE/GuestHelpers.pm | 54 +
>  1 file changed, 54 insertions(+)
Re: [pve-devel] [PATCH pve-zsync] Allow detecting a syncing instance of a job
On September 30, 2019 12:55 pm, Fabian Ebner wrote:
> Before, the check whether a syncing instance of the same job is
> already present was inside the locked section. This caused cron to
> continuously spawn new instances of pve-zsync on syncs (or rather
> groups of syncs) taking longer than 15 minutes, see [0] in the forum.
> This patch introduces a new locked section for checking the current
> status and a new 'waiting' status. The 'waiting' status is needed to
> mark jobs which are currently waiting for the lock for syncing. So if
> job A is syncing and job B is waiting for the lock then all new
> instances of job B will see that one instance is already scheduled to
> sync.
>
> [0]: https://forum.proxmox.com/threads/pve-zsync-bug-spawns-endless-cron-processes.58087/
>
> Signed-off-by: Fabian Ebner
> ---
>  pve-zsync | 25 -
>  1 file changed, 20 insertions(+), 5 deletions(-)
>
> diff --git a/pve-zsync b/pve-zsync
> index 425ffa2..90c1bb3 100755
> --- a/pve-zsync
> +++ b/pve-zsync
> @@ -19,6 +19,7 @@ my $PVE_DIR = "/etc/pve/local";
>  my $QEMU_CONF = "${PVE_DIR}/qemu-server";
>  my $LXC_CONF = "${PVE_DIR}/lxc";
>  my $LOCKFILE = "$CONFIG_PATH/${PROGNAME}.lock";
> +my $LOCKFILE_STATUS_CHECK = "$CONFIG_PATH/${PROGNAME}_status_check.lock";
>  my $PROG_PATH = "$PATH/${PROGNAME}";
>  my $INTERVAL = 15;
>  my $DEBUG;
> @@ -578,20 +579,34 @@ sub destroy_job {
>  sub sync {
>      my ($param) = @_;
>  
> +    my $lock_status_check_fh = IO::File->new("> $LOCKFILE_STATUS_CHECK");
> +    die "Can't open Lock File: $LOCKFILE_STATUS_CHECK $!\n" if !$lock_status_check_fh;
> +    lock($lock_status_check_fh);
> +
> +    my $job;
> +    eval {
> +        $job = get_job($param);
> +    };
> +
> +    if ($job && defined($job->{state}) && ($job->{state} eq "syncing" || $job->{state} eq "waiting")) {
> +        unlock($lock_status_check_fh);
> +        die "Job --source $param->{source} --name $param->{name} is already scheduled to sync\n";
> +    }
> +
> +    $job->{state} = "waiting";
> +    update_state($job);
> +    unlock($lock_status_check_fh);

I don't think we need a (new) lock here - it would only protect against
other processes entering sync(), but they could at worst do the same
change (again)? update_state() already has locking to prevent races
around the modification itself, and this new lock does not protect
against other read-modify-write cycles (like destroy_job, enable_job,
disable_job, but also init which just does write ;)).

if we introduce a new lock, it should be to guard all state r-m-w
cycles against races with each other (e.g., by implementing Thomas'
locked() suggestion, pulling the lock out of update_state there and
updating all call sites). the other locks (sync, update_cron) can also
re-use this locked mechanism, by passing in some lock
identifier/filename (you can also have wrappers around locked, e.g.
lock_state, lock_cron and lock_sync).

> +
>      my $lock_fh = IO::File->new("> $LOCKFILE");
>      die "Can't open Lock File: $LOCKFILE $!\n" if !$lock_fh;
>      lock($lock_fh);
>  
> +    # job might've changed while we waited for the lock, but we can be sure it's not syncing
>      my $date = get_date();
> -    my $job;
>      eval {
>          $job = get_job($param);
>      };
>  
> -    if ($job && defined($job->{state}) && $job->{state} eq "syncing") {
> -        die "Job --source $param->{source} --name $param->{name} is syncing at the moment";
> -    }
> -
>      my $dest = parse_target($param->{dest});
>      my $source = parse_target($param->{source});
>  
> -- 
> 2.20.1
Re: [pve-devel] [PATCH pve-zsync] Allow detecting a syncing instance of a job
On 10/1/19 10:29 AM, Fabian Grünbichler wrote:
> On September 30, 2019 12:55 pm, Fabian Ebner wrote:
>> Before, the check whether a syncing instance of the same job is
>> already present was inside the locked section. This caused cron to
>> continuously spawn new instances of pve-zsync on syncs (or rather
>> groups of syncs) taking longer than 15 minutes, see [0] in the forum.
>> This patch introduces a new locked section for checking the current
>> status and a new 'waiting' status. The 'waiting' status is needed to
>> mark jobs which are currently waiting for the lock for syncing. So if
>> job A is syncing and job B is waiting for the lock then all new
>> instances of job B will see that one instance is already scheduled to
>> sync.
>>
>> [0]: https://forum.proxmox.com/threads/pve-zsync-bug-spawns-endless-cron-processes.58087/
>>
>> +    my $lock_status_check_fh = IO::File->new("> $LOCKFILE_STATUS_CHECK");
>> +    die "Can't open Lock File: $LOCKFILE_STATUS_CHECK $!\n" if !$lock_status_check_fh;
>> +    lock($lock_status_check_fh);
>> +
>> +    my $job;
>> +    eval {
>> +        $job = get_job($param);
>> +    };
>> +
>> +    if ($job && defined($job->{state}) && ($job->{state} eq "syncing" || $job->{state} eq "waiting")) {
>> +        unlock($lock_status_check_fh);
>> +        die "Job --source $param->{source} --name $param->{name} is already scheduled to sync\n";
>> +    }
>> +
>> +    $job->{state} = "waiting";
>> +    update_state($job);
>> +    unlock($lock_status_check_fh);
>
> I don't think we need a (new) lock here - it would only protect
> against other processes entering sync(), but they could at worst do
> the same change (again)? update_state() already has locking to prevent
> races around the modification itself, and this new lock does not
> protect against other read-modify-write cycles (like destroy_job,
> enable_job, disable_job, but also init which just does write ;)).

Yes, I was thinking that no other instance should get the old job state
while another has not entered 'waiting' yet. Otherwise there might be an
edge case where two instances spawned at the same time would both enter
'waiting'. But that wouldn't be very problematic actually.

> if we introduce a new lock, it should be to guard all state r-m-w
> cycles against races with each other (e.g., by implementing Thomas'
> locked() suggestion, pulling the lock out of update_state there and
> updating all call sites). the other locks (sync, update_cron) can also
> re-use this locked mechanism, by passing in some lock
> identifier/filename (you can also have wrappers around locked, e.g.
> lock_state, lock_cron and lock_sync).

Ok, I'll try to implement a variant of this.
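The locked() mechanism suggested in this thread could be sketched roughly as follows. This is only an illustration of the idea, not the actual pve-zsync code; the helper name `locked` follows the suggestion above, while the lock-file path and the `lock_state` wrapper are hypothetical:

```perl
use strict;
use warnings;
use Fcntl qw(LOCK_EX LOCK_UN);
use IO::File;

# Run $code while holding an exclusive flock on $lockfile.
# The lock is released even if $code dies; the error is rethrown.
sub locked {
    my ($lockfile, $code) = @_;
    my $fh = IO::File->new("> $lockfile")
        or die "Can't open lock file: $lockfile $!\n";
    flock($fh, LOCK_EX) or die "Can't lock file: $lockfile $!\n";
    my $res = eval { $code->() };
    my $err = $@;
    flock($fh, LOCK_UN);
    die $err if $err;
    return $res;
}

# Per-resource wrappers, as suggested (paths illustrative):
sub lock_state { my ($code) = @_; locked("/var/lock/pve-zsync-state.lock", $code) }
```

With such a helper, every state read-modify-write cycle (sync, destroy_job, enable_job, disable_job, init) would wrap its whole cycle in `lock_state(sub { ... })`, serializing them against each other instead of only locking the write inside update_state().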
[pve-devel] [PATCH storage] Use bigger timeouts for zfs operations
Seems like 'zfs destroy' can take longer than 5 seconds, see [0].
I changed the timeout to 15 seconds and also changed the default
timeout to 10 instead of 5 seconds, to be on the safe side for other
commands like 'zfs create'.

[0]: https://forum.proxmox.com/threads/timeout-beim-l%C3%B6schen-des-entfernen-replikats-bei-entfernung-der-replikation.58467/

Signed-off-by: Fabian Ebner
---
 PVE/Storage/ZFSPoolPlugin.pm | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/PVE/Storage/ZFSPoolPlugin.pm b/PVE/Storage/ZFSPoolPlugin.pm
index f66b277..3ce06be 100644
--- a/PVE/Storage/ZFSPoolPlugin.pm
+++ b/PVE/Storage/ZFSPoolPlugin.pm
@@ -182,7 +182,7 @@ sub zfs_request {
     my $msg = '';
     my $output = sub { $msg .= "$_[0]\n" };
 
-    $timeout = PVE::RPCEnvironment->is_worker() ? 60*60 : 5 if !$timeout;
+    $timeout = PVE::RPCEnvironment->is_worker() ? 60*60 : 10 if !$timeout;
 
     run_command($cmd, errmsg => "zfs error", outfunc => $output, timeout => $timeout);
 
@@ -346,7 +346,7 @@ sub zfs_delete_zvol {
 
     for (my $i = 0; $i < 6; $i++) {
 
-        eval { $class->zfs_request($scfg, undef, 'destroy', '-r', "$scfg->{pool}/$zvol"); };
+        eval { $class->zfs_request($scfg, 15, 'destroy', '-r', "$scfg->{pool}/$zvol"); };
         if ($err = $@) {
             if ($err =~ m/^zfs error:(.*): dataset is busy.*/) {
                 sleep(1);
-- 
2.20.1
Re: [pve-devel] [PATCH storage] Use bigger timeouts for zfs operations
On October 1, 2019 12:17 pm, Fabian Ebner wrote:
> Seems like 'zfs destroy' can take longer than 5 seconds, see [0].
> I changed the timeout to 15 seconds and also changed the default
> timeout to 10 instead of 5 seconds, to be on the safe side for other
> commands like 'zfs create'.
>
> [0]: https://forum.proxmox.com/threads/timeout-beim-l%C3%B6schen-des-entfernen-replikats-bei-entfernung-der-replikation.58467/

NAK, we have a 30s timeout for synchronous API requests that call this,
and they might do more than 2 zfs_requests. we'd rather timeout and have
time to do error handling, than finish successfully sometimes, and die
without cleanup other times.

the real solution for this is to convert the remaining synchronous API
calls that trigger storage operations to async ones, and switch the GUI
over to use the new variants. we had previous discussions about this
issue already. to implement it really nicely, we'd need two things:

- tasks with structured return value (to implement async content
  listing and similar operations, and make conversion of sync to async
  API calls easier)
- light-weight / ephemeral tasks (not included in regular task
  lists/log, cleaned up after final poll / 1 day after finishing / ...)

in case of the mentioned report, please investigate whether this call in
'pvesr' context is wrongly treated as a sync API call (i.e., should
is_worker maybe return true for pvesr calls?)
[pve-devel] applied: [PATCH guest-common 1/3] reformat code for snapshot tree into GuestHelpers
On 10/1/19 9:32 AM, Oguz Bektas wrote:
> qm/pct listsnapshot lack feature parity w.r.t. showing snapshots in a
> tree ordered by the date. by moving this code into GuestHelpers, it can
> be shared.
>
> Signed-off-by: Oguz Bektas
> ---
>  PVE/GuestHelpers.pm | 54 +
>  1 file changed, 54 insertions(+)

Applied with a followup (see below). I'll refrain from applying the
qemu-server and container ones, as this requires a versioned dependency
bump, which your pending changes do also - I'd rather do only one ;)
Also a v2 to match the followup method name change would be required
too, but no pressure here.

----8<----
commit 5ee5f8873ea494339fe7251b0a3e46d99d931c81 (HEAD -> master)
Author: Thomas Lamprecht
Date:   Tue Oct 1 17:20:21 2019 +0200

    followup: rename to print_snapshot_tree, add comment and rename $res param
    
    This was taken from a CLI helper, there $res is a common parameter
    name which denotes that it's the res from the API call the CLI
    command bases on. But here that makes no sense and is not really
    telling about what the value(s) of $res could be. Further explain
    that with a comment.
    
    As this also prints unconditionally to STDOUT let's say so through
    the method name.
    
    Signed-off-by: Thomas Lamprecht

diff --git a/PVE/GuestHelpers.pm b/PVE/GuestHelpers.pm
index 64e26c9..35059e6 100644
--- a/PVE/GuestHelpers.pm
+++ b/PVE/GuestHelpers.pm
@@ -62,13 +62,15 @@ sub exec_hookscript {
     }
 }
 
-sub snapshot_tree {
-    my ($res) = @_;
+# takes a snapshot list (e.g., qm/pct snapshot_list API call result) and
+# prints it out in a nice tree sorted by age. Can cope with multiple roots
+sub print_snapshot_tree {
+    my ($snapshot_list) = @_;
 
-    my $snapshots = { map { $_->{name} => $_ } @$res };
+    my $snapshots = { map { $_->{name} => $_ } @$snapshot_list };
 
     my @roots;
-    foreach my $e (@$res) {
+    foreach my $e (@$snapshot_list) {
         my $parent;
         if (($parent = $e->{parent}) && defined $snapshots->{$parent}) {
             push @{$snapshots->{$parent}->{children}}, $e->{name};
--
Re: [pve-devel] [PATCH container 3/3] use snapshot_tree guest helper for pct listsnapshot
On 10/1/19 9:33 AM, Oguz Bektas wrote:
> adds feature parity between qm/pct 'listsnapshot' w.r.t. showing
> snapshot tree ordered by date.
>
> Signed-off-by: Oguz Bektas
> ---
>  src/PVE/CLI/pct.pm | 11 +--
>  1 file changed, 1 insertion(+), 10 deletions(-)
>
> diff --git a/src/PVE/CLI/pct.pm b/src/PVE/CLI/pct.pm
> index 35ad72f..0dd694b 100755
> --- a/src/PVE/CLI/pct.pm
> +++ b/src/PVE/CLI/pct.pm
> @@ -861,16 +861,7 @@ our $cmddef = {
>  
>      delsnapshot => [ "PVE::API2::LXC::Snapshot", 'delsnapshot', ['vmid', 'snapname'], { node => $nodename } , $upid_exit ],
>  
>      listsnapshot => [ "PVE::API2::LXC::Snapshot", 'list', ['vmid'], { node => $nodename },
> -        sub {
> -            my $res = shift;
> -            foreach my $e (@$res) {
> -                my $headline = $e->{description} || 'no-description';
> -                $headline =~ s/\n.*//sg;
> -                my $parent = $e->{parent} // 'no-parent';
> -                printf("%-20s %-20s %s\n", $e->{name}, $parent, $headline);
> -            }
> -        }],
> +    listsnapshot => [ "PVE::API2::LXC::Snapshot", 'list', ['vmid'], { node => $nodename },\&PVE::GuestHelpers::snapshot_tree ],

missing space after "," and due to my followup a rename to
print_snapshot_tree is required.

>  
>      rollback => [ "PVE::API2::LXC::Snapshot", 'rollback', ['vmid', 'snapname'], { node => $nodename } , $upid_exit ],
Re: [pve-devel] [PATCH v2 manager 01/12] Broadcast supported CPU flags
On 9/30/19 12:58 PM, Stefan Reiter wrote:
> pvestatd will check if the KVM version has changed using
> kvm_user_version (which automatically clears its cache if QEMU/KVM
> updates), and if it has, query supported CPU flags and broadcast them
> as a key-value pair to the cluster.
>
> Detects value regressions and handles them gracefully (i.e. if
> query_supported_cpu_flags fails, we try to retain the previously
> detected flags).
>
> Signed-off-by: Stefan Reiter
> ---
>
> v1 -> v2:
> * broadcast directly in update_supported_cpuflags
> * use kvm_user_version to determine when to re-query CPU flags
> * don't broadcast flags when unchanged or empty
>
>  PVE/Service/pvestatd.pm | 40 ++--
>  1 file changed, 38 insertions(+), 2 deletions(-)
>
> diff --git a/PVE/Service/pvestatd.pm b/PVE/Service/pvestatd.pm
> index bad1b73d..5ac97c98 100755
> --- a/PVE/Service/pvestatd.pm
> +++ b/PVE/Service/pvestatd.pm
> @@ -78,6 +78,40 @@ sub hup {
>      $restart_request = 1;
>  }
>  
> +my $last_kvm_version = '';
> +
> +sub update_supported_cpuflags {
> +    my $kvm_version = PVE::QemuServer::kvm_user_version();
> +
> +    # only update when QEMU/KVM version has changed, as that is the only reason
> +    # why flags could change without restarting pvestatd
> +    return if $last_kvm_version eq $kvm_version;
> +    $last_kvm_version = $kvm_version;
> +
> +    my $supported_cpuflags;
> +
> +    eval {
> +        $supported_cpuflags = join(" ", @{PVE::QemuServer::query_supported_cpu_flags()});
> +    };

could be written as:

my $supported_cpuflags = eval { join(" ", @{PVE::QemuServer::query_supported_cpu_flags()}) };

> +    warn $@ if $@;
> +
> +    # detect regression
> +    if (!$supported_cpuflags || $supported_cpuflags eq '') {

'' is already falsy in perl so this can be simplified to

if (!$supported_cpuflags) {

> +        my $prev_cpuflags = PVE::Cluster::get_node_kv('cpuflags', $nodename)->{$nodename};

does it really make sense to re-use the old ones?? Would it be better to
have a sane fallback in query_supported_cpu_flags?

> +        if ($prev_cpuflags && $prev_cpuflags ne '') {

same as above

> +            warn "CPU flag detection failed, using old values\n";
> +        } else {
> +            warn "CPU flag detection failed and no previous values found\n";
> +        }
> +
> +        # either flags are already in kv store, so no need to re-broadcast,
> +        # or we don't have valid flags, and no point broadcasting empty string
> +        return;

nit: the same written as an explicit `return undef;` could be a bit clearer

> +    }
> +
> +    PVE::Cluster::broadcast_node_kv('cpuflags', $supported_cpuflags);
> +}
> +
>  my $generate_rrd_string = sub {
>      my ($data) = @_;
>  
> @@ -97,7 +131,9 @@ sub update_node_status {
>  
>      my $cpuinfo = PVE::ProcFSTools::read_cpuinfo();
>  
> -    my $maxcpu = $cpuinfo->{cpus};
> +    my $maxcpu = $cpuinfo->{cpus};
> +
> +    update_supported_cpuflags();
>  
>      my $subinfo = PVE::INotify::read_file('subscription');
>      my $sublevel = $subinfo->{level} || '';
> @@ -110,7 +146,7 @@ sub update_node_status {
>          $netin += $netdev->{$dev}->{receive};
>          $netout += $netdev->{$dev}->{transmit};
>      }
> -    
> +
>      my $meminfo = PVE::ProcFSTools::read_meminfo();
>  
>      my $dinfo = df('/', 1); # output is bytes
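Folding the review suggestions (the eval-assignment idiom and the simplified falsy checks) into the patched helper would give roughly the following shape. This is only a sketch of the suggested cleanup, not the actually applied code; it assumes the pvestatd context of the patch ($nodename and the PVE modules in scope):

```perl
my $last_kvm_version = '';

sub update_supported_cpuflags {
    my $kvm_version = PVE::QemuServer::kvm_user_version();

    # flags can only change when QEMU/KVM itself changed
    return if $last_kvm_version eq $kvm_version;
    $last_kvm_version = $kvm_version;

    # eval in assignment context: $supported_cpuflags stays undef on error
    my $supported_cpuflags = eval {
        join(" ", @{PVE::QemuServer::query_supported_cpu_flags()})
    };
    warn $@ if $@;

    # '' is already falsy in perl, so one check covers undef and empty string
    if (!$supported_cpuflags) {
        warn "CPU flag detection failed\n";
        return undef; # explicit, as suggested in the review
    }

    PVE::Cluster::broadcast_node_kv('cpuflags', $supported_cpuflags);
}
```

Whether the previously broadcast flags should be re-used on failure (as the patch does via get_node_kv) or a fallback should live in query_supported_cpu_flags itself is exactly the open question raised above.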
Re: [pve-devel] [PATCH v2 qemu-server 02/12] Add QEMU CPU flag querying helpers
On 9/30/19 12:58 PM, Stefan Reiter wrote: > * query_understood_cpu_flags returns all flags that QEMU/KVM knows about > * query_supported_cpu_flags returns all flags that QEMU/KVM can use on > this particular host. > > To get supported flags, a temporary VM is started with QEMU, so we can > issue the "query-cpu-model-expansion" QMP command. This is how libvirt > queries supported flags for its "host-passthrough" CPU type. > query_supported_cpu_flags is thus rather slow and shouldn't be called > unnecessarily. > > Currently only supports x86_64, because QEMU-aarch64 doesn't provide the > necessary querying functions. > > Signed-off-by: Stefan Reiter > --- > > query_understood_cpu_flags is currently not used, but will be very useful for > the later UI part. I think it thematically fits best with this patch though. > > v1 -> v2: > * Change order of functions and add a single, more useful comment on usage > * Do s/\.|-/_/g directly in query_understood_cpu_flags, since the other format > is useless anyway > * Add "die" and FIXME for aarch64, since it doesn't support querying atm > (still, use get_host_arch()/get_basic_machine_info() for now, so once QEMU > supports it, we theoretically just have to remove the "die") > * Do QMP in extra eval, so we don't die before calling vm_stop > > > PVE/QemuServer.pm | 112 ++ > 1 file changed, 112 insertions(+) > > diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm > index 70ed910..20c1061 100644 > --- a/PVE/QemuServer.pm > +++ b/PVE/QemuServer.pm > @@ -3531,6 +3531,118 @@ sub get_command_for_arch($) { > return $cmd; > } > > +# To use query_supported_cpu_flags and query_understood_cpu_flags to get > flags > +# to use in a QEMU command line (-cpu element), first array_intersect the > result > +# of query_supported_ with query_understood_. 
This is necessary because: > +# > +# a) query_understood_ returns flags the host cannot use and > +# b) query_supported_ (rather the QMP call) doesn't actually return CPU > +#flags, but CPU settings - with most of them being flags. Those settings > +#(and some flags, curiously) cannot be specified as a "-cpu" argument > +#however. > +# > +# query_supported_ needs to start a temporary VM and is therefore rather > +# expensive. If you need the value returned from this, you can get it much > +# cheaper from pmxcfs using PVE::Cluster::get_node_kv('cpuflags'). mentioning that pvestatd broadcasts this in an intelligent way would be nice above, else one may wonder how/why that can/should be used. > +# > +sub query_supported_cpu_flags { > +my $flags = []; > + > +my $vmid = -1; maybe 's/vmid/fakevmid/' ? or tempvmid, virtvmid ? do underline that it's intended that this cannot exists. > +my $pidfile = pidfile_name($vmid); > + > +my ($arch, $default_machine) = get_basic_machine_info(); > + > +# FIXME: Once this is merged, the code below should work for ARM without > any > +# further modifications: > +# https://lists.nongnu.org/archive/html/qemu-devel/2019-06/msg04947.html > +die "QEMU/KVM cannot detect CPU flags on ARM (aarch64)\n" if > + $arch eq "aarch64"; > + > +PVE::QemuConfig->lock_config($vmid, sub { > + # We start a temporary (frozen) VM with vmid -1 to allow us to send a > QMP command > + my $rc = run_command([ > + get_command_for_arch($arch), > + '-machine', $default_machine, > + '-display', 'none', > + '-chardev', > "socket,id=qmp,path=/var/run/qemu-server/$vmid.qmp,server,nowait", > + '-mon', 'chardev=qmp,mode=control', > + '-pidfile', $pidfile, > + '-S', '-daemonize' > + ], noerr => 1, quiet => 0); > + return if $rc; > + > + eval { > + my $cmd_result = vm_mon_cmd_nocheck( > + $vmid, > + 'query-cpu-model-expansion', > + type => 'full', > + model => { name => 'host' } > + ); > + > + my $props = $cmd_result->{model}->{props}; > + if (%$props) { AFAICT, above is not 
really required, the "for my $prop (keys %$foo) { }" can handle an empty %$foo > + foreach my $prop (keys %$props) { > + push @$flags, $prop if "$props->{$prop}" eq '1'; > + } > + } > + }; > + my $err = $@; > + > + # force stop with 10 sec timeout and 'nocheck' > + # always stop, even if QMP failed > + vm_stop(undef, $vmid, 1, 1, 10, 0, 1); > + > + die $err if $err; > +}); > + > +# QEMU returns some flags multiple times, with '_', '.' or '-' as > separator > +# (e.g. lahf_lm and lahf-lm; sse4.2, sse4-2 and sse4_2; ...). > +# We only keep those with underscores, since they match the ones from > +# /proc/cpuinfo (they do the same thing, but we get rid of duplicates). > +@$flags = grep { > + my $replaced = (my $underscore = $_) =~ s/\.|-/_/g; > + !($replaced && grep { $_ eq $underscore } @$flags) > +
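The de-duplication grep at the end of this hunk is worth trying in isolation. A minimal, self-contained sketch of the exact same logic (the flag list here is made up for illustration):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# QEMU reports some flags multiple times with '_', '.' or '-' as separator
# (e.g. lahf_lm/lahf-lm, sse4.2/sse4-2/sse4_2); keep only the underscore
# variants, which match the names in /proc/cpuinfo.
my $flags = [qw(lahf_lm lahf-lm sse4.2 sse4-2 sse4_2 aes)];

@$flags = grep {
    # $replaced is the substitution count, $underscore the normalized name
    my $replaced = (my $underscore = $_) =~ s/\.|-/_/g;
    !($replaced && grep { $_ eq $underscore } @$flags)
} @$flags;

print "@$flags\n"; # lahf_lm sse4_2 aes
```

Note that the right-hand `grep` list is evaluated before the assignment back to `@$flags`, so the inner lookup always sees the original, unfiltered list.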
Re: [pve-devel] [PATCH v2 qemu-server 03/12] Add CPUConfig file and migrate some CPU helpers
On 9/30/19 12:58 PM, Stefan Reiter wrote: > The package will be used for custom CPU models as a SectionConfig, hence > the name. For now we simply move some CPU related helper functions and > declarations over from QemuServer to reduce clutter there. The single thing I'm not too sure about is how "qemu_machine_feature_enabled" fits into the CPUConfig here.. AFAICT, this rather looks like an attempt to avoid a cyclic module dependency? Maybe we could also do a Machine module which then includes stuff like: machine_type_is_q35 get_basic_machine_info But seems not really worth it... better ideas? > > Signed-off-by: Stefan Reiter > --- > PVE/QemuServer.pm | 242 +- > PVE/QemuServer/CPUConfig.pm | 251 > PVE/QemuServer/Makefile | 1 + > 3 files changed, 256 insertions(+), 238 deletions(-) > create mode 100644 PVE/QemuServer/CPUConfig.pm > > diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm > index 20c1061..a95006f 100644 > --- a/PVE/QemuServer.pm > +++ b/PVE/QemuServer.pm > @@ -33,6 +33,7 @@ use PVE::QemuConfig; > use PVE::QMPClient; > use PVE::RPCEnvironment; > use PVE::GuestHelpers; > +use PVE::QemuServer::CPUConfig qw(qemu_machine_feature_enabled > print_cpu_device get_cpu_options); > use PVE::QemuServer::PCI qw(print_pci_addr print_pcie_addr > print_pcie_root_port); > use PVE::QemuServer::Memory; > use PVE::QemuServer::USB qw(parse_usb_device); > @@ -116,105 +117,6 @@ mkdir $var_run_tmpdir; > my $lock_dir = "/var/lock/qemu-server"; > mkdir $lock_dir; > > -my $cpu_vendor_list = { > -# Intel CPUs > -486 => 'GenuineIntel', > -pentium => 'GenuineIntel', > -pentium2 => 'GenuineIntel', > -pentium3 => 'GenuineIntel', > -coreduo => 'GenuineIntel', > -core2duo => 'GenuineIntel', > -Conroe => 'GenuineIntel', > -Penryn => 'GenuineIntel', > -Nehalem => 'GenuineIntel', > -'Nehalem-IBRS' => 'GenuineIntel', > -Westmere => 'GenuineIntel', > -'Westmere-IBRS' => 'GenuineIntel', > -SandyBridge => 'GenuineIntel', > -'SandyBridge-IBRS' => 'GenuineIntel', > -IvyBridge => 'GenuineIntel', >
-'IvyBridge-IBRS' => 'GenuineIntel', > -Haswell => 'GenuineIntel', > -'Haswell-IBRS' => 'GenuineIntel', > -'Haswell-noTSX' => 'GenuineIntel', > -'Haswell-noTSX-IBRS' => 'GenuineIntel', > -Broadwell => 'GenuineIntel', > -'Broadwell-IBRS' => 'GenuineIntel', > -'Broadwell-noTSX' => 'GenuineIntel', > -'Broadwell-noTSX-IBRS' => 'GenuineIntel', > -'Skylake-Client' => 'GenuineIntel', > -'Skylake-Client-IBRS' => 'GenuineIntel', > -'Skylake-Server' => 'GenuineIntel', > -'Skylake-Server-IBRS' => 'GenuineIntel', > - > -# AMD CPUs > -athlon => 'AuthenticAMD', > -phenom => 'AuthenticAMD', > -Opteron_G1 => 'AuthenticAMD', > -Opteron_G2 => 'AuthenticAMD', > -Opteron_G3 => 'AuthenticAMD', > -Opteron_G4 => 'AuthenticAMD', > -Opteron_G5 => 'AuthenticAMD', > -EPYC => 'AuthenticAMD', > -'EPYC-IBPB' => 'AuthenticAMD', > - > -# generic types, use vendor from host node > -host => 'default', > -kvm32 => 'default', > -kvm64 => 'default', > -qemu32 => 'default', > -qemu64 => 'default', > -max => 'default', > -}; > - > -my @supported_cpu_flags = ( > -'pcid', > -'spec-ctrl', > -'ibpb', > -'ssbd', > -'virt-ssbd', > -'amd-ssbd', > -'amd-no-ssb', > -'pdpe1gb', > -'md-clear', > -'hv-tlbflush', > -'hv-evmcs', > -'aes' > -); > -my $cpu_flag = qr/[+-](@{[join('|', @supported_cpu_flags)]})/; > - > -my $cpu_fmt = { > -cputype => { > - description => "Emulated CPU type.", > - type => 'string', > - enum => [ sort { "\L$a" cmp "\L$b" } keys %$cpu_vendor_list ], > - default => 'kvm64', > - default_key => 1, > -}, > -hidden => { > - description => "Do not identify as a KVM virtual machine.", > - type => 'boolean', > - optional => 1, > - default => 0 > -}, > -'hv-vendor-id' => { > - type => 'string', > - pattern => qr/[a-zA-Z0-9]{1,12}/, > - format_description => 'vendor-id', > - description => 'The Hyper-V vendor ID. Some drivers or programs inside > Windows guests need a specific ID.', > - optional => 1, > -}, > -flags => { > - description => "List of additional CPU flags separated by ';'." > - . 
" Use '+FLAG' to enable, '-FLAG' to disable a flag." > - . " Currently supported flags: @{[join(', ', > @supported_cpu_flags)]}.", > - format_description => '+FLAG[;-FLAG...]', > - type => 'string', > - pattern => qr/$cpu_flag(;$cpu_flag)*/, > - optional => 1, > -}, > -}; > - > my $watchdog_fmt = { > model => { > default_key => 1, > @@ -603,7 +505,7 @@ EODESCR > optional => 1, > description => "Emulated CPU type.", > type => 'string', > - format => $cpu_fmt, > + format => $PVE:
Re: [pve-devel] [PATCH v2 qemu-server 10/12] cfg2cmd: fix tests for new CPU flag resolving
On 9/30/19 12:58 PM, Stefan Reiter wrote: > The new flag resolving outputs flags in sorted order for consistency, > adapt the test cases to not break. > > Only the order is changed, not which flags are present. I'd like to have this change paired with the commit which triggers it, to not break building/tests in between. > > Signed-off-by: Stefan Reiter > --- > test/cfg2cmd/i440fx-win10-hostpci.conf.cmd | 2 +- > test/cfg2cmd/minimal-defaults.conf.cmd | 2 +- > test/cfg2cmd/q35-linux-hostpci.conf.cmd| 2 +- > test/cfg2cmd/q35-win10-hostpci.conf.cmd| 2 +- > test/cfg2cmd/simple1.conf.cmd | 2 +- > test/cfg2cmd/spice-usb3.conf.cmd | 2 +- > 6 files changed, 6 insertions(+), 6 deletions(-) > > diff --git a/test/cfg2cmd/i440fx-win10-hostpci.conf.cmd > b/test/cfg2cmd/i440fx-win10-hostpci.conf.cmd > index ff5d635..2a9174d 100644 > --- a/test/cfg2cmd/i440fx-win10-hostpci.conf.cmd > +++ b/test/cfg2cmd/i440fx-win10-hostpci.conf.cmd > @@ -15,7 +15,7 @@ >-boot > 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' > \ >-vnc unix:/var/run/qemu-server/8006.vnc,password \ >-no-hpet \ > - -cpu > 'kvm64,+lahf_lm,+sep,+kvm_pv_unhalt,+kvm_pv_eoi,hv_spinlocks=0x1fff,hv_vapic,hv_time,hv_reset,hv_vpindex,hv_runtime,hv_relaxed,hv_synic,hv_stimer,hv_ipi,enforce' > \ > + -cpu > 'kvm64,enforce,hv_ipi,hv_relaxed,hv_reset,hv_runtime,hv_spinlocks=0x1fff,hv_stimer,hv_synic,hv_time,hv_vapic,hv_vpindex,+kvm_pv_eoi,+kvm_pv_unhalt,+lahf_lm,+sep' > \ >-m 512 \ >-object 'memory-backend-ram,id=ram-node0,size=256M' \ >-numa 'node,nodeid=0,cpus=0,memdev=ram-node0' \ > diff --git a/test/cfg2cmd/minimal-defaults.conf.cmd > b/test/cfg2cmd/minimal-defaults.conf.cmd > index 5abebe9..444050b 100644 > --- a/test/cfg2cmd/minimal-defaults.conf.cmd > +++ b/test/cfg2cmd/minimal-defaults.conf.cmd > @@ -12,7 +12,7 @@ >-nodefaults \ >-boot > 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' > \ >-vnc unix:/var/run/qemu-server/8006.vnc,password \ > - -cpu 
kvm64,+lahf_lm,+sep,+kvm_pv_unhalt,+kvm_pv_eoi,enforce \ > + -cpu kvm64,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,+lahf_lm,+sep \ >-m 512 \ >-device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' \ >-device 'pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f' \ > diff --git a/test/cfg2cmd/q35-linux-hostpci.conf.cmd > b/test/cfg2cmd/q35-linux-hostpci.conf.cmd > index 21fb18b..2072295 100644 > --- a/test/cfg2cmd/q35-linux-hostpci.conf.cmd > +++ b/test/cfg2cmd/q35-linux-hostpci.conf.cmd > @@ -14,7 +14,7 @@ >-nodefaults \ >-boot > 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' > \ >-vnc unix:/var/run/qemu-server/8006.vnc,password \ > - -cpu kvm64,+lahf_lm,+sep,+kvm_pv_unhalt,+kvm_pv_eoi,enforce \ > + -cpu kvm64,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,+lahf_lm,+sep \ >-m 512 \ >-object 'memory-backend-ram,id=ram-node0,size=256M' \ >-numa 'node,nodeid=0,cpus=0,memdev=ram-node0' \ > diff --git a/test/cfg2cmd/q35-win10-hostpci.conf.cmd > b/test/cfg2cmd/q35-win10-hostpci.conf.cmd > index f2c08ca..81e43d4 100644 > --- a/test/cfg2cmd/q35-win10-hostpci.conf.cmd > +++ b/test/cfg2cmd/q35-win10-hostpci.conf.cmd > @@ -15,7 +15,7 @@ >-boot > 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' > \ >-vnc unix:/var/run/qemu-server/8006.vnc,password \ >-no-hpet \ > - -cpu > 'kvm64,+lahf_lm,+sep,+kvm_pv_unhalt,+kvm_pv_eoi,hv_spinlocks=0x1fff,hv_vapic,hv_time,hv_reset,hv_vpindex,hv_runtime,hv_relaxed,hv_synic,hv_stimer,hv_ipi,enforce' > \ > + -cpu > 'kvm64,enforce,hv_ipi,hv_relaxed,hv_reset,hv_runtime,hv_spinlocks=0x1fff,hv_stimer,hv_synic,hv_time,hv_vapic,hv_vpindex,+kvm_pv_eoi,+kvm_pv_unhalt,+lahf_lm,+sep' > \ >-m 512 \ >-object 'memory-backend-ram,id=ram-node0,size=256M' \ >-numa 'node,nodeid=0,cpus=0,memdev=ram-node0' \ > diff --git a/test/cfg2cmd/simple1.conf.cmd b/test/cfg2cmd/simple1.conf.cmd > index b5c06cf..3485064 100644 > --- a/test/cfg2cmd/simple1.conf.cmd > +++ b/test/cfg2cmd/simple1.conf.cmd > @@ -12,7 
+12,7 @@ >-nodefaults \ >-boot > 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' > \ >-vnc unix:/var/run/qemu-server/8006.vnc,password \ > - -cpu kvm64,+lahf_lm,+sep,+kvm_pv_unhalt,+kvm_pv_eoi,enforce \ > + -cpu kvm64,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,+lahf_lm,+sep \ >-m 768 \ >-device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' \ >-device 'pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f' \ > diff --git a/test/cfg2cmd/spice-usb3.conf.cmd > b/test/cfg2cmd/spice-usb3.conf.cmd > index 680fa64..66b4e8d 100644 > --- a/test/cfg2cmd/spice-usb3.conf.cmd > +++ b/test/cfg2cmd/spice-usb3.conf.cmd > @@ -12,7 +12,7 @@ >-nodefaults \ >-boot > 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' > \ >-vnc unix:
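Judging from the updated expectations, the resolver appears to order flags by name while ignoring the leading '+'/'-' sign (plain options like 'enforce' and 'hv_*' sort in among the signed flags). A rough sketch of such a comparator, which is a guess at the implementation and may differ from the actual resolver code:

```perl
use strict;
use warnings;

# Hypothetical sign-agnostic sort reproducing the order seen in the
# updated test expectations above (not the actual resolver code):
my @flags  = qw(+sep hv_vapic +kvm_pv_eoi enforce hv_ipi);
my @sorted = sort { ($a =~ s/^[+-]//r) cmp ($b =~ s/^[+-]//r) } @flags;

print join(',', @sorted), "\n"; # enforce,hv_ipi,hv_vapic,+kvm_pv_eoi,+sep
```

The non-destructive `s///r` modifier (Perl 5.14+) keeps the sign on the emitted flag while comparing only the bare names.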
Re: [pve-devel] [PATCH v2 ha-manager 07/12] Add stop command to the API
On 10/1/19 9:28 AM, Fabian Ebner wrote: > On 9/30/19 5:13 PM, Thomas Lamprecht wrote: >> On 9/30/19 9:22 AM, Fabian Ebner wrote: >>> Signed-off-by: Fabian Ebner >>> --- >>> src/PVE/API2/HA/Resources.pm | 37 >>> src/PVE/CLI/ha_manager.pm | 2 ++ >>> 2 files changed, 39 insertions(+) >> besides the nit^Wfact that the commit subject should read >> "Add stop command to the API and CLI", not sure if this is required? >> >> Couldn't we just have a common helper somewhere and then the >> qm stop or pct stop just calls that (indirectly?). >> >> Just need to rethink concepts for this one, code itself looks OK. > > Isn't the separation between the Qemu/LXC code and HA manager better like > this? > > It seems like the calls from inside Qemu/LXC code that handle services use > the 'ha-manager' command and so I thought it would be natural to use it here > as well. Yeah, there you're right.. Just not 100% happy with the duality of doing things (request state stop via 'set ' or through this). Maybe you could keep it but not expose it through the API? At least initially. You could do a ha-manager CLI internal implementation like "status" is. Then we do not need to keep the API compatible and are a bit more flexible if we get a better idea. Maybe do it as a subcommand, so that I can call: # ha-manager crm-command stop migrate/relocate could also be moved there (and the old ones aliased to the new ones for backwards compat.) What do you think? > > Where would a good place to put the helper be?
> >>> diff --git a/src/PVE/API2/HA/Resources.pm b/src/PVE/API2/HA/Resources.pm >>> index 2b62ee8..ecc5f0c 100644 >>> --- a/src/PVE/API2/HA/Resources.pm >>> +++ b/src/PVE/API2/HA/Resources.pm >>> @@ -353,4 +353,41 @@ __PACKAGE__->register_method ({ >>> return undef; >>> }}); >>> +__PACKAGE__->register_method ({ >>> + name => 'stop', >>> + protected => 1, >>> + path => '{sid}/stop', >>> + method => 'POST', >>> + description => "Request the service to be stopped.", >>> + permissions => { >>> + check => ['perm', '/', [ 'Sys.Console' ]], >>> + }, >>> + parameters => { >>> + additionalProperties => 0, >>> + properties => { >>> + sid => get_standard_option('pve-ha-resource-or-vm-id', >>> + { completion => \&PVE::HA::Tools::complete_sid }), >>> + timeout => { >>> + description => "Timeout in seconds. If set to 0 a hard stop will >>> be performed.", >>> + type => 'integer', >>> + minimum => 0, >>> + optional => 1, >>> + }, >>> + }, >>> + }, >>> + returns => { type => 'null' }, >>> + code => sub { >>> + my ($param) = @_; >>> + >>> + my ($sid, $type, $name) = >>> PVE::HA::Config::parse_sid(extract_param($param, 'sid')); >>> + >>> + PVE::HA::Config::service_is_ha_managed($sid); >>> + >>> + check_service_state($sid); >>> + >>> + PVE::HA::Config::queue_crm_commands("stop $sid $param->{timeout}"); >>> + >>> + return undef; >>> + }}); >>> + >>> 1; >>> diff --git a/src/PVE/CLI/ha_manager.pm b/src/PVE/CLI/ha_manager.pm >>> index 5ce4c30..c9d4e7f 100644 >>> --- a/src/PVE/CLI/ha_manager.pm >>> +++ b/src/PVE/CLI/ha_manager.pm >>> @@ -114,6 +114,8 @@ our $cmddef = { >>> migrate => [ "PVE::API2::HA::Resources", 'migrate', ['sid', 'node'] ], >>> relocate => [ "PVE::API2::HA::Resources", 'relocate', ['sid', 'node'] >>> ], >>> + stop => [ "PVE::API2::HA::Resources", 'stop', ['sid', 'timeout'] ], >>> + >>> }; >>> 1; >>> ___ pve-devel mailing list pve-devel@pve.proxmox.com https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
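For reference, PVE::CLIHandler accepts nested $cmddef hashes for sub-commands, so the 'crm-command' layout suggested above might look roughly like this in src/PVE/CLI/ha_manager.pm (illustrative sketch only; names and the final shape are not decided here):

```perl
# Hypothetical sketch of the suggested 'crm-command' sub-command group;
# the old top-level migrate/relocate entries could stay as aliases for
# backward compatibility.
our $cmddef = {
    # ... existing commands ...
    'crm-command' => {
	stop     => [ "PVE::API2::HA::Resources", 'stop',     ['sid', 'timeout'] ],
	migrate  => [ "PVE::API2::HA::Resources", 'migrate',  ['sid', 'node'] ],
	relocate => [ "PVE::API2::HA::Resources", 'relocate', ['sid', 'node'] ],
    },
};
```

This would make the invocation read as `ha-manager crm-command stop <sid> <timeout>`, cleanly separating CRM requests from the plain resource-config commands.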
[pve-devel] applied: [PATCH zfsonlinux] followup: fpu struct initialized member was removed with 5.2
That's why it was guarded with the "HAVE_KERNEL_FPU_INITIALIZED" defined in the first place.. Signed-off-by: Thomas Lamprecht --- ...r-save-restore-is-also-required-on-5.patch | 26 ++- 1 file changed, 14 insertions(+), 12 deletions(-) diff --git a/debian/patches/0008-SIMD-FPU-register-save-restore-is-also-required-on-5.patch b/debian/patches/0008-SIMD-FPU-register-save-restore-is-also-required-on-5.patch index db0c8a3..bd60e80 100644 --- a/debian/patches/0008-SIMD-FPU-register-save-restore-is-also-required-on-5.patch +++ b/debian/patches/0008-SIMD-FPU-register-save-restore-is-also-required-on-5.patch @@ -5,16 +5,16 @@ Subject: [PATCH] [SIMD]: FPU register save/restore is also required on 5.0 kernels NOTE: the kernel needs to have the copy_kernel_to_xregs_err, -copy_kernel_to_fxregs_err and copy_kernel_to_fregs_err funcitons +copy_kernel_to_fxregs_err and copy_kernel_to_fregs_err functions backported for this to work. Signed-off-by: Thomas Lamprecht --- - include/linux/simd_x86.h | 11 +-- - 1 file changed, 1 insertion(+), 10 deletions(-) + include/linux/simd_x86.h | 11 --- + 1 file changed, 4 insertions(+), 7 deletions(-) diff --git a/include/linux/simd_x86.h b/include/linux/simd_x86.h -index edd456098..13aa77345 100644 +index edd456098..98503a29e 100644 --- a/include/linux/simd_x86.h +++ b/include/linux/simd_x86.h @@ -181,7 +181,6 @@ kfpu_begin(void) @@ -25,21 +25,23 @@ index edd456098..13aa77345 100644 /* * The current FPU registers need to be preserved by kfpu_begin() * and restored by kfpu_end(). This is required because we can -@@ -190,20 +189,13 @@ kfpu_begin(void) +@@ -190,11 +189,11 @@ kfpu_begin(void) * context switch. */ copy_fpregs_to_fpstate(¤t->thread.fpu); -#elif defined(HAVE_KERNEL_FPU_INITIALIZED) -- /* ++ ++ ++#if defined(HAVE_KERNEL_FPU_INITIALIZED) + /* - * There is no need to preserve and restore the FPU registers. - * They will always be restored from the task's stored FPU state - * when switching contexts. 
-- */ -+ ++ * Was removed with 5.2 as it was always set to 1 there +*/ WARN_ON_ONCE(current->thread.fpu.initialized == 0); --#endif - } - + #endif +@@ -203,7 +202,6 @@ kfpu_begin(void) static inline void kfpu_end(void) { @@ -47,7 +49,7 @@ index edd456098..13aa77345 100644 union fpregs_state *state = ¤t->thread.fpu.state; int error; -@@ -215,7 +207,6 @@ kfpu_end(void) +@@ -215,7 +213,6 @@ kfpu_end(void) error = copy_kernel_to_fregs_err(&state->fsave); } WARN_ON_ONCE(error); -- 2.20.1 ___ pve-devel mailing list pve-devel@pve.proxmox.com https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
Re: [pve-devel] [PATCH v2 qemu-server 02/12] Add QEMU CPU flag querying helpers
On 9/30/19 12:58 PM, Stefan Reiter wrote: > * query_understood_cpu_flags returns all flags that QEMU/KVM knows about Why isn't that information generated at pve-qemu build time and shipped in /usr/share or the like? It won't change independently of qemu, so that could make sense? It would swap the way more expensive run_command calls for a simple file read. > * query_supported_cpu_flags returns all flags that QEMU/KVM can use on > this particular host. > > To get supported flags, a temporary VM is started with QEMU, so we can > issue the "query-cpu-model-expansion" QMP command. This is how libvirt > queries supported flags for its "host-passthrough" CPU type. > query_supported_cpu_flags is thus rather slow and shouldn't be called > unnecessarily. > > Currently only supports x86_64, because QEMU-aarch64 doesn't provide the > necessary querying functions. > > Signed-off-by: Stefan Reiter > --- > ___ pve-devel mailing list pve-devel@pve.proxmox.com https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
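A sketch of that suggestion: generate the flag list once when pve-qemu is packaged, then have query_understood_cpu_flags just read the shipped file. Both the path and the file are hypothetical here; nothing ships such a file today:

```perl
use strict;
use warnings;

# Hypothetical replacement reading a flag list shipped by the pve-qemu
# package (one flag per line) instead of spawning a QEMU process.
sub query_understood_cpu_flags {
    my $path = '/usr/share/pve-qemu-kvm/understood-cpu-flags';
    open(my $fh, '<', $path) or die "failed to open '$path' - $!\n";
    chomp(my @flags = <$fh>);
    close($fh);
    return \@flags;
}
```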
Re: [pve-devel] [PATCH v2 qemu-server 04/12] Adapt CPUConfig to handle custom models
On 9/30/19 12:58 PM, Stefan Reiter wrote: > Turn CPUConfig into a SectionConfig with parsing/writing support for > custom CPU models. IO is handled using cfs. > > The "custom" parameter provides differentiation between custom and > default types, even if the name is the same ('namespacing'). > > Signed-off-by: Stefan Reiter > --- > PVE/QemuServer/CPUConfig.pm | 59 ++--- > 1 file changed, 55 insertions(+), 4 deletions(-) > > diff --git a/PVE/QemuServer/CPUConfig.pm b/PVE/QemuServer/CPUConfig.pm > index c994228..3fde987 100644 > --- a/PVE/QemuServer/CPUConfig.pm > +++ b/PVE/QemuServer/CPUConfig.pm > @@ -3,6 +3,8 @@ package PVE::QemuServer::CPUConfig; > use strict; > use warnings; > use PVE::JSONSchema; > +use PVE::Cluster qw(cfs_register_file cfs_read_file); > +use base qw(PVE::SectionConfig); > > use base 'Exporter'; > our @EXPORT_OK = qw( > @@ -11,6 +13,15 @@ get_cpu_options > qemu_machine_feature_enabled > ); > > +my $default_filename = "cpu-models.conf"; > +cfs_register_file($default_filename, > + sub { PVE::QemuServer::CPUConfig->parse_config(@_); }, > + sub { PVE::QemuServer::CPUConfig->write_config(@_); }); > + > +sub load_custom_model_conf { > +return cfs_read_file($default_filename); > +} > + > my $cpu_vendor_list = { > # Intel CPUs > 486 => 'GenuineIntel', > @@ -80,11 +91,27 @@ my $cpu_flag = qr/[+-](@{[join('|', > @supported_cpu_flags)]})/; > > our $cpu_fmt = { > cputype => { > - description => "Emulated CPU type.", > + description => "Emulated CPU type. Can be default or custom name.", > type => 'string', > - enum => [ sort { "\L$a" cmp "\L$b" } keys %$cpu_vendor_list ], > + format_description => 'string', > default => 'kvm64', > default_key => 1, > + optional => 1, > +}, > +custom => { > + description => "True if 'cputype' is a custom model, false if > otherwise.", maybe negate it and call it built-in? 
We use that as the name for Permission roles which are generated by code; we probably want to call this 'built-in' in the UI as well, if only to re-use translations ;) > + type => 'boolean', > + default => 0, > + optional => 1, > +}, > +'reported-model' => { > + description => "CPU model and vendor to report to the guest. Must be a > QEMU/KVM supported model." > + . " Only valid for custom CPU model definitions, default > models will always report themselves to the guest OS.", > + type => 'string', > + enum => [ sort { "\L$a" cmp "\L$b" } keys %$cpu_vendor_list ], we do not use \L at all in our code, AFAICT, just use lc(), which is a bit more expressive, or at least already used all over the place > + default => 'kvm64', > + custom_only => 1, ^^^ That's a fake JSONSchema attribute, we do not do those normally, either some mechanism gets added to JSONSchema to really represent this in general or just omit it. > + optional => 1, > }, > hidden => { > description => "Do not identify as a KVM virtual machine.", > @@ -102,14 +129,34 @@ our $cpu_fmt = { > flags => { > description => "List of additional CPU flags separated by ';'." >. " Use '+FLAG' to enable, '-FLAG' to disable a flag." > - . " Currently supported flags: @{[join(', ', > @supported_cpu_flags)]}.", > + . " Custom CPU models can specify any flag supported by" > + . " QEMU/KVM, VM-specific flags must be from the following" > + . " set for security reasons: @{[join(', ', > @supported_cpu_flags)]}.", is the last part still true if the pattern restriction to $cpu_flag is lifted below?
> format_description => '+FLAG[;-FLAG...]', > type => 'string', > - pattern => qr/$cpu_flag(;$cpu_flag)*/, > + pattern => qr/[+-][a-zA-Z0-9\-_\.]+(;[+-][a-zA-Z0-9\-_\.]+)*/, > optional => 1, > }, > }; > > +# Section config settings > +my $defaultData = { > +# shallow copy, since SectionConfig modifies propertyList internally > +propertyList => { %$cpu_fmt }, > +}; > + > +sub private { > +return $defaultData; > +} > + > +sub options { > +return { %$cpu_fmt }; > +} > + > +sub type { > +return 'cpu-model'; > +} > + > # Print a QEMU device node for a given VM configuration for hotplugging CPUs > sub print_cpu_device { > my ($conf, $id) = @_; > @@ -248,4 +295,8 @@ sub qemu_machine_feature_enabled { >$current_minor >= $version_minor); > } > > + > +__PACKAGE__->register(); > +__PACKAGE__->init(); > + > 1; > ___ pve-devel mailing list pve-devel@pve.proxmox.com https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
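With the SectionConfig in place, a custom model entry in cpu-models.conf (stored on pmxcfs) would presumably look something like the following; the section type matches the patch, the model name and values are made up for illustration:

```
# /etc/pve/cpu-models.conf (illustrative; exact keys depend on the
# final implementation)
cpu-model: custom-skylake-aes
    reported-model Skylake-Client
    flags +aes;+pcid
```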
Re: [pve-devel] [PATCH v2 qemu-server 05/12] Add overrides and convenience functions to CPUConfig
On 9/30/19 12:58 PM, Stefan Reiter wrote: > Add two overrides to avoid writing redundant information to the config > file. > > get_model_by_name is used to return a cpu config with default values > filled out. > > Signed-off-by: Stefan Reiter > --- > PVE/QemuServer/CPUConfig.pm | 48 + > 1 file changed, 48 insertions(+) > > diff --git a/PVE/QemuServer/CPUConfig.pm b/PVE/QemuServer/CPUConfig.pm > index 3fde987..125b636 100644 > --- a/PVE/QemuServer/CPUConfig.pm > +++ b/PVE/QemuServer/CPUConfig.pm > @@ -157,6 +157,54 @@ sub type { > return 'cpu-model'; > } > > +sub parse_section_header { > +my ($class, $line) = @_; > + > +my ($type, $sectionId, $errmsg, $config) = > + $class->SUPER::parse_section_header($line); > + > +return undef if !$type; > +return ($type, $sectionId, $errmsg, { > + # to avoid duplicate model name in config file, parse id as cputype > + cputype => $sectionId, > + # models parsed from file are by definition always custom > + custom => 1, > +}); > +} > + > +sub write_config { > +my ($class, $filename, $cfg) = @_; > + > +# remove redundant properties the comments from above parse_section_header are OK, but just looking at this one is a bit confusing.. maybe also add here also something like delete $cfg->{ids}->{$prop}->{custom}; # all models we write out are custom delete $cfg->{ids}->{$prop}->{cputype}; # saved in the section header also could it make sense to add asserts/checks that cputype == $prop and custom == 1 ? > +for my $prop (keys %{$cfg->{ids}}) { > + delete $cfg->{ids}->{$prop}->{custom}; > + delete $cfg->{ids}->{$prop}->{cputype}; > +} > + > +$class->SUPER::write_config($filename, $cfg); > +} > + > +# Use this to get a single model in the format described by $cpu_fmt. I'd add # fills in defaults from schema if value is not set in config > +# Returns undef for unknown $name. 
> +sub get_model_by_name { > +my ($conf, $name) = @_; > + > +return undef if !defined($conf->{ids}->{$name}); > + > +my $model = {}; > +for my $property (keys %{$defaultData->{propertyList}}) { > + next if $property eq 'type'; > + > + if (my $value = $conf->{ids}->{$name}->{$property}) { your hash accesses are a bit too long for my taste ^^ if it's used only once it can be OK, but otherwise introduce some intermediate variables, especially with over 3 hash layers of access. At the start you could already do: my $entry = $conf->{ids}->{$name}; return undef if !defined($entry); and then use it here > + $model->{$property} = $value; > + } elsif (my $default = > $defaultData->{propertyList}->{$property}->{default}) { > + $model->{$property} = $default; > + } > +} > + > +return $model; > +} > + > # Print a QEMU device node for a given VM configuration for hotplugging CPUs > sub print_cpu_device { > my ($conf, $id) = @_; > ___ pve-devel mailing list pve-devel@pve.proxmox.com https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
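For context, a hedged usage sketch of the helper under review; the model name is made up, and whether it resolves depends on the contents of cpu-models.conf:

```perl
# Hypothetical caller: look up a custom model, with schema defaults
# filled in for any property not set in cpu-models.conf.
my $conf  = PVE::QemuServer::CPUConfig::load_custom_model_conf();
my $model = PVE::QemuServer::CPUConfig::get_model_by_name($conf, 'myexamplecpu');
die "unknown custom CPU model\n" if !defined($model);
# e.g. $model->{'reported-model'} falls back to the schema default 'kvm64'
```

Note that, as written, the `if (my $value = ...)` branch also skips legitimately false values ('0' or the empty string) and falls through to the default, which may or may not be intended.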
Re: [pve-devel] [PATCH v2 qemu-server 06/12] Verify VM-specific CPU configs seperately
On 9/30/19 12:58 PM, Stefan Reiter wrote: > $cpu_fmt is being reused for custom CPUs as well as VM-specific CPU > settings. The "pve-vm-cpu-conf" format is introduced to verify a config > specifically for use as VM-specific settings. > > Signed-off-by: Stefan Reiter > --- > PVE/QemuServer.pm | 2 +- > PVE/QemuServer/CPUConfig.pm | 68 +++-- > 2 files changed, 67 insertions(+), 3 deletions(-) > > diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm > index a95006f..abf5578 100644 > --- a/PVE/QemuServer.pm > +++ b/PVE/QemuServer.pm > @@ -505,7 +505,7 @@ EODESCR > optional => 1, > description => "Emulated CPU type.", > type => 'string', > - format => $PVE::QemuServer::CPUConfig::cpu_fmt, > + format => 'pve-vm-cpu-conf', > }, > parent => get_standard_option('pve-snapshot-name', { > optional => 1, > diff --git a/PVE/QemuServer/CPUConfig.pm b/PVE/QemuServer/CPUConfig.pm > index 125b636..61bd024 100644 > --- a/PVE/QemuServer/CPUConfig.pm > +++ b/PVE/QemuServer/CPUConfig.pm > @@ -87,9 +87,9 @@ my @supported_cpu_flags = ( > 'hv-evmcs', > 'aes' > ); > -my $cpu_flag = qr/[+-](@{[join('|', @supported_cpu_flags)]})/; > +my $cpu_flag_re = qr/([+-])(@{[join('|', @supported_cpu_flags)]})/; > > -our $cpu_fmt = { > +my $cpu_fmt = { > cputype => { > description => "Emulated CPU type. Can be default or custom name.", > type => 'string', > @@ -139,6 +139,70 @@ our $cpu_fmt = { > }, > }; > > +# $cpu_fmt describes both the CPU config passed as part of a VM config, as > well > +# as the definition of a custom CPU model. There are some slight differences > +# though, which we catch in the custom verification function below. > +sub verify_cpu_conf_basic { s/verify/parse/ ? > +my ($cpu, $noerr) = @_; > + > +eval { > +$cpu = PVE::JSONSchema::parse_property_string($cpu_fmt, $cpu); > +}; Two things: * I do not really like variable re-use, especially in such a weakly typed and not too strictly "compiler"-checked language as Perl..
So please call the parameter $cpu_str or the like to make the difference clear. * I find it a bit easier to read: my $cpu = eval { PVE::JSONSchema::parse_property_string($cpu_fmt, $cpu_str) }; but that's rather personal taste, so see as nit ;) > +if ($@) { > +die $@ if !$noerr; > +return undef; > +} > + > +# required, but can't be made "optional => 0" since it's not included as > a > +# seperate property in config file > +if (!$cpu->{cputype}) { > + die "CPU is missing cputype\n" if !$noerr; > + return undef; > +} > + > +return $cpu; > +} > + > +PVE::JSONSchema::register_format('pve-vm-cpu-conf', \&verify_vm_cpu_conf); > +sub verify_vm_cpu_conf { > +my ($cpu, $noerr) = @_; > + > +$cpu = verify_cpu_conf_basic($cpu, $noerr); > +return undef if !$cpu; > + > +my $cputype = $cpu->{cputype}; > + > +# Model must exist > +if ($cpu->{custom}) { > + my $config = load_custom_model_conf(); > + return $cpu if $config && defined($config->{ids}->{$cputype}); > + > + die "Custom cputype '$cputype' not found\n" if !$noerr; > + return undef; > +} else { > + return $cpu if defined($cpu_vendor_list->{$cputype}); > + > + die "Default cputype '$cputype' not found\n" if !$noerr; > + return undef; > +} > + > +if ($cpu->{flags} && $cpu->{flags} !~ m/$cpu_flag_re(;$cpu_flag_re)*/) { > + die "VM-specific CPU flags must be a subset of: @{[join(', ', > @supported_cpu_flags)]}\n" > + if !$noerr; > + return undef; > +} > + > +for my $prop (keys %$cpu) { > + if ($cpu->{$prop}->{custom_only}) { > + die "Property '$prop' not allowed in VM-specific CPU config.\n" > + if !$noerr; > + return undef; > + } > +} > + > +return $cpu; > +} > + > # Section config settings > my $defaultData = { > # shallow copy, since SectionConfig modifies propertyList internally > ___ pve-devel mailing list pve-devel@pve.proxmox.com https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
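To make the new entry point concrete, a hedged usage sketch of the verifier above; the model name is made up, and the call only succeeds if a matching custom model exists in cpu-models.conf:

```perl
# Hypothetical caller of the new format verifier; without $noerr it dies
# ("Custom cputype ... not found") unless the model is defined.
my $cpu = PVE::QemuServer::CPUConfig::verify_vm_cpu_conf(
    'cputype=myexamplecpu,custom=1,flags=+aes');
print "parsed cputype: $cpu->{cputype}\n";
```

Incidentally, as quoted above both branches of the custom/default `if`/`else` return, so the subsequent flags and custom_only checks look unreachable; that may be worth a second look.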
Re: [pve-devel] [PATCH v2 qemu-server 09/12] fix #2318: allow phys-bits and host-phys-bits CPU settings
On 9/30/19 12:58 PM, Stefan Reiter wrote: > Can be specified for a particular VM or via a custom CPU model (VM takes > precedence). Maybe add a hint for why this is needed at all here too, something like: > allows to boot a VM with more than 1TB of memory > ... ? else I've no idea what this is for without doing research myself.. > > Signed-off-by: Stefan Reiter > --- > PVE/QemuServer/CPUConfig.pm | 24 > 1 file changed, 24 insertions(+) > > diff --git a/PVE/QemuServer/CPUConfig.pm b/PVE/QemuServer/CPUConfig.pm > index e595a69..f0bfeee 100644 > --- a/PVE/QemuServer/CPUConfig.pm > +++ b/PVE/QemuServer/CPUConfig.pm > @@ -137,6 +137,19 @@ my $cpu_fmt = { > pattern => qr/[+-][a-zA-Z0-9\-_\.]+(;[+-][a-zA-Z0-9\-_\.]+)*/, > optional => 1, > }, > +'phys-bits' => { > + type => 'integer', > + minimum => 1, does 1 really work or make sense? maybe set a minimum which is a bit more relevant in practice, like 8 or even 16.. > + maximum => 64, > + description => "The physical memory address bits that are reported to > the guest OS. Should be smaller or equal to the host's.", > + optional => 1, > +}, > +'host-phys-bits' => { > + type => 'boolean', > + default => 0, > + description => "Whether to report the host's physical memory address > bits.
Overrides 'phys-bits' when set.", > + optional => 1, > +}, > }; > > # $cpu_fmt describes both the CPU config passed as part of a VM config, as > well > @@ -445,6 +458,17 @@ sub get_cpu_options { > $cpu .= resolve_cpu_flags($pve_flags, $hv_flags, $custom_cputype_flags, > $vm_flags, $pve_forced_flags); > > +my $phys_bits = ''; > +foreach my $conf ($custom_cpuconf, $cpuconf) { > + next if !defined($conf); > + if ($conf->{'host-phys-bits'}) { > + $phys_bits = ",host-phys-bits=true"; > + } elsif ($conf->{'phys-bits'}) { > + $phys_bits = ",phys-bits=$conf->{'phys-bits'}"; > + } > +} > +$cpu .= $phys_bits; > + > return ('-cpu', $cpu); > } > > ___ pve-devel mailing list pve-devel@pve.proxmox.com https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
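The net effect on the generated QEMU command line would be roughly the following; flags are abbreviated and the bit value is illustrative:

```
# 'phys-bits' set for the VM or its custom model:
-cpu kvm64,enforce,+aes,phys-bits=40
# 'host-phys-bits' enabled (takes precedence over 'phys-bits'):
-cpu kvm64,enforce,+aes,host-phys-bits=true
```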
[pve-devel] applied: [PATCH v2 qemu-server 12/12] cfg2cmd: fix descriptions of cfg2cmd test cases
On 9/30/19 12:58 PM, Stefan Reiter wrote: > Signed-off-by: Stefan Reiter > --- > > Independent from the rest of the series. > > test/cfg2cmd/i440fx-win10-hostpci.conf | 2 +- > test/cfg2cmd/q35-linux-hostpci.conf| 2 +- > 2 files changed, 2 insertions(+), 2 deletions(-) > > diff --git a/test/cfg2cmd/i440fx-win10-hostpci.conf > b/test/cfg2cmd/i440fx-win10-hostpci.conf > index 0df1378..2ab2dda 100644 > --- a/test/cfg2cmd/i440fx-win10-hostpci.conf > +++ b/test/cfg2cmd/i440fx-win10-hostpci.conf > @@ -1,4 +1,4 @@ > -# TEST: Config with q35, NUMA, hostpci passthrough, EFI & Windows > +# TEST: Config with i440fx, NUMA, hostpci passthrough, EFI & Windows > bios: ovmf > bootdisk: scsi0 > cores: 1 > diff --git a/test/cfg2cmd/q35-linux-hostpci.conf > b/test/cfg2cmd/q35-linux-hostpci.conf > index 8fd2193..749f983 100644 > --- a/test/cfg2cmd/q35-linux-hostpci.conf > +++ b/test/cfg2cmd/q35-linux-hostpci.conf > @@ -1,4 +1,4 @@ > -# TEST: Config with q35, NUMA, hostpci passthrough, EFI & Windows > +# TEST: Config with q35, NUMA, hostpci passthrough, EFI & Linux > bios: ovmf > bootdisk: scsi0 > cores: 1 > applied. For such patches which are independent of the series and are rather general fixes, please either: * move it to the front of the series * send it as a separate patch outside of a series thanks! ___ pve-devel mailing list pve-devel@pve.proxmox.com https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel