[pve-devel] [PATCH manager] pve6to7: add check for 'lxc.cgroup.' keys in container config

2021-07-07 Thread Stoiko Ivanov
The check is rather straightforward - and might help users who
pass devices through to their containers.

Reported in our community forum:
https://forum.proxmox.com/threads/pve-7-0-lxc-intel-quick-sync-passtrough-not-working-anymore.92025/

Signed-off-by: Stoiko Ivanov 
---
Tested quickly by pasting the lxc.cgroup.devices keys from the thread into a
container config.

PVE/CLI/pve6to7.pm | 25 +++++++++++++++++++++++++
 1 file changed, 25 insertions(+)

diff --git a/PVE/CLI/pve6to7.pm b/PVE/CLI/pve6to7.pm
index 629d6935..a4a0bc67 100644
--- a/PVE/CLI/pve6to7.pm
+++ b/PVE/CLI/pve6to7.pm
@@ -995,6 +995,30 @@ sub check_containers_cgroup_compat {
 }
 };
 
+sub check_lxc_conf_keys {
+my $kernel_cli = PVE::Tools::file_get_contents('/proc/cmdline');
+if ($kernel_cli =~ /systemd.unified_cgroup_hierarchy=0/){
+   log_skip("System explicitly configured for legacy hybrid cgroup hierarchy.");
+   return;
+}
+
+log_info("Checking container configs for deprecated lxc.cgroup entries");
+
+my $affected_ct = [];
+my $cts = PVE::LXC::config_list();
+for my $vmid (sort { $a <=> $b } keys %$cts) {
+   my $lxc_raw_conf = PVE::LXC::Config->load_config($vmid)->{lxc};
+   push @$affected_ct, "CT $vmid"  if (grep (@$_[0] =~ /^lxc\.cgroup\./, @$lxc_raw_conf));
+}
+if (scalar($affected_ct->@*) > 0) {
+   log_warn("Config of the following containers contains 'lxc.cgroup' keys, which will be ".
+	"ignored in a unified cgroupv2 system:\n" .
+	join(", ", $affected_ct->@*));
+} else {
+   log_pass("No legacy 'lxc.cgroup' keys found.");
+}
+}
+
 sub check_misc {
 print_header("MISCELLANEOUS CHECKS");
my $ssh_config = eval { PVE::Tools::file_get_contents('/root/.ssh/config') };
@@ -1090,6 +1114,7 @@ sub check_misc {
 check_custom_pool_roles();
 check_description_lengths();
 check_storage_content();
+check_lxc_conf_keys();
 }
 
 __PACKAGE__->register_method ({
-- 
2.30.2
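For readers following along, the detection logic of the Perl check above can be sketched in a few lines; this is an illustrative JavaScript restatement (the function name and sample configs are hypothetical, not the pve6to7 API):

```javascript
// Sketch of the pve6to7 check above: skip when the kernel command line pins
// the legacy hybrid hierarchy, otherwise flag containers whose raw lxc config
// still uses 'lxc.cgroup.' (cgroup v1) keys, which PVE 7 ignores.
function findAffectedContainers(kernelCmdline, containerConfigs) {
    if (kernelCmdline.includes('systemd.unified_cgroup_hierarchy=0')) {
        return null; // legacy hybrid hierarchy explicitly configured, check skipped
    }
    let affected = [];
    for (const [vmid, lxcRawConf] of Object.entries(containerConfigs)) {
        // lxcRawConf is a list of [key, value] pairs, mirroring the 'lxc'
        // entry returned by PVE::LXC::Config->load_config($vmid)
        if (lxcRawConf.some(([key]) => /^lxc\.cgroup\./.test(key))) {
            affected.push(`CT ${vmid}`);
        }
    }
    return affected;
}

// hypothetical sample data
const configs = {
    100: [['lxc.cgroup.devices.allow', 'c 226:0 rwm']],  // legacy v1 key
    101: [['lxc.cgroup2.devices.allow', 'c 226:0 rwm']], // v2 key, fine on PVE 7
};
console.log(findAffectedContainers('quiet', configs)); // → [ 'CT 100' ]
```

Note that the regex only matches `lxc.cgroup.` with a trailing dot, so the cgroup-v2 counterpart keys (`lxc.cgroup2.`) are deliberately not flagged.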



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH manager] ui: ceph/Status: fix recovery percentage display

2021-07-07 Thread Dominik Csapak
we incorrectly used 'total' as 100% of the to-be-recovered objects here,
but that contains the total number of *bytes*.

rename 'toRecover' to better reflect its meaning and use that as
100% of the objects.

reported by a user:
https://forum.proxmox.com/threads/bug-ceph-recovery-bar-not-showing-percentage.91782/

Signed-off-by: Dominik Csapak 
---
 www/manager6/ceph/Status.js | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/www/manager6/ceph/Status.js b/www/manager6/ceph/Status.js
index e92c698b..52563605 100644
--- a/www/manager6/ceph/Status.js
+++ b/www/manager6/ceph/Status.js
@@ -321,14 +321,14 @@ Ext.define('PVE.node.CephStatus', {
let unhealthy = degraded + unfound + misplaced;
// update recovery
if (pgmap.recovering_objects_per_sec !== undefined || unhealthy > 0) {
-   let toRecover = pgmap.misplaced_total || pgmap.unfound_total || pgmap.degraded_total || 0;
-   if (toRecover === 0) {
+   let totalRecovery = pgmap.misplaced_total || pgmap.unfound_total || pgmap.degraded_total || 0;
+   if (totalRecovery === 0) {
	return; // FIXME: unexpected return and leaves things possible visible when it shouldn't?
}
-   let recovered = toRecover - unhealthy || 0;
+   let recovered = totalRecovery - unhealthy || 0;
let speed = pgmap.recovering_bytes_per_sec || 0;
 
-   let recoveryRatio = recovered / total;
+   let recoveryRatio = recovered / totalRecovery;
let txt = `${(recoveryRatio * 100).toFixed(2)}%`;
if (speed > 0) {
let obj_per_sec = speed / (4 * 1024 * 1024); // 4 MiB per Object
-- 
2.30.2
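The corrected math can be illustrated with a small standalone sketch; the function and sample pgmap numbers below are hypothetical, but the key point matches the patch: the denominator must be the object total, not the byte total:

```javascript
// Sketch of the fixed recovery-percentage computation. 'totalRecovery' is an
// object count (misplaced_total/unfound_total/degraded_total), while the old
// code divided by a *byte* total, producing a bogus percentage.
function recoveryRatio(pgmap) {
    let degraded = pgmap.degraded_objects || 0;
    let unfound = pgmap.unfound_objects || 0;
    let misplaced = pgmap.misplaced_objects || 0;
    let unhealthy = degraded + unfound + misplaced;
    let totalRecovery = pgmap.misplaced_total || pgmap.unfound_total || pgmap.degraded_total || 0;
    if (totalRecovery === 0) {
        return null; // nothing to recover, no ratio to show
    }
    let recovered = totalRecovery - unhealthy || 0;
    return recovered / totalRecovery;
}

// hypothetical: 1000 object copies take part in recovery, 250 still degraded
console.log(recoveryRatio({ degraded_objects: 250, degraded_total: 1000 })); // → 0.75
```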






[pve-devel] [PATCH manager] cluster resources: add cgroup-mode to node properties

2021-07-07 Thread Fabian Ebner
so the frontend has the information readily available.

Suggested-by: Thomas Lamprecht 
Signed-off-by: Fabian Ebner 
---
 PVE/API2/Cluster.pm | 12 
 PVE/Service/pvestatd.pm | 11 +++
 2 files changed, 23 insertions(+)

diff --git a/PVE/API2/Cluster.pm b/PVE/API2/Cluster.pm
index 3b918e55..11d6aa3a 100644
--- a/PVE/API2/Cluster.pm
+++ b/PVE/API2/Cluster.pm
@@ -306,6 +306,11 @@ __PACKAGE__->register_method({
type => 'string',
optional => 1,
},
+   'cgroup-mode' => {
+	    description => "The cgroup mode the node operates under (when type == node).",
+   type => 'integer',
+   optional => 1,
+   },
},
},
 },
@@ -410,10 +415,17 @@ __PACKAGE__->register_method({
}
}
 
+   my $cgroup_modes = PVE::Cluster::get_node_kv("cgroup-mode");
+
if (!$param->{type} || $param->{type} eq 'node') {
foreach my $node (@$nodelist) {
my $can_audit = $rpcenv->check($authuser, "/nodes/$node", [ 'Sys.Audit' ], 1);
my $entry = PVE::API2Tools::extract_node_stats($node, $members, $rrd, !$can_audit);
+
+   if (defined(my $mode = $cgroup_modes->{$node})) {
+   $entry->{'cgroup-mode'} = int($mode);
+   }
+
push @$res, $entry;
}
}
diff --git a/PVE/Service/pvestatd.pm b/PVE/Service/pvestatd.pm
index 7193388c..22173d63 100755
--- a/PVE/Service/pvestatd.pm
+++ b/PVE/Service/pvestatd.pm
@@ -122,6 +122,15 @@ my $generate_rrd_string = sub {
 return join(':', map { $_ // 'U' } @$data);
 };
 
+my $broadcast_cgroup_mode = sub {
+my $cgroup_mode = eval { PVE::CGroup::cgroup_mode(); };
+if (my $err = $@) {
+   syslog('err', "cgroup mode error: $err");
+}
+
+PVE::Cluster::broadcast_node_kv("cgroup-mode", $cgroup_mode);
+};
+
 sub update_node_status {
 my ($status_cfg) = @_;
 
@@ -151,6 +160,8 @@ sub update_node_status {
 # everything not free is considered to be used
 my $dused = $dinfo->{blocks} - $dinfo->{bfree};
 
+$broadcast_cgroup_mode->();
+
 my $ctime = time();
 
 my $data = $generate_rrd_string->(
-- 
2.30.2
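The merge step on the API side is simple enough to sketch; the following JavaScript restatement is illustrative (data shapes are assumptions, not the real cluster API), but it shows the pattern of the patch: values broadcast via the node key/value store arrive as strings and are attached to node entries as integers:

```javascript
// Sketch of the lookup side of the patch: pvestatd broadcasts each node's
// cgroup mode as a key/value pair; the cluster resources call merges it into
// the node entries so the frontend has it readily available.
function attachCgroupModes(nodeEntries, cgroupModes) {
    for (const entry of nodeEntries) {
        const mode = cgroupModes[entry.node];
        if (mode !== undefined) {
            entry['cgroup-mode'] = parseInt(mode, 10); // kv store holds strings
        }
    }
    return nodeEntries;
}

const entries = attachCgroupModes(
    [{ node: 'pve1' }, { node: 'pve2' }],
    { pve1: '2' }, // pve2 has not broadcast a mode yet (e.g. older pvestatd)
);
console.log(entries); // → [ { node: 'pve1', 'cgroup-mode': 2 }, { node: 'pve2' } ]
```

Nodes that have not broadcast the key simply keep no `cgroup-mode` property, matching the `optional => 1` schema entry above.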






Re: [pve-devel] Proxmox 7.0 beta to 7.0-8 - Now unable to start VM - cgroup isn't numeric?

2021-07-07 Thread Thomas Lamprecht
Hi,

On 07.07.21 11:44, Victor Hooi wrote:
> I recently upgraded from the Proxmox 7.0 beta, to the latest 7.0-8 release.
> 
> However, when I try to start a Windows VM that I created before, I now get
> the following error:
> 
> Argument "cgroup v1: 1024, cgroup v2: 100" isn't numeric in numeric ge (>=)
>> at /usr/share/perl5/PVE/QemuServer.pm line 5312.
>> TASK ERROR: start failed: org.freedesktop.DBus.Error.InvalidArgs: Value
>> specified in CPUWeight is out of range
> 
> 
> Did something change between the beta and the release? Is there any way to
> fix the above?

A qemu-server version with a regression made it to the pvetest repository just
a bit ago; it is fixed now with qemu-server 7.0-9, which superseded it.





Re: [pve-devel] [PATCH manager] ui: ceph/Status: fix recovery percentage display

2021-07-07 Thread Thomas Lamprecht
On 07.07.21 10:47, Dominik Csapak wrote:
> we incorrectly used 'total' as 100% of the to-be-recovered objects here,
> but that contains the total number of *bytes*.
> 
> rename 'toRecover' to better reflect its meaning and use that as
> 100% of the objects.
> 
> reported by a user:
> https://forum.proxmox.com/threads/bug-ceph-recovery-bar-not-showing-percentage.91782/
> 

please note if this would need to be backported too.

> Signed-off-by: Dominik Csapak 
> ---
>  www/manager6/ceph/Status.js | 8 
>  1 file changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/www/manager6/ceph/Status.js b/www/manager6/ceph/Status.js
> index e92c698b..52563605 100644
> --- a/www/manager6/ceph/Status.js
> +++ b/www/manager6/ceph/Status.js
> @@ -321,14 +321,14 @@ Ext.define('PVE.node.CephStatus', {
>   let unhealthy = degraded + unfound + misplaced;
>   // update recovery
>   if (pgmap.recovering_objects_per_sec !== undefined || unhealthy > 0) {
> -	let toRecover = pgmap.misplaced_total || pgmap.unfound_total || pgmap.degraded_total || 0;
> -	if (toRecover === 0) {
> +	let totalRecovery = pgmap.misplaced_total || pgmap.unfound_total || pgmap.degraded_total || 0;

why change the variable name? `toRecover` was still OK. Or at least I do not see
any improvement in making it easier to understand with `totalRecovery`; if byte vs.
objects were the issue of confusion, why not address that by using `toRecoverObjects`
or the like

Also, why not add those metrics up? Misplaced and unfound do not have any
overlap, IIRC, so it would def. make sense for those - for degraded I'm not so sure
about overlap with the other two from the top of my head though.

> + if (totalRecovery === 0) {
>   return; // FIXME: unexpected return and leaves things possible visible when it shouldn't?
>   }
> - let recovered = toRecover - unhealthy || 0;
> + let recovered = totalRecovery - unhealthy || 0;
>   let speed = pgmap.recovering_bytes_per_sec || 0;
>  
> - let recoveryRatio = recovered / total;
> + let recoveryRatio = recovered / totalRecovery;
>   let txt = `${(recoveryRatio * 100).toFixed(2)}%`;
>   if (speed > 0) {
>   let obj_per_sec = speed / (4 * 1024 * 1024); // 4 MiB per Object
> 






[pve-devel] [PATCH manager 1/2] pve6to7: storage content: skip scanning storage if shared

2021-07-07 Thread Fabian Ebner
Shared storages are not scanned for migration either, so they cannot
be problematic in this context. Scanning them could lead to false positives
where the configuration actually is completely unproblematic:

https://forum.proxmox.com/threads/proxmox-ve-7-0-released.92007/post-401165

Signed-off-by: Fabian Ebner 
---
 PVE/CLI/pve6to7.pm | 1 +
 1 file changed, 1 insertion(+)

diff --git a/PVE/CLI/pve6to7.pm b/PVE/CLI/pve6to7.pm
index 69ed6d2e..17da70e8 100644
--- a/PVE/CLI/pve6to7.pm
+++ b/PVE/CLI/pve6to7.pm
@@ -707,6 +707,7 @@ sub check_storage_content {
 for my $storeid (sort keys $storage_cfg->{ids}->%*) {
my $scfg = $storage_cfg->{ids}->{$storeid};
 
+   next if $scfg->{shared};
	next if !PVE::Storage::storage_check_enabled($storage_cfg, $storeid, undef, 1);
 
	my $valid_content = PVE::Storage::Plugin::valid_content_types($scfg->{type});
-- 
2.20.1






[pve-devel] [PATCH manager 2/2] pve6to7: storage content: ignore misconfigured unreferenced volumes

2021-07-07 Thread Fabian Ebner
If the same local storage is configured twice with content type
separation, migration in PVE 6 would lead to the volumes being
duplicated. As that would happen for every migration, such an issue
would likely be noticed already, and in PVE 7 such configuration is
not problematic for migration anymore. Also, misconfigured
unreferenced volumes are not an issue with respect to the upgrade
itself, so just drop the check.

It's not necessary to scan storages with either 'images' or 'rootdir'
anymore, as only the log_info() remains.

Signed-off-by: Fabian Ebner 
---
 PVE/CLI/pve6to7.pm | 43 ++-
 1 file changed, 6 insertions(+), 37 deletions(-)

diff --git a/PVE/CLI/pve6to7.pm b/PVE/CLI/pve6to7.pm
index 17da70e8..7d7b09d2 100644
--- a/PVE/CLI/pve6to7.pm
+++ b/PVE/CLI/pve6to7.pm
@@ -695,15 +695,11 @@ sub check_description_lengths {
 sub check_storage_content {
 log_info("Checking storage content type configuration..");
 
-my $found_referenced;
-my $found_unreferenced;
+my $found;
 my $pass = 1;
 
 my $storage_cfg = PVE::Storage::config();
 
-my $potentially_affected = {};
-my $referenced_volids = {};
-
 for my $storeid (sort keys $storage_cfg->{ids}->%*) {
my $scfg = $storage_cfg->{ids}->{$storeid};
 
@@ -718,7 +714,8 @@ sub check_storage_content {
delete $scfg->{content}->{none}; # scan for guest images below
}
 
-   next if $scfg->{content}->{images} && $scfg->{content}->{rootdir};
+   next if $scfg->{content}->{images};
+   next if $scfg->{content}->{rootdir};
 
	# Skip 'iscsi(direct)' (and foreign plugins with potentially similiar behavior) with 'none',
	# because that means "use LUNs directly" and vdisk_list() in PVE 6.x still lists those.
@@ -739,12 +736,8 @@ sub check_storage_content {
}
my @volids = map { $_->{volid} } $res->{$storeid}->@*;
 
-   for my $volid (@volids) {
-   $potentially_affected->{$volid} = 1;
-   }
-
my $number = scalar(@volids);
-   if ($number > 0 && !$scfg->{content}->{images} && !$scfg->{content}->{rootdir}) {
+   if ($number > 0) {
	log_info("storage '$storeid' - neither content type 'images' nor 'rootdir' configured"
.", but found $number guest volume(s)");
}
@@ -753,8 +746,6 @@ sub check_storage_content {
 my $check_volid = sub {
my ($volid, $vmid, $vmtype, $reference) = @_;
 
-   $referenced_volids->{$volid} = 1 if $reference ne 'unreferenced';
-
my $guesttext = $vmtype eq 'qemu' ? 'VM' : 'CT';
my $prefix = "$guesttext $vmid - volume '$volid' ($reference)";
 
@@ -777,19 +768,14 @@ sub check_storage_content {
}
 
if (!$scfg->{content}->{$vtype}) {
-   $found_referenced = 1 if $reference ne 'unreferenced';
-   $found_unreferenced = 1 if $reference eq 'unreferenced';
+   $found = 1;
$pass = 0;
	log_warn("$prefix - storage does not have content type '$vtype' configured.");
}
 };
 
-my $guests = {};
-
 my $cts = PVE::LXC::config_list();
 for my $vmid (sort { $a <=> $b } keys %$cts) {
-   $guests->{$vmid} = 'lxc';
-
my $conf = PVE::LXC::Config->load_config($vmid);
 
my $volhash = {};
@@ -817,8 +803,6 @@ sub check_storage_content {
 
 my $vms = PVE::QemuServer::config_list();
 for my $vmid (sort { $a <=> $b } keys %$vms) {
-   $guests->{$vmid} = 'qemu';
-
my $conf = PVE::QemuConfig->load_config($vmid);
 
my $volhash = {};
@@ -849,26 +833,11 @@ sub check_storage_content {
}
 }
 
-if ($found_referenced) {
+if ($found) {
	log_warn("Proxmox VE 7.0 enforces stricter content type checks. The guests above " .
	    "might not work until the storage configuration is fixed.");
 }
 
-for my $volid (sort keys $potentially_affected->%*) {
-   next if $referenced_volids->{$volid}; # already checked
-
-   my (undef, undef, $vmid) = PVE::Storage::parse_volname($storage_cfg, $volid);
-   my $vmtype = $guests->{$vmid};
-   next if !$vmtype;
-
-   $check_volid->($volid, $vmid, $vmtype, 'unreferenced');
-}
-
-if ($found_unreferenced) {
-   log_warn("When migrating, Proxmox VE 7.0 only scans storages with the appropriate " .
-	"content types for unreferenced guest volumes.");
-}
-
 if ($pass) {
log_pass("no problems found");
 }
-- 
2.20.1






[pve-devel] applied: [PATCH manager] pve6to7: add check for 'lxc.cgroup.' keys in container config

2021-07-07 Thread Thomas Lamprecht
On 07.07.21 10:44, Stoiko Ivanov wrote:
> The check is rather straight forward - and might help users who
> passthrough devices to their containers.
> 
> Reported in our community forum:
> https://forum.proxmox.com/threads/pve-7-0-lxc-intel-quick-sync-passtrough-not-working-anymore.92025/
> 
> Signed-off-by: Stoiko Ivanov 
> ---
> Tested quickly by pasting the lxc.cgroup.devices keys from the thread into a
> container config.
> 
> PVE/CLI/pve6to7.pm | 25 +++++++++++++++++++++++++
>  1 file changed, 25 insertions(+)
> 
>

applied, thanks!

But I merged it into the (now renamed) note length check to avoid iterating and
parsing all CT configs multiple times.






Re: [pve-devel] [PATCH manager] ui: ceph/Status: fix recovery percentage display

2021-07-07 Thread Dominik Csapak

On 7/7/21 12:19 PM, Thomas Lamprecht wrote:

On 07.07.21 10:47, Dominik Csapak wrote:

we incorrectly used 'total' as 100% of the to-be-recovered objects here,
but that contains the total number of *bytes*.

rename 'toRecover' to better reflect its meaning and use that as
100% of the objects.

reported by a user:
https://forum.proxmox.com/threads/bug-ceph-recovery-bar-not-showing-percentage.91782/



please note if this would need to be backported too.


yes, i think this would be good to backport




Signed-off-by: Dominik Csapak 
---
  www/manager6/ceph/Status.js | 8 
  1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/www/manager6/ceph/Status.js b/www/manager6/ceph/Status.js
index e92c698b..52563605 100644
--- a/www/manager6/ceph/Status.js
+++ b/www/manager6/ceph/Status.js
@@ -321,14 +321,14 @@ Ext.define('PVE.node.CephStatus', {
let unhealthy = degraded + unfound + misplaced;
// update recovery
if (pgmap.recovering_objects_per_sec !== undefined || unhealthy > 0) {
-   let toRecover = pgmap.misplaced_total || pgmap.unfound_total || pgmap.degraded_total || 0;
-   if (toRecover === 0) {
+   let totalRecovery = pgmap.misplaced_total || pgmap.unfound_total || pgmap.degraded_total || 0;


why change the variable name? `toRecover` was still OK. Or at least I do not see
any improvement in making it easier to understand with `totalRecovery`; if byte vs.
objects were the issue of confusion, why not address that by using `toRecoverObjects`
or the like
i read the code and thought 'toRecover' means objects that need
recovery, but it does not. {misplaced,unfound,degraded}_total each contain
the total number of objects taking part in the recovery
(also the ones that are not unhealthy)

maybe 'totalRecoveryObjects' would make more sense?



Also, why not add those metrics up? Misplaced and unfound do not have any
overlap, IIRC, so it would def. make sense for those - for degraded I'm not so sure
about overlap with the other two from the top of my head though.


they all contain the same number
src/mon/PGMap.cc:{467,482,498} pool_sum.stats.sum.num_object_copies

but are only given if the respective category has objects that need recovery




+   if (totalRecovery === 0) {
	return; // FIXME: unexpected return and leaves things possible visible when it shouldn't?
}
-   let recovered = toRecover - unhealthy || 0;
+   let recovered = totalRecovery - unhealthy || 0;
let speed = pgmap.recovering_bytes_per_sec || 0;
  
-	let recoveryRatio = recovered / total;

+   let recoveryRatio = recovered / totalRecovery;
let txt = `${(recoveryRatio * 100).toFixed(2)}%`;
if (speed > 0) {
let obj_per_sec = speed / (4 * 1024 * 1024); // 4 MiB per Object









[pve-devel] [PATCH qemu-server] cfg2cmd: avoid io_uring with LVM and write{back, through} cache

2021-07-07 Thread Fabian Ebner
Reported in the community forum[0]. Also tried with LVM-thin, but it
doesn't seem to be affected.

See also 628937f53acde52f7257ca79f574c87a45f392e7 for the same fix for
krbd.

[0]: https://forum.proxmox.com/threads/after-upgrade-to-7-0-all-vms-dont-boot.92019/post-401017

Signed-off-by: Fabian Ebner 
---
 PVE/QemuServer.pm | 5 -
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 8fc90e2..b0fe257 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -1605,8 +1605,11 @@ sub print_drive_commandline_full {
 # io_uring with cache mode writeback or writethrough on krbd will hang...
my $rbd_no_io_uring = $scfg && $scfg->{type} eq 'rbd' && $scfg->{krbd} && !$cache_direct;
 
+# io_uring with cache mode writeback or writethrough on LVM will hang...
+my $lvm_no_io_uring = $scfg && $scfg->{type} eq 'lvm' && !$cache_direct;
+
 if (!$drive->{aio}) {
-   if ($io_uring && !$rbd_no_io_uring) {
+   if ($io_uring && !$rbd_no_io_uring && !$lvm_no_io_uring) {
# io_uring supports all cache modes
$opts .= ",aio=io_uring";
} else {
-- 
2.30.2
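The aio-selection rule after this patch can be summarized in a short sketch; this is a JavaScript restatement of the Perl logic above (function name and storage/config shapes are illustrative, not the qemu-server API):

```javascript
// Sketch of how print_drive_commandline_full picks an aio mode after this
// patch: io_uring is preferred, except on krbd and (now) LVM with a
// writeback/writethrough cache, where it is known to hang.
function pickAio(drive, scfg, cacheDirect, ioUringAvailable) {
    if (drive.aio) {
        return drive.aio; // an explicit user choice always wins
    }
    // io_uring hangs with writeback/writethrough cache on krbd ...
    const rbdNoIoUring = scfg && scfg.type === 'rbd' && scfg.krbd && !cacheDirect;
    // ... and, per this patch, on LVM as well
    const lvmNoIoUring = scfg && scfg.type === 'lvm' && !cacheDirect;
    if (ioUringAvailable && !rbdNoIoUring && !lvmNoIoUring) {
        return 'io_uring';
    }
    // fallback: aio=native only works with O_DIRECT, otherwise use threads
    return cacheDirect ? 'native' : 'threads';
}

console.log(pickAio({}, { type: 'lvm' }, false, true)); // → 'threads'
console.log(pickAio({}, { type: 'lvm' }, true, true));  // → 'io_uring'
```

With `cache=none` (direct I/O), LVM keeps using io_uring; only the cached modes fall back, mirroring the earlier krbd fix referenced in the commit message.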






[pve-devel] [PATCH manager] ui: ha/ressources: fix toggling edit button on selection

2021-07-07 Thread Aaron Lauterer
It needs to be a 'proxmoxButton' to get activated when selecting an HA
resource. This was lost during the last code cleanup, commit a69e943.

Signed-off-by: Aaron Lauterer 
---
 www/manager6/ha/Resources.js | 1 +
 1 file changed, 1 insertion(+)

diff --git a/www/manager6/ha/Resources.js b/www/manager6/ha/Resources.js
index b13484c0..edfadde2 100644
--- a/www/manager6/ha/Resources.js
+++ b/www/manager6/ha/Resources.js
@@ -67,6 +67,7 @@ Ext.define('PVE.ha.ResourcesView', {
},
},
{
+   xtype: 'proxmoxButton',
text: gettext('Edit'),
disabled: true,
selModel: sm,
-- 
2.30.2






[pve-devel] [PATCH manager] fix #3490: show more pci devices by default

2021-07-07 Thread Dominik Csapak
we filtered out devices which belong to the 'Generic System Peripheral'
category, but this can contain actually useful pci devices
users want to pass through, so simply do not filter it by default.

Signed-off-by: Dominik Csapak 
---
 PVE/API2/Hardware/PCI.pm | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/PVE/API2/Hardware/PCI.pm b/PVE/API2/Hardware/PCI.pm
index b3375ab9..d9c5b37e 100644
--- a/PVE/API2/Hardware/PCI.pm
+++ b/PVE/API2/Hardware/PCI.pm
@@ -10,7 +10,7 @@ use PVE::SysFSTools;
 
 use base qw(PVE::RESTHandler);
 
-my $default_class_blacklist = "05;06;08;0b";
+my $default_class_blacklist = "05;06;0b";
 
 __PACKAGE__->register_method ({
 name => 'pciscan',
@@ -33,8 +33,7 @@ __PACKAGE__->register_method ({
optional => 1,
description => "A list of blacklisted PCI classes, which will ".
   "not be returned. Following are filtered by ".
-  "default: Memory Controller (05), Bridge (06), ".
-  "Generic System Peripheral (08) and ".
+  "default: Memory Controller (05), Bridge (06) and ".
   "Processor (0b).",
},
verbose => {
-- 
2.30.2
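The effect of the changed default blacklist is easy to demonstrate; the sketch below is illustrative (device list, class-code strings, and function name are hypothetical), showing how a semicolon-separated class blacklist filters a PCI scan by the two-digit base class:

```javascript
// Sketch of class-based filtering with the new default blacklist: '08'
// (Generic System Peripheral) is no longer filtered, so such devices now
// show up in the scan result by default.
const defaultClassBlacklist = '05;06;0b';

function filterPciDevices(devices, blacklist = defaultClassBlacklist) {
    const blocked = blacklist.split(';');
    // a class code like '088000' starts with the two-digit base class '08'
    return devices.filter(dev => !blocked.includes(dev.class.slice(0, 2)));
}

const devices = [
    { id: '0000:00:02.0', class: '030000' }, // VGA controller, shown
    { id: '0000:00:1f.3', class: '058000' }, // memory controller, still filtered
    { id: '0000:00:14.2', class: '088000' }, // generic system peripheral, now shown
];
console.log(filterPciDevices(devices).map(d => d.id));
// → [ '0000:00:02.0', '0000:00:14.2' ]
```

Passing the old list `'05;06;08;0b'` explicitly restores the previous behavior, since the parameter remains overridable via the API.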






[pve-devel] [PATCH manager] ui: dc/AuthEditOpenId: remove unnecessary code

2021-07-07 Thread Dominik Csapak
we do not have a 'verify' field here, so the onGetValues override
falsely sent 'delete: verify' on every edit.

while our API is ok with that, it's better to remove it

Signed-off-by: Dominik Csapak 
---
 www/manager6/dc/AuthEditOpenId.js | 13 -
 1 file changed, 13 deletions(-)

diff --git a/www/manager6/dc/AuthEditOpenId.js b/www/manager6/dc/AuthEditOpenId.js
index 7ebb9c8f..2dd60d1b 100644
--- a/www/manager6/dc/AuthEditOpenId.js
+++ b/www/manager6/dc/AuthEditOpenId.js
@@ -3,19 +3,6 @@ Ext.define('PVE.panel.OpenIDInputPanel', {
 xtype: 'pveAuthOpenIDPanel',
 mixins: ['Proxmox.Mixin.CBind'],
 
-onGetValues: function(values) {
-   let me = this;
-
-   if (!values.verify) {
-   if (!me.isCreate) {
-   Proxmox.Utils.assemble_field_data(values, { 'delete': 'verify' });
-   }
-   delete values.verify;
-   }
-
-   return me.callParent([values]);
-},
-
 columnT: [
{
xtype: 'textfield',
-- 
2.30.2






[pve-devel] applied: [PATCH manager] ui: ha/ressources: fix toggling edit button on selection

2021-07-07 Thread Thomas Lamprecht
On 07.07.21 13:36, Aaron Lauterer wrote:
> It needs to be a 'proxmoxButton' to get activated when selecting an HA
> resource. This was lost during the last code cleanup, commit a69e943.
> 
> Signed-off-by: Aaron Lauterer 
> ---
>  www/manager6/ha/Resources.js | 1 +
>  1 file changed, 1 insertion(+)
> 
>

applied to master and stable-6, thanks!





Re: [pve-devel] [PATCH manager] ui: ceph/Status: fix recovery percentage display

2021-07-07 Thread Thomas Lamprecht
On 07.07.21 13:23, Dominik Csapak wrote:
> On 7/7/21 12:19 PM, Thomas Lamprecht wrote:
>> On 07.07.21 10:47, Dominik Csapak wrote:
>>> diff --git a/www/manager6/ceph/Status.js b/www/manager6/ceph/Status.js
>>> index e92c698b..52563605 100644
>>> --- a/www/manager6/ceph/Status.js
>>> +++ b/www/manager6/ceph/Status.js
>>> @@ -321,14 +321,14 @@ Ext.define('PVE.node.CephStatus', {
>>>   let unhealthy = degraded + unfound + misplaced;
>>>   // update recovery
>>>   if (pgmap.recovering_objects_per_sec !== undefined || unhealthy > 0) {
>>> -    let toRecover = pgmap.misplaced_total || pgmap.unfound_total || pgmap.degraded_total || 0;
>>> -    if (toRecover === 0) {
>>> +    let totalRecovery = pgmap.misplaced_total || pgmap.unfound_total || pgmap.degraded_total || 0;
>>
>> why change the variable name? `toRecover` was still OK. Or at least I do not see
>> any improvement in making it easier to understand with `totalRecovery`; if byte vs.
>> objects were the issue of confusion, why not address that by using `toRecoverObjects`
>> or the like
> i read the code and thought 'toRecover' means objects that need recovery, but
> it does not. {misplaced,unfound,degraded}_total each contain
> the total number of objects taking part in the recovery
> (also the ones that are not unhealthy)
> 
> maybe 'totalRecoveryObjects' would make more sense?

totalRecoveryObjects and toRecoverObjects are so similar that they would not really
convey the difference behind your confusion to any other reader; for that
I'd rather add a short comment, as those tend to be a bit more explicit for subtle stuff.

> 
>>
>> Also, why not add those metrics up? Misplaced and unfound do not have any
>> overlap, IIRC, so it would def. make sense for those - for degraded I'm not so sure
>> about overlap with the other two from the top of my head though.
> 
> they contain all the same number
> src/mon/PGMap.cc:{467,482,498} pool_sum.stats.sum.num_object_copies

ah yeah true, I remember now again. Do you also know where this is actually
set (computed)?




Re: [pve-devel] [PATCH manager] ui: ceph/Status: fix recovery percentage display

2021-07-07 Thread Dominik Csapak

On 7/7/21 2:24 PM, Thomas Lamprecht wrote:

On 07.07.21 13:23, Dominik Csapak wrote:

On 7/7/21 12:19 PM, Thomas Lamprecht wrote:

On 07.07.21 10:47, Dominik Csapak wrote:

diff --git a/www/manager6/ceph/Status.js b/www/manager6/ceph/Status.js
index e92c698b..52563605 100644
--- a/www/manager6/ceph/Status.js
+++ b/www/manager6/ceph/Status.js
@@ -321,14 +321,14 @@ Ext.define('PVE.node.CephStatus', {
   let unhealthy = degraded + unfound + misplaced;
   // update recovery
   if (pgmap.recovering_objects_per_sec !== undefined || unhealthy > 0) {
-    let toRecover = pgmap.misplaced_total || pgmap.unfound_total || pgmap.degraded_total || 0;
-    if (toRecover === 0) {
+    let totalRecovery = pgmap.misplaced_total || pgmap.unfound_total || pgmap.degraded_total || 0;


why change the variable name? `toRecover` was still OK. Or at least I do not see
any improvement in making it easier to understand with `totalRecovery`; if byte vs.
objects were the issue of confusion, why not address that by using `toRecoverObjects`
or the like

i read the code and thought 'toRecover' means objects that need recovery, but
it does not. {misplaced,unfound,degraded}_total each contain
the total number of objects taking part in the recovery
(also the ones that are not unhealthy)

maybe 'totalRecoveryObjects' would make more sense?


totalRecoveryObjects and toRecoverObjects are so similar that they would not really
convey the difference behind your confusion to any other reader; for that
I'd rather add a short comment, as those tend to be a bit more explicit for subtle stuff.


ok i'll leave it at 'toRecover' and add a comment about what it is in my v2






Also, why not add those metrics up? Misplaced and unfound do not have any
overlap, IIRC, so it would def. make sense for those - for degraded I'm not so sure
about overlap with the other two from the top of my head though.


they contain all the same number
src/mon/PGMap.cc:{467,482,498} pool_sum.stats.sum.num_object_copies


ah yeah true, I remember now again. Do you also know where this is actually
set (computed)?



no sadly, i tried to check, but i am not so deep into ceph code right now




Re: [pve-devel] [PATCH manager] ui: ceph/Status: fix recovery percentage display

2021-07-07 Thread Thomas Lamprecht
On 07.07.21 14:30, Dominik Csapak wrote:
> On 7/7/21 2:24 PM, Thomas Lamprecht wrote:
>> On 07.07.21 13:23, Dominik Csapak wrote:
>>> On 7/7/21 12:19 PM, Thomas Lamprecht wrote:
 On 07.07.21 10:47, Dominik Csapak wrote:
> diff --git a/www/manager6/ceph/Status.js b/www/manager6/ceph/Status.js
> index e92c698b..52563605 100644
> --- a/www/manager6/ceph/Status.js
> +++ b/www/manager6/ceph/Status.js
> @@ -321,14 +321,14 @@ Ext.define('PVE.node.CephStatus', {
>    let unhealthy = degraded + unfound + misplaced;
>    // update recovery
>    if (pgmap.recovering_objects_per_sec !== undefined || unhealthy > 0) {
> -    let toRecover = pgmap.misplaced_total || pgmap.unfound_total || pgmap.degraded_total || 0;
> -    if (toRecover === 0) {
> +    let totalRecovery = pgmap.misplaced_total || pgmap.unfound_total || pgmap.degraded_total || 0;

 why change the variable name? `toRecover` was still OK. Or at least I do not see
 any improvement in making it easier to understand with `totalRecovery`; if byte vs.
 objects were the issue of confusion, why not address that by using `toRecoverObjects`
 or the like
>>> i read the code and thought 'toRecover' means objects that need recovery,
>>> but it does not. {misplaced,unfound,degraded}_total each contain
>>> the total number of objects taking part in the recovery
>>> (also the ones that are not unhealthy)
>>>
>>> maybe 'totalRecoveryObjects' would make more sense?
>>
>> totalRecoveryObjects and toRecoverObjects are so similar that they would not really
>> convey the difference behind your confusion to any other reader; for that
>> I'd rather add a short comment, as those tend to be a bit more explicit for subtle stuff.
> 
> ok i'll leave it at 'toRecover' and add a comment about what it is in my v2

Adding objects is fine to me though; the basic unit, i.e., size vs. bytes here,
is something that can be encoded in the variable name for dynamically typed languages
like JS - but no hard feelings.

>>
>>>

 Also, why not add those metrics up? Misplaced and unfound do not have any
 overlap, IIRC, so it would def. make sense for those - for degraded I'm not so sure
 about overlap with the other two from the top of my head though.
>>>
>>> they contain all the same number
>>> src/mon/PGMap.cc:{467,482,498} pool_sum.stats.sum.num_object_copies
>>
>> ah yeah true, I remember now again. Do you also know where this is actually
>> set (computed)?
>>
> 
> no sadly, i tried to check, but i am not so deep into ceph code right now

ok, thanks nonetheless.




Re: [pve-devel] Proxmox 7.0 beta to 7.0-8 - Now unable to start VM - cgroup isn't numeric?

2021-07-07 Thread Thomas Lamprecht
On 07.07.21 13:34, Victor Hooi wrote:
> Do you know roughly how long that will take to hit the repositories?

should have been already available at the time I wrote my reply.

off-topic: We switched all mailing lists over from pve.proxmox.com to their own
host at lists.proxmox.com a bit ago. While mails to the old address still get
forwarded, it can easily result in duplicate ones if one replies with "Reply All"
and does not check for that, like I forgot in my first answer.
The new address is pve-devel@lists.proxmox.com; it would be great if you could
use the new one in the future (same for the pve-user list) - thx!

> 
> (I just did an apt update, and it doesn't seem to have picked up a new
> qemu-server version yet).
> 
> On Wed, Jul 7, 2021 at 8:10 PM Thomas Lamprecht 
> wrote:
> 
>> Hi,
>>
>> On 07.07.21 11:44, Victor Hooi wrote:
>>> I recently upgraded from the Proxmox 7.0 beta to the latest 7.0-8 release.
>>>
>>> However, when I try to start a Windows VM that I created before, I now get
>>> the following error:
>>>
>>> Argument "cgroup v1: 1024, cgroup v2: 100" isn't numeric in numeric ge (>=)
>>>> at /usr/share/perl5/PVE/QemuServer.pm line 5312.
>>>> TASK ERROR: start failed: org.freedesktop.DBus.Error.InvalidArgs: Value
>>>> specified in CPUWeight is out of range
>>>
>>>
>>> Did something change between the beta and the release? Is there any way to
>>> fix the above?
>>
>> A qemu-server version with a regression made it to the pvetest repository
>> just a bit ago; it is fixed now with qemu-server 7.0-9, which superseded it.
>>
>>





[pve-devel] [PATCH manager v2] ui: ceph/Status: fix recovery percentage display

2021-07-07 Thread Dominik Csapak
we incorrectly used 'total' as 100% of the objects to recover here,
but that contains the total number of *bytes*.

rename 'toRecover' to 'toRecoverObjects' to better reflect that the unit is
'objects', and use that as the total

reported by a user:
https://forum.proxmox.com/threads/bug-ceph-recovery-bar-not-showing-percentage.91782/

Signed-off-by: Dominik Csapak 
---
would be good to backport to stable-6

 www/manager6/ceph/Status.js | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/www/manager6/ceph/Status.js b/www/manager6/ceph/Status.js
index e92c698b..bdcf3f1b 100644
--- a/www/manager6/ceph/Status.js
+++ b/www/manager6/ceph/Status.js
@@ -321,14 +321,14 @@ Ext.define('PVE.node.CephStatus', {
let unhealthy = degraded + unfound + misplaced;
// update recovery
if (pgmap.recovering_objects_per_sec !== undefined || unhealthy > 0) {
-	let toRecover = pgmap.misplaced_total || pgmap.unfound_total || pgmap.degraded_total || 0;
-	if (toRecover === 0) {
+	let toRecoverObjects = pgmap.misplaced_total || pgmap.unfound_total || pgmap.degraded_total || 0;
+	if (toRecoverObjects === 0) {
 		return; // FIXME: unexpected return and leaves things possible visible when it shouldn't?
 	    }
-	let recovered = toRecover - unhealthy || 0;
+	let recovered = toRecoverObjects - unhealthy || 0;
 	    let speed = pgmap.recovering_bytes_per_sec || 0;
 
-	let recoveryRatio = recovered / total;
+	let recoveryRatio = recovered / toRecoverObjects;
 	    let txt = `${(recoveryRatio * 100).toFixed(2)}%`;
 	    if (speed > 0) {
 		let obj_per_sec = speed / (4 * 1024 * 1024); // 4 MiB per Object
-- 
2.30.2
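For readers skimming past the diff: the fix boils down to dividing by the object total instead of the byte total. A minimal standalone sketch - the pgmap field names and the `|| 0` fallback chain come from the patch, while the function wrapper and sample values are illustrative:

```javascript
// Sketch of the fixed recovery-percentage computation.
function recoveryPercent(pgmap, unhealthy) {
    // pick the first non-empty *object* total - all three hold the same
    // number (num_object_copies), per the discussion in this thread
    let toRecoverObjects = pgmap.misplaced_total || pgmap.unfound_total
        || pgmap.degraded_total || 0;
    if (toRecoverObjects === 0) {
        return null; // nothing to recover, nothing to show
    }
    let recovered = toRecoverObjects - unhealthy || 0;
    // the old code divided by `total` (a *byte* count), which made the
    // ratio absurdly small - divide by the object count instead
    let recoveryRatio = recovered / toRecoverObjects;
    return `${(recoveryRatio * 100).toFixed(2)}%`;
}

console.log(recoveryPercent({ degraded_total: 1000 }, 250)); // "75.00%"
```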






[pve-devel] applied: [PATCH qemu-server] cfg2cmd: avoid io_uring with LVM and write{back, through} cache

2021-07-07 Thread Thomas Lamprecht
On 07.07.21 13:28, Fabian Ebner wrote:
> Reported in the community forum[0]. Also tried with LVM-thin, but it
> doesn't seem to be affected.
> 
> See also 628937f53acde52f7257ca79f574c87a45f392e7 for the same fix for
> krbd.
> 
> [0]: 
> https://forum.proxmox.com/threads/after-upgrade-to-7-0-all-vms-dont-boot.92019/post-401017
> 
> Signed-off-by: Fabian Ebner 
> ---
>  PVE/QemuServer.pm | 5 -
>  1 file changed, 4 insertions(+), 1 deletion(-)
> 
>

applied, thanks!





[pve-devel] Proxmox 7.0 beta to 7.0-8 - Now unable to start VM - cgroup isn't numeric?

2021-07-07 Thread Victor Hooi
Hi,

I recently upgraded from the Proxmox 7.0 beta, to the latest 7.0-8 release.

However, when I try to start a Windows VM that I created before, I now get
the following error:

Argument "cgroup v1: 1024, cgroup v2: 100" isn't numeric in numeric ge (>=)
> at /usr/share/perl5/PVE/QemuServer.pm line 5312.
> TASK ERROR: start failed: org.freedesktop.DBus.Error.InvalidArgs: Value
> specified in CPUWeight is out of range


Did something change between the beta and the release? Is there any way to
fix the above?

Thanks,
Victor



Re: [pve-devel] Proxmox 7.0 beta to 7.0-8 - Now unable to start VM - cgroup isn't numeric?

2021-07-07 Thread Victor Hooi
Gotcha - thanks for the quick fix!

I am using the pvetest repository.

Do you know roughly how long that will take to hit the repositories?

(I just did an apt update, and it doesn't seem to have picked up a new
qemu-server version yet).

On Wed, Jul 7, 2021 at 8:10 PM Thomas Lamprecht 
wrote:

> Hi,
>
> On 07.07.21 11:44, Victor Hooi wrote:
> > I recently upgraded from the Proxmox 7.0 beta to the latest 7.0-8 release.
> >
> > However, when I try to start a Windows VM that I created before, I now get
> > the following error:
> >
> > Argument "cgroup v1: 1024, cgroup v2: 100" isn't numeric in numeric ge (>=)
> >> at /usr/share/perl5/PVE/QemuServer.pm line 5312.
> >> TASK ERROR: start failed: org.freedesktop.DBus.Error.InvalidArgs: Value
> >> specified in CPUWeight is out of range
> >
> >
> > Did something change between the beta and the release? Is there any way to
> > fix the above?
>
> A qemu-server version with a regression made it to the pvetest repository
> just a bit ago; it is fixed now with qemu-server 7.0-9, which superseded it.
>
>



[pve-devel] applied: [PATCH manager v2] ui: ceph/Status: fix recovery percentage display

2021-07-07 Thread Thomas Lamprecht
On 07.07.21 14:49, Dominik Csapak wrote:
> we incorrectly used 'total' as 100% of the objects to recover here,
> but that contains the total number of *bytes*.
> 
> rename 'toRecover' to better reflect that the unit is 'objects' and
> use that as total
> 
> reported by a user:
> https://forum.proxmox.com/threads/bug-ceph-recovery-bar-not-showing-percentage.91782/
> 
> Signed-off-by: Dominik Csapak 
> ---
> would be good to backport to stable-6
> 
>  www/manager6/ceph/Status.js | 8 
>  1 file changed, 4 insertions(+), 4 deletions(-)
> 
>

applied, thanks!





[pve-devel] applied: [PATCH manager] pve6to7: add check for Debian security repository

2021-07-07 Thread Thomas Lamprecht
On 06.07.21 14:31, Fabian Ebner wrote:
> since the pattern for the suite changed.
> 
> Signed-off-by: Fabian Ebner 
> ---
>  PVE/CLI/pve6to7.pm | 71 ++
>  1 file changed, 71 insertions(+)
> 
>

applied, thanks!





[pve-devel] applied: [PATCH manager] fix #3490: show more pci devices by default

2021-07-07 Thread Thomas Lamprecht
On 07.07.21 13:41, Dominik Csapak wrote:
> we filtered out devices which belong to the 'Generic System Peripheral'
> category, but this can contain actually useful pci devices
> users want to pass through, so simply do not filter it by default.
> 
> Signed-off-by: Dominik Csapak 
> ---
>  PVE/API2/Hardware/PCI.pm | 5 ++---
>  1 file changed, 2 insertions(+), 3 deletions(-)
> 
>

applied, thanks!





Re: [pve-devel] [PATCH proxmox-archive-keyring] bump version to 2.0

2021-07-07 Thread Thomas Lamprecht
On 06.07.21 14:04, Fabian Grünbichler wrote:
> Signed-off-by: Fabian Grünbichler 
> ---
>  debian/changelog   |   6 ++
>  debian/proxmox-archive-keyring.install |   1 -
>  debian/proxmox-archive-keyring.maintscript |   1 +
>  debian/proxmox-release-stretch.gpg | Bin 1181 -> 0 bytes
>  4 files changed, 7 insertions(+), 1 deletion(-)
>  create mode 100644 debian/proxmox-archive-keyring.maintscript
>  delete mode 100644 debian/proxmox-release-stretch.gpg
> 

Acked-by: Thomas Lamprecht 
Reviewed-by: Thomas Lamprecht 

please go ahead and push that out, thanks!




[pve-devel] [PATCH pve-kernel-meta 1/5] proxmox-boot: ignore call to grub-install from grub maintscripts

2021-07-07 Thread Stoiko Ivanov
in certain cases the postinst script of grub-pc runs grub-install on
the disks it gets from debconf. Simply warn and exit with 0 if
grub-install is called by dpkg and from a grub related package

Signed-off-by: Stoiko Ivanov 
---
 bin/grub-install-wrapper | 6 ++
 1 file changed, 6 insertions(+)

diff --git a/bin/grub-install-wrapper b/bin/grub-install-wrapper
index a61e984..35f03fa 100755
--- a/bin/grub-install-wrapper
+++ b/bin/grub-install-wrapper
@@ -4,6 +4,12 @@ set -e
 . /usr/share/pve-kernel-helper/scripts/functions
 
 if proxmox-boot-tool status --quiet; then
+	#detect when being called by dpkg (e.g. grub-pc.postinst)
+	if [ -n "$DPKG_RUNNING_VERSION" ] && \
+	    echo "$DPKG_MAINTSCRIPT_PACKAGE" | grep -sq "^grub-"; then
+		warn "This system is booted via proxmox-boot-tool, ignoring dpkg call to grub-install"
+		exit 0
+	fi
 	warn "grub-install is disabled because this system is booted via proxmox-boot-tool, if you really need to run it, run /usr/sbin/grub-install.real"
exit 1
 else
-- 
2.30.2






[pve-devel] [PATCH pve-kernel-meta 0/5] proxmox-boot-tool improvements

2021-07-07 Thread Stoiko Ivanov
The following patchset addresses a few small issues reported during the PVE
7.0 beta and after the 7.0 stable release.

* patches 1+2 deal with grub-install being called during a distribution
  upgrade on some systems (I did not manage to get a VM installed with PVE
  6.4 to run into the issue)
* patch 3 addresses an issue where once someone removes pve-kernel-helper
  (without purging it) it becomes quite difficult to get it installed again
  (to remove pve-kernel-helper you also need to remove proxmox-ve, but as
  our forum shows [0] - this sometimes happens without the user noticing)
* patches 4+5 are a few improvements to `p-b-t status` that I consider
  worthwhile


Stoiko Ivanov (5):
  proxmox-boot: ignore call to grub-install from grub maintscripts
  proxmox-boot: divert call to grub-install to p-b-t init
  proxmox-boot: maintscript: change logic whether to add diversion
  proxmox-boot: print current boot mode with status output
  proxmox-boot: status: print present kernel versions

 bin/grub-install-wrapper | 27 +++
 bin/proxmox-boot-tool| 13 ++---
 debian/pve-kernel-helper.preinst |  2 +-
 3 files changed, 38 insertions(+), 4 deletions(-)

-- 
2.30.2






[pve-devel] [PATCH pve-kernel-meta 4/5] proxmox-boot: print current boot mode with status output

2021-07-07 Thread Stoiko Ivanov
in most support questions w.r.t. proxmox-boot-tool we end up asking for
`stat /sys/firmware/efi` output anyway

Signed-off-by: Stoiko Ivanov 
---
 bin/proxmox-boot-tool | 5 +
 1 file changed, 5 insertions(+)

diff --git a/bin/proxmox-boot-tool b/bin/proxmox-boot-tool
index 079fa26..1e984d6 100755
--- a/bin/proxmox-boot-tool
+++ b/bin/proxmox-boot-tool
@@ -381,6 +381,11 @@ status() {
exit 2
fi
if [ -z "$quiet" ]; then
+   if [ -d /sys/firmware/efi ]; then
+   echo "System currently booted with uefi"
+   else
+   echo "System currently booted with legacy bios"
+   fi
loop_esp_list _status_detail
fi
 }
-- 
2.30.2






[pve-devel] [PATCH pve-kernel-meta 5/5] proxmox-boot: status: print present kernel versions

2021-07-07 Thread Stoiko Ivanov
gives a better overview in case the system was switched at one time
from uefi to legacy (or the other way around).

Signed-off-by: Stoiko Ivanov 
---
 bin/proxmox-boot-tool | 8 +---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/bin/proxmox-boot-tool b/bin/proxmox-boot-tool
index 1e984d6..93760fb 100755
--- a/bin/proxmox-boot-tool
+++ b/bin/proxmox-boot-tool
@@ -353,16 +353,18 @@ _status_detail() {
 
 	result=""
 	if [ -f "${mountpoint}/$PMX_LOADER_CONF" ]; then
-		result="uefi"
 		if [ ! -d "${mountpoint}/$PMX_ESP_DIR" ]; then
 			warn "${path}/$PMX_ESP_DIR does not exist"
 		fi
+		versions_uefi=$(ls -1 ${mountpoint}/$PMX_ESP_DIR | awk '{printf (NR>1?", ":"") $0}')
+		result="uefi (versions: ${versions_uefi})"
 	fi
 	if [ -d "${mountpoint}/grub" ]; then
+		versions_grub=$(ls -1 ${mountpoint}/vmlinuz-* | awk '{ gsub(/.*\/vmlinuz-/, ""); printf (NR>1?", ":"") $0 }')
 		if [ -n "$result" ]; then
-			result="${result},grub"
+			result="${result}, grub (versions: ${versions_grub})"
 		else
-			result="grub"
+			result="grub (versions: ${versions_grub})"
 		fi
 	fi
 	echo "$curr_uuid is configured with: $result"
-- 
2.30.2






[pve-devel] [PATCH pve-kernel-meta 2/5] proxmox-boot: divert call to grub-install to p-b-t init

2021-07-07 Thread Stoiko Ivanov
This way all ESPs (in case of a legacy booted system) get an
updated grub installation.

Running only once between reboots (the marker file is in /tmp) should
be enough. Sadly the environment does not provide a hint as to which
grub version is being installed.

Signed-off-by: Stoiko Ivanov 
---
 bin/grub-install-wrapper | 25 +++--
 1 file changed, 23 insertions(+), 2 deletions(-)

diff --git a/bin/grub-install-wrapper b/bin/grub-install-wrapper
index 35f03fa..2e70789 100755
--- a/bin/grub-install-wrapper
+++ b/bin/grub-install-wrapper
@@ -3,12 +3,33 @@ set -e
 
 . /usr/share/pve-kernel-helper/scripts/functions
 
+init_boot_disks() {
+	if ! (echo "${curr_uuid}" | grep -qE '[0-9a-fA-F]{4}-[0-9a-fA-F]{4}'); then
+		warn "WARN: ${curr_uuid} read from ${ESP_LIST} does not look like a VFAT-UUID - skipping"
+		return
+	fi
+
+	path="/dev/disk/by-uuid/$curr_uuid"
+	if [ ! -e "${path}" ]; then
+		warn "WARN: ${path} does not exist - clean '${ESP_LIST}'! - skipping"
+		return
+	fi
+	proxmox-boot-tool init "$path"
+}
+
 if proxmox-boot-tool status --quiet; then
 	#detect when being called by dpkg (e.g. grub-pc.postinst)
 	if [ -n "$DPKG_RUNNING_VERSION" ] && \
 	    echo "$DPKG_MAINTSCRIPT_PACKAGE" | grep -sq "^grub-"; then
-		warn "This system is booted via proxmox-boot-tool, ignoring dpkg call to grub-install"
-		exit 0
+		MARKER_FILE="/tmp/proxmox-boot-tool.dpkg.marker"
+		if [ ! -e "$MARKER_FILE" ]; then
+			warn "This system is booted via proxmox-boot-tool, running proxmox-boot-tool init for all configured bootdisks"
+			loop_esp_list init_boot_disks
+			touch "$MARKER_FILE"
+			exit 0
+		else
+			exit 0
+		fi
 	fi
 	warn "grub-install is disabled because this system is booted via proxmox-boot-tool, if you really need to run it, run /usr/sbin/grub-install.real"
exit 1
-- 
2.30.2






[pve-devel] [PATCH pve-kernel-meta 3/5] proxmox-boot: maintscript: change logic whether to add diversion

2021-07-07 Thread Stoiko Ivanov
Deciding whether or not to add the diversion based on the version
alone fails quite hard in case pve-kernel-helper is in dpkg state 'rc'
(removed, not purged), as reported in our community forum[0]:
* removing pve-kernel-helper removes the diversion of grub-install
* if config-files are still present the preinst script gets called
  with the version of the config-files (the version that got removed)
* if the version was newer than 6.4-1~ then no diversion is added
* unpacking fails, because grub-install would be overwritten leaving
  pve-kernel-helper in state 'ic'

Explicitly checking whether the diversion is in place sounds like a
robust approach here.

downside: documentation on dpkg-divert in maintainer scripts [1] uses
the version approach.

[0] https://forum.proxmox.com/threads/pve-kernel-helper-wont-install.90029/
[1] https://www.debian.org/doc/debian-policy/ap-pkg-diversions.html

Signed-off-by: Stoiko Ivanov 
---
 debian/pve-kernel-helper.preinst | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/debian/pve-kernel-helper.preinst b/debian/pve-kernel-helper.preinst
index 9ec726d..e2464c9 100644
--- a/debian/pve-kernel-helper.preinst
+++ b/debian/pve-kernel-helper.preinst
@@ -4,7 +4,7 @@ set -e
 
 case "$1" in
 install|upgrade)
-if dpkg --compare-versions "$2" lt "6.4-1~"; then
+if ! dpkg -S /usr/sbin/grub-install|grep -q 'diversion by pve-kernel-helper'; then
 dpkg-divert --package pve-kernel-helper --add --rename \
 --divert /usr/sbin/grub-install.real /usr/sbin/grub-install
 fi
-- 
2.30.2


