Re: [pve-devel] [PATCH qemu-server] api2: fix vmconfig_apply_pending errors handling

2021-07-06 Thread Fabian Grünbichler
On July 6, 2021 12:02 am, Alexandre Derumier wrote: > commit > https://git.proxmox.com/?p=qemu-server.git;a=commit;h=eb5e482ded9ae6aeb6575de9441b79b90a5de531 > > has introduced error handling for offline pending apply, > > - PVE::QemuServer::vmconfig_apply_pending($vmid, $conf, > …

Re: [pve-devel] [PATCH qemu-server] api2: fix vmconfig_apply_pending errors handling

2021-07-06 Thread Oguz Bektas
hi, true, it seems that parameter was leftover! thanks for noticing that. now i tested alexandre's patch. when i have a pending change that cannot be applied, the appropriate error message is returned (unable to apply pending change: foo) and it stays in the [PENDING] section of the config. Tested-…

[pve-devel] [PATCH storage] lvm: wipe signatures on lvcreate

2021-07-06 Thread Stoiko Ivanov
With PVE 7.0 we use upstream's lvm2 packages, which seem to detect 'more' signatures (and refuse to create LVs when they are present). This prevents creating new disks on LVM (thick) storages, as reported on pve-user [0]. Adding -Wy to wipe signatures, and --yes (to actually wipe them instead of pro…
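The fix itself boils down to two extra flags on the lvcreate call; a hypothetical invocation (the VG and LV names are made up, not taken from the patch) would look like:

```shell
# -Wy wipes any signatures lvm2 detects on the new LV, and --yes answers
# the resulting confirmation prompt, since the storage layer runs
# lvcreate non-interactively.
lvcreate -aly -Wy --yes --size 32G --name vm-100-disk-0 pve
```

Without --yes, the newer lvm2 would stop and prompt before wiping, which hangs a non-interactive caller.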

[pve-devel] applied: [PATCH storage] lvm: wipe signatures on lvcreate

2021-07-06 Thread Thomas Lamprecht
On 06.07.21 11:50, Stoiko Ivanov wrote: > With PVE 7.0 we use upstream's lvm2 packages, which seem to detect > 'more' signatures (and refuse creating lvs when they are present) > > This prevents creating new disks on LVM (thick) storages as reported > on pve-user [0]. > > Adding -Wy to wipe signa…

[pve-devel] applied: [PATCH qemu-server] api2: fix vmconfig_apply_pending errors handling

2021-07-06 Thread Thomas Lamprecht
On 06.07.21 00:02, Alexandre Derumier wrote: > commit > https://git.proxmox.com/?p=qemu-server.git;a=commit;h=eb5e482ded9ae6aeb6575de9441b79b90a5de531 > > has introduced error handling for offline pending apply, > > - PVE::QemuServer::vmconfig_apply_pending($vmid, $conf, > $storec…

[pve-devel] [PATCH] add patch to reload after first install

2021-07-06 Thread Dominik Csapak
when installing for the first time we want to reload the network config, since sometimes the network will not be configured, e.g. when coming from ifupdown. this would break installing ifupdown2 over the network (e.g. ssh) Signed-off-by: Dominik Csapak --- ...load-network-config-on-first-install…
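A sketch of what such a maintainer-script hook could look like (assumed shape for illustration, not the actual patch):

```shell
#!/bin/sh
# postinst sketch: on first install, $2 (the previously configured
# version) is empty, so reload networking once to pick up interfaces
# that were configured under ifupdown, without dropping e.g. an active
# ssh session mid-install.
if [ "$1" = "configure" ] && [ -z "$2" ]; then
    ifreload -a || true   # never fail the package install on a reload error
fi
```

On upgrades ($2 non-empty) the network is already managed by ifupdown2, so no reload is needed.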

[pve-devel] Proxmox VE 7.0 released!

2021-07-06 Thread Martin Maurer
Hi all, It's our pleasure to announce the stable version 7.0 of Proxmox Virtual Environment. It's based on the great Debian 11 "Bullseye" and comes with a 5.11 kernel, QEMU 6.0, LXC 4.0, OpenZFS 2.0.4, and countless enhancements and bugfixes. Here is a selection of the highlights: - Debian 11 "…

[pve-devel] applied: [PATCH] add patch to reload after first install

2021-07-06 Thread Thomas Lamprecht
On 06.07.21 13:42, Dominik Csapak wrote: > when installing for the first time we want to reload the network config, > since sometimes the network will not be configured, e.g. when > coming from ifupdown. this would break installing ifupdown2 over > the network (e.g. ssh) > > Signed-off-by: Dominik…

[pve-devel] [PATCH proxmox-archive-keyring] bump version to 2.0

2021-07-06 Thread Fabian Grünbichler
Signed-off-by: Fabian Grünbichler --- debian/changelog | 6 ++ debian/proxmox-archive-keyring.install | 1 - debian/proxmox-archive-keyring.maintscript | 1 + debian/proxmox-release-stretch.gpg | Bin 1181 -> 0 bytes 4 files changed, 7 insertions(+)…

[pve-devel] [PATCH manager 1/5] pve6to7: use new flags API

2021-07-06 Thread Fabian Grünbichler
the old one is not available post-upgrade, let's use a single codepath for this. the new API only allows querying user-settable flags, but the only flags we check besides 'noout' are not relevant for an upgrade of PVE 6.x to 7.x (PVE 6.x only supports Nautilus+ which requires these flags to be set…

[pve-devel] [PATCH manager 2/5] pve6to7: remove PASS noise for ceph

2021-07-06 Thread Fabian Grünbichler
these were mostly relevant for the Luminous -> Nautilus upgrade, and we don't need to list all the default passing states that our tooling sets up anyway. Signed-off-by: Fabian Grünbichler --- PVE/CLI/pve6to7.pm | 8 1 file changed, 8 deletions(-) diff --git a/PVE/CLI/pve6to7.pm b/PVE/…

[pve-devel] [PATCH manager 3/5] pve6to7: check for >= Octopus

2021-07-06 Thread Fabian Grünbichler
and drop the Nautilus OSD upgrade check while we are at it.. Signed-off-by: Fabian Grünbichler --- PVE/CLI/pve6to7.pm | 8 ++-- 1 file changed, 2 insertions(+), 6 deletions(-) diff --git a/PVE/CLI/pve6to7.pm b/PVE/CLI/pve6to7.pm index 65ee5a66..00f922bb 100644 --- a/PVE/CLI/pve6to7.pm +++ b…

[pve-devel] [PATCH manager 4/5] pve6to7: dont guard noout check on Ceph version

2021-07-06 Thread Fabian Grünbichler
we don't have a mandatory Ceph major version upgrade this time around, so this check does not make sense. instead, we want noout until the full cluster is upgraded. let's use the simple approach and just flip the switch to "turn off noout if all of Ceph is a single version" in the PVE 7.x branch.
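In CLI terms, the flow the series aims for is roughly this (a sketch assuming a working ceph client on any cluster node, not the pve6to7 code itself):

```shell
# Keep OSDs on rebooting nodes from being marked "out" (and triggering
# rebalancing) while the cluster is upgraded node by node.
ceph osd set noout

# ... upgrade and reboot each node in turn ...

# Only once every node runs the same Ceph version again:
ceph osd unset noout
```

The patch moves the "when to clear noout" decision into the PVE 7.x branch instead of guarding it on a Ceph major version.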

[pve-devel] [PATCH stable-6 manager 5/5] pve6to7: enable noout before upgrade

2021-07-06 Thread Fabian Grünbichler
even if the cluster-wide Ceph versions are uniform. Signed-off-by: Fabian Grünbichler --- PVE/CLI/pve6to7.pm | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/PVE/CLI/pve6to7.pm b/PVE/CLI/pve6to7.pm index 36e6676f..db93fa68 100644 --- a/PVE/CLI/pve6to7.pm +++ b/PVE/CLI/pve6to7.…

[pve-devel] [PATCH manager 0/5] pve6to7 ceph fixes

2021-07-06 Thread Fabian Grünbichler
reduce checks, adapt version guards, make the whole thing work with pve-manager 7.x last patch is stable-6 only, rest is for both branches. Fabian Grünbichler (5): pve6to7: use new flags API pve6to7: remove PASS noise for ceph pve6to7: check for >= Octopus pve6to7: dont guard noout check…

[pve-devel] [PATCH widget-toolkit] start node disk view unexpanded

2021-07-06 Thread Oguz Bektas
gets clunky with a lot of disks and partitions when all of them are expanded by default. so we can set the default to 'false' and let the user expand as they wish. Signed-off-by: Oguz Bektas --- requested by user on forum: https://forum.proxmox.com/threads/start-disk-view-unexpanded.89195/ sr…

[pve-devel] [PATCH manager] pve6to7: add check for Debian security repository

2021-07-06 Thread Fabian Ebner
since the pattern for the suite changed. Signed-off-by: Fabian Ebner --- PVE/CLI/pve6to7.pm | 71 ++ 1 file changed, 71 insertions(+) diff --git a/PVE/CLI/pve6to7.pm b/PVE/CLI/pve6to7.pm index 163f5e4a..6c1c3726 100644 --- a/PVE/CLI/pve6to7.pm +++ b/P…
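The suite rename the check has to handle can be sketched like this (a hypothetical shell illustration, not the patch's actual Perl): Debian 11 moved security updates from "codename/updates" to "codename-security".

```shell
# Classify an APT suite string by which Debian security-repo naming
# scheme it uses.
suite_style() {
  case "$1" in
    *-security) echo new;;    # Debian 11: bullseye-security
    */updates)  echo old;;    # Debian 10 and earlier: buster/updates
    *)          echo other;;
  esac
}

suite_style bullseye-security   # -> new
suite_style buster/updates      # -> old
```

A pve6to7-style check warns when the old form is still configured after the upgrade.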

[pve-devel] applied-series: [PATCH manager 0/5] pve6to7 ceph fixes

2021-07-06 Thread Thomas Lamprecht
On 06.07.21 14:13, Fabian Grünbichler wrote: > reduce checks, adapt version guards, make the whole thing work with > pve-manager 7.x > > last patch is stable-6 only, rest is for both branches. > > Fabian Grünbichler (5): > pve6to7: use new flags API > pve6to7: remove PASS noise for ceph > p…

Re: [pve-devel] [PATCH widget-toolkit] start node disk view unexpanded

2021-07-06 Thread Thomas Lamprecht
On 06.07.21 14:20, Oguz Bektas wrote: > gets clunky with a lot of disks and partitions when all of them are > expanded by default. > so we can set the default to 'false' and let the user expand as they wish. > > Signed-off-by: Oguz Bektas > --- > > requested by user on forum: > https://forum.pro…

[pve-devel] [PATCH storage] extract backup config: less precise matching for broken pipe detection

2021-07-06 Thread Fabian Ebner
Extracting the config for zstd compressed vma files was broken: Failed to extract config from VMA archive: zstd: error 70 : Write error : cannot write decoded block : Broken pipe (500) since the error message changed and wouldn't match anymore. Signed-off-by: Fabian Ebner --- Hotfix for…
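The "less precise matching" can be sketched as keying only on the stable part of the message; this is a hypothetical shell illustration, not the plugin's actual Perl:

```shell
# When the reader stops consuming the vma stream after the config has
# been extracted, the compressor dies with a "Broken pipe" write error.
# Only that substring is stable across zstd versions, so match nothing
# more specific than it.
classify_stderr() {
  case "$1" in
    *"Broken pipe"*) echo ignore;;   # expected: we closed the pipe early
    *)               echo fatal;;    # anything else is a real error
  esac
}

classify_stderr 'zstd: error 70 : Write error : cannot write decoded block : Broken pipe'   # -> ignore
```

Matching the full message text is what broke when zstd reworded its write-error output.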

[pve-devel] applied: [PATCH storage] extract backup config: less precise matching for broken pipe detection

2021-07-06 Thread Thomas Lamprecht
On 06.07.21 15:47, Fabian Ebner wrote: > Extracting the config for zstd compressed vma files was broken: > Failed to extract config from VMA archive: zstd: error 70 : Write > error : cannot write decoded block : Broken pipe (500) > since the error message changed and wouldn't match anymore.

[pve-devel] [PATCH docs] storage: add minimal zfs over iscsi doc

2021-07-06 Thread Stoiko Ivanov
mostly copied from the wiki-page[0], and adapted to include LIO as target provider. Additionally I added a note to explain that the plugin needs ZFS on the target side (and does not make your SAN speak ZFS) Tested during the PVE 7.0 tests for the plugin I did. [0] https://pve.proxmox.com/wiki/St…
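The resulting storage definition might look like this minimal /etc/pve/storage.cfg sketch (storage name, portal address, target IQN, pool and TPG are all made up for illustration):

```
zfs: lio-san
        portal 192.0.2.10
        target iqn.2003-01.org.linux-iscsi.san.x8664:sn.example
        pool tank
        iscsiprovider LIO
        lio_tpg tpg1
        content images
        sparse 1
```

The key point from the note: the target host must run ZFS itself (PVE creates a zvol per disk over ssh and exports it via iSCSI); the plugin does not add ZFS semantics on top of an arbitrary SAN.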