On July 6, 2021 12:02 am, Alexandre Derumier wrote:
> commit
> https://git.proxmox.com/?p=qemu-server.git;a=commit;h=eb5e482ded9ae6aeb6575de9441b79b90a5de531
>
> have introduced error handling for offline pending apply,
>
> - PVE::QemuServer::vmconfig_apply_pending($vmid, $conf,
>
hi,
true, it seems that parameter was leftover! thanks for noticing that.
now i tested alexandre's patch.
when i have a pending change that cannot be applied, the appropriate
error message is returned (unable to apply pending change: foo) and it
stays in the [PENDING] section of the config.
Tested-
With PVE 7.0 we use upstream's lvm2 packages, which seem to detect
'more' signatures (and refuse creating lvs when they are present)
This prevents creating new disks on LVM (thick) storages as reported
on pve-user [0].
Adding -Wy to wipe signatures, and --yes (to actually wipe them
instead of prompting)
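As a sketch of the fix described above: with the stricter upstream lvm2, an `lvcreate` onto blocks that still carry old filesystem/RAID signatures is refused unless the signatures are wiped. The VG and LV names below are illustrative, not taken from the patch; the commands need root and a real volume group, so treat this as a command fragment.

```shell
# Without wiping, new-style lvm2 may refuse the LV when it detects stale
# signatures on the underlying extents. -Wy enables signature wiping and
# --yes answers the wipe prompt automatically (non-interactive, as the
# storage plugin requires). Names "pve" / "vm-100-disk-0" are examples.
lvcreate -n vm-100-disk-0 -L 32G -Wy --yes pve
```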
On 06.07.21 11:50, Stoiko Ivanov wrote:
> With PVE 7.0 we use upstream's lvm2 packages, which seem to detect
> 'more' signatures (and refuse creating lvs when they are present)
>
> This prevents creating new disks on LVM (thick) storages as reported
> on pve-user [0].
>
> Adding -Wy to wipe signa
On 06.07.21 00:02, Alexandre Derumier wrote:
> commit
> https://git.proxmox.com/?p=qemu-server.git;a=commit;h=eb5e482ded9ae6aeb6575de9441b79b90a5de531
>
> have introduced error handling for offline pending apply,
>
> - PVE::QemuServer::vmconfig_apply_pending($vmid, $conf,
> $storec
when installing for the first time we want to reload the network config,
since sometimes the network will not be configured, e.g. when
coming from ifupdown. this would break installing ifupdown2 over
the network (e.g. ssh)
Signed-off-by: Dominik Csapak
---
...load-network-config-on-first-install
Hi all,
It's our pleasure to announce the stable version 7.0 of Proxmox Virtual Environment. It's
based on the great Debian 11 "Bullseye" and comes with a 5.11 kernel, QEMU 6.0,
LXC 4.0, OpenZFS 2.0.4, and countless enhancements and bugfixes.
Here is a selection of the highlights:
-Debian 11 "
On 06.07.21 13:42, Dominik Csapak wrote:
> when installing for the first time we want to reload the network config,
> since sometimes the network will not be configured, e.g. when
> coming from ifupdown. this would break installing ifupdown2 over
> the network (e.g. ssh)
>
> Signed-off-by: Dominik
Signed-off-by: Fabian Grünbichler
---
debian/changelog | 6 ++
debian/proxmox-archive-keyring.install | 1 -
debian/proxmox-archive-keyring.maintscript | 1 +
debian/proxmox-release-stretch.gpg | Bin 1181 -> 0 bytes
4 files changed, 7 insertions(+), 1 deletion(-)
the old one is not available post-upgrade, let's use a single codepath
for this.
the new API only allows querying user-settable flags, but the only flags
we check besides 'noout' are not relevant for an upgrade of PVE 6.x to
7.x (PVE 6.x only supports Nautilus+ which requires these flags to be
set
these were mostly relevant for the Luminous -> Nautilus upgrade, and we
don't need to list all the default passing states that our tooling sets
up anyway.
Signed-off-by: Fabian Grünbichler
---
PVE/CLI/pve6to7.pm | 8 --------
1 file changed, 8 deletions(-)
diff --git a/PVE/CLI/pve6to7.pm b/PVE/
and drop the Nautilus OSD upgrade check while we are at it..
Signed-off-by: Fabian Grünbichler
---
PVE/CLI/pve6to7.pm | 8 ++--
1 file changed, 2 insertions(+), 6 deletions(-)
diff --git a/PVE/CLI/pve6to7.pm b/PVE/CLI/pve6to7.pm
index 65ee5a66..00f922bb 100644
--- a/PVE/CLI/pve6to7.pm
+++ b
we don't have a mandatory Ceph major version upgrade this time around,
so this check does not make sense. instead, we want noout until the full
cluster is upgraded. let's use the simple approach and just flip the
switch to "turn off noout if all of Ceph is a single version" in the PVE
7.x branch.
even if the cluster-wide Ceph versions are uniform.
Signed-off-by: Fabian Grünbichler
---
PVE/CLI/pve6to7.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/PVE/CLI/pve6to7.pm b/PVE/CLI/pve6to7.pm
index 36e6676f..db93fa68 100644
--- a/PVE/CLI/pve6to7.pm
+++ b/PVE/CLI/pve6to7.
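The noout flow the message argues for can be sketched as the following command sequence (a hedged outline, not the pve6to7 code itself, which is Perl; the commands assume a working Ceph cluster with an admin keyring and cannot run outside one):

```shell
# Keep 'noout' set for the whole cluster upgrade window so OSDs are not
# rebalanced away while nodes reboot, and clear it only once every daemon
# reports a single Ceph version again.
ceph osd set noout      # before upgrading the first node
# ... upgrade and reboot each node in turn ...
ceph versions           # verify cluster-wide versions are uniform
ceph osd unset noout    # re-enable normal recovery behaviour
```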
reduce checks, adapt version guards, make the whole thing work with
pve-manager 7.x
last patch is stable-6 only, rest is for both branches.
Fabian Grünbichler (5):
pve6to7: use new flags API
pve6to7: remove PASS noise for ceph
pve6to7: check for >= Octopus
pve6to7: dont guard noout check
gets clunky with a lot of disks and partitions when all of them are
expanded by default.
so we can set the default to 'false' and let the user expand as they wish.
Signed-off-by: Oguz Bektas
---
requested by user on forum:
https://forum.proxmox.com/threads/start-disk-view-unexpanded.89195/
sr
since the pattern for the suite changed.
Signed-off-by: Fabian Ebner
---
PVE/CLI/pve6to7.pm | 71 ++
1 file changed, 71 insertions(+)
diff --git a/PVE/CLI/pve6to7.pm b/PVE/CLI/pve6to7.pm
index 163f5e4a..6c1c3726 100644
--- a/PVE/CLI/pve6to7.pm
+++ b/P
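The kind of suite check this patch adds can be illustrated with a small standalone sketch (the file contents, path, and regex below are assumptions for demonstration only; the real check lives in PVE/CLI/pve6to7.pm and is written in Perl):

```shell
# Illustrative only: flag APT source entries that still reference the old
# Debian suite ("buster") and therefore need updating for the new release.
cat > /tmp/sources.list.demo <<'EOF'
deb http://deb.debian.org/debian buster main contrib
deb http://deb.debian.org/debian bullseye main contrib
EOF
# Match "buster" as a whole suite word (also catching "buster-updates" etc.)
grep -E '(^|[[:space:]])buster([[:space:]-]|$)' /tmp/sources.list.demo
```

Here only the first line is reported, since `bullseye` does not match the old-suite pattern.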
On 06.07.21 14:13, Fabian Grünbichler wrote:
> reduce checks, adapt version guards, make the whole thing work with
> pve-manager 7.x
>
> last patch is stable-6 only, rest is for both branches.
>
> Fabian Grünbichler (5):
> pve6to7: use new flags API
> pve6to7: remove PASS noise for ceph
> p
On 06.07.21 14:20, Oguz Bektas wrote:
> gets clunky with a lot of disks and partitions when all of them are
> expanded by default.
> so we can set the default to 'false' and let the user expand as they wish.
>
> Signed-off-by: Oguz Bektas
> ---
>
> requested by user on forum:
> https://forum.pro
Extracting the config for zstd compressed vma files was broken:
Failed to extract config from VMA archive: zstd: error 70 : Write
error : cannot write decoded block : Broken pipe (500)
since the error message changed and wouldn't match anymore.
Signed-off-by: Fabian Ebner
---
Hotfix for
On 06.07.21 15:47, Fabian Ebner wrote:
> Extracting the config for zstd compressed vma files was broken:
> Failed to extract config from VMA archive: zstd: error 70 : Write
> error : cannot write decoded block : Broken pipe (500)
> since the error message changed and wouldn't match anymore.
mostly copied from the wiki-page[0], and adapted to include LIO as
target provider.
Additionally I added a note explaining that the plugin needs ZFS on
the target side (and does not make your SAN speak ZFS).
Tested during the PVE 7.0 tests I did for the plugin.
[0] https://pve.proxmox.com/wiki/St