Re: [pve-devel] [PATCH qemu-server] api2: fix vmconfig_apply_pending errors handling

2021-07-06 Thread Fabian Grünbichler
On July 6, 2021 12:02 am, Alexandre Derumier wrote:
> commit
> https://git.proxmox.com/?p=qemu-server.git;a=commit;h=eb5e482ded9ae6aeb6575de9441b79b90a5de531
> 
> introduced error handling for offline pending apply,
> 
> -   PVE::QemuServer::vmconfig_apply_pending($vmid, $conf, $storecfg, $running);
> +   PVE::QemuServer::vmconfig_apply_pending($vmid, $conf, $storecfg, $running, $errors);
> 
>  sub vmconfig_apply_pending {
> -my ($vmid, $conf, $storecfg) = @_;
> +my ($vmid, $conf, $storecfg, $errors) = @_;
> 
> but it wrongly kept the unused $running param, so currently $errors is not
> correctly handled

$running was indeed not used since the introduction of hotplug 
functionality in 2015 - but this also means that half of that commit was 
not actually tested (@Oguz - please take another look and confirm it 
works as expected WITH this patch here).

errors should still be handled "correctly" without this patch (as in, 
mostly not ;)), the main difference is whether changing the config 
returns a proper error or not when applying pending changes fails.
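
For context, a rough sketch of the call flow in $update_vm_api (simplified from
the hunks quoted in this thread, not the full qemu-server source):

    use PVE::QemuServer;
    use PVE::Exception qw(raise_param_exc);

    # simplified sketch - error collection when applying pending changes
    my $errors = {};
    if ($running) {
        # hotplug path: each change that fails is recorded in $errors
        PVE::QemuServer::vmconfig_hotplug_pending($vmid, $conf, $storecfg, $modified, $errors);
    } else {
        # offline path: only populates $errors if the sub actually receives it,
        # which is what this patch fixes
        PVE::QemuServer::vmconfig_apply_pending($vmid, $conf, $storecfg, $errors);
    }
    # without the fix the API call "succeeded" even if pending changes failed
    raise_param_exc($errors) if scalar(keys %$errors);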

> 
> Signed-off-by: Alexandre Derumier 
> ---
>  PVE/API2/Qemu.pm | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
> index 1e540f5..f2557e3 100644
> --- a/PVE/API2/Qemu.pm
> +++ b/PVE/API2/Qemu.pm
> @@ -1413,7 +1413,7 @@ my $update_vm_api  = sub {
>   if ($running) {
>   PVE::QemuServer::vmconfig_hotplug_pending($vmid, $conf, $storecfg, $modified, $errors);
>   } else {
> - PVE::QemuServer::vmconfig_apply_pending($vmid, $conf, $storecfg, $running, $errors);
> + PVE::QemuServer::vmconfig_apply_pending($vmid, $conf, $storecfg, $errors);
>   }
>   raise_param_exc($errors) if scalar(keys %$errors);
>  
> -- 
> 2.20.1
> 
> 
> ___
> pve-devel mailing list
> pve-devel@lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
> 
> 
> 


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



Re: [pve-devel] [PATCH qemu-server] api2: fix vmconfig_apply_pending errors handling

2021-07-06 Thread Oguz Bektas
hi,

true, it seems that parameter was leftover! thanks for noticing that.

now i tested alexandre's patch.

when i have a pending change that cannot be applied, the appropriate
error message is returned (unable to apply pending change: foo) and it
stays in the [PENDING] section of the config.

Tested-by: Oguz Bektas 


On Tue, Jul 06, 2021 at 10:10:23AM +0200, Fabian Grünbichler wrote:
> On July 6, 2021 12:02 am, Alexandre Derumier wrote:
> > commit
> > https://git.proxmox.com/?p=qemu-server.git;a=commit;h=eb5e482ded9ae6aeb6575de9441b79b90a5de531
> > 
> > introduced error handling for offline pending apply,
> > 
> > -   PVE::QemuServer::vmconfig_apply_pending($vmid, $conf, $storecfg, $running);
> > +   PVE::QemuServer::vmconfig_apply_pending($vmid, $conf, $storecfg, $running, $errors);
> > 
> >  sub vmconfig_apply_pending {
> > -my ($vmid, $conf, $storecfg) = @_;
> > +my ($vmid, $conf, $storecfg, $errors) = @_;
> > 
> > but it wrongly kept the unused $running param, so currently $errors is not
> > correctly handled
> 
> $running was indeed not used since the introduction of hotplug 
> functionality in 2015 - but this also means that half of that commit was 
> not actually tested (@Oguz - please take another look and confirm it 
> works as expected WITH this patch here).
> 
> errors should still be handled "correctly" without this patch (as in, 
> mostly not ;)), the main difference is whether changing the config 
> returns a proper error or not when applying pending changes fails.
> 
> > 
> > Signed-off-by: Alexandre Derumier 
> > ---
> >  PVE/API2/Qemu.pm | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> > 
> > diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
> > index 1e540f5..f2557e3 100644
> > --- a/PVE/API2/Qemu.pm
> > +++ b/PVE/API2/Qemu.pm
> > @@ -1413,7 +1413,7 @@ my $update_vm_api  = sub {
> > if ($running) {
> > PVE::QemuServer::vmconfig_hotplug_pending($vmid, $conf, $storecfg, $modified, $errors);
> > } else {
> > -   PVE::QemuServer::vmconfig_apply_pending($vmid, $conf, $storecfg, $running, $errors);
> > +   PVE::QemuServer::vmconfig_apply_pending($vmid, $conf, $storecfg, $errors);
> > }
> > raise_param_exc($errors) if scalar(keys %$errors);
> >  
> > -- 
> > 2.20.1
> > 
> > 
> > ___
> > pve-devel mailing list
> > pve-devel@lists.proxmox.com
> > https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
> > 
> > 
> > 


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH storage] lvm: wipe signatures on lvcreate

2021-07-06 Thread Stoiko Ivanov
With PVE 7.0 we use upstream's lvm2 packages, which seem to detect
'more' signatures (and refuse creating lvs when they are present)

This prevents creating new disks on LVM (thick) storages as reported
on pve-user [0].

Adding -Wy to wipe signatures, and --yes (to actually wipe them
instead of prompting) fixes the aborted lvcreate.
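
For illustration, a rough sketch of the command the patched helper builds
(size/name/tag values below are made-up examples; the added flags are the only
real difference):

    my $size = '32768k';          # example value
    my $name = 'vm-100-disk-0';   # example value
    my $tags = ['pve-vm-100'];    # example value
    my $cmd = ['/sbin/lvcreate', '-aly', '-Wy', '--yes', '--size', $size, '--name', $name];
    push @$cmd, '--addtag', $_ for @$tags;
    # roughly: lvcreate -aly -Wy --yes --size 32768k --name vm-100-disk-0 --addtag pve-vm-100
    # (the volume group name is appended afterwards, as in the existing helper)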

Adding only to LVMPlugin and not to the lvcreate calls in
LvmThinPlugin, since I assume (and my quick tests confirm) that thin
pools are not affected by this issue..

Tested on a virtual test-setup with a LVM storage on a (virtual) iscsi
target and a local lvmthin storage.

[0] https://lists.proxmox.com/pipermail/pve-user/2021-July/172660.html

Signed-off-by: Stoiko Ivanov 
---
 PVE/Storage/LVMPlugin.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/PVE/Storage/LVMPlugin.pm b/PVE/Storage/LVMPlugin.pm
index 039bfc1..139d391 100644
--- a/PVE/Storage/LVMPlugin.pm
+++ b/PVE/Storage/LVMPlugin.pm
@@ -330,7 +330,7 @@ sub lvcreate {
$size .= "k"; # default to kilobytes
 }
 
-my $cmd = ['/sbin/lvcreate', '-aly', '--size', $size, '--name', $name];
+my $cmd = ['/sbin/lvcreate', '-aly', '-Wy', '--yes', '--size', $size, '--name', $name];
 for my $tag (@$tags) {
push @$cmd, '--addtag', $tag;
 }
-- 
2.30.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] applied: [PATCH storage] lvm: wipe signatures on lvcreate

2021-07-06 Thread Thomas Lamprecht
On 06.07.21 11:50, Stoiko Ivanov wrote:
> With PVE 7.0 we use upstream's lvm2 packages, which seem to detect
> 'more' signatures (and refuse creating lvs when they are present)
> 
> This prevents creating new disks on LVM (thick) storages as reported
> on pve-user [0].
> 
> Adding -Wy to wipe signatures, and --yes (to actually wipe them
> instead of prompting) fixes the aborted lvcreate.
> 
> Adding only to LVMPlugin and not to the lvcreate calls in
> LvmThinPlugin, since I assume (and my quick tests confirm) that thin
> pools are not affected by this issue..
> 
> Tested on a virtual test-setup with a LVM storage on a (virtual) iscsi
> target and a local lvmthin storage.
> 
> [0] https://lists.proxmox.com/pipermail/pve-user/2021-July/172660.html
> 
> Signed-off-by: Stoiko Ivanov 
> ---
>  PVE/Storage/LVMPlugin.pm | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
>

applied, thanks!


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] applied: [PATCH qemu-server] api2: fix vmconfig_apply_pending errors handling

2021-07-06 Thread Thomas Lamprecht
On 06.07.21 00:02, Alexandre Derumier wrote:
> commit
> https://git.proxmox.com/?p=qemu-server.git;a=commit;h=eb5e482ded9ae6aeb6575de9441b79b90a5de531
> 
> introduced error handling for offline pending apply,
> 
> -   PVE::QemuServer::vmconfig_apply_pending($vmid, $conf, $storecfg, $running);
> +   PVE::QemuServer::vmconfig_apply_pending($vmid, $conf, $storecfg, $running, $errors);
> 
>  sub vmconfig_apply_pending {
> -my ($vmid, $conf, $storecfg) = @_;
> +my ($vmid, $conf, $storecfg, $errors) = @_;
> 
> but it wrongly kept the unused $running param, so currently $errors is not
> correctly handled
> 
> Signed-off-by: Alexandre Derumier 
> ---
>  PVE/API2/Qemu.pm | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
>

applied, with Oguz T-b, thanks!


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH] add patch to reload after first install

2021-07-06 Thread Dominik Csapak
when installing for the first time we want to reload the network config,
since sometimes the network will not be configured, e.g. when
coming from ifupdown. this would break installing ifupdown2 over
the network (e.g. ssh)

Signed-off-by: Dominik Csapak 
---
 ...load-network-config-on-first-install.patch | 26 +++
 debian/patches/series |  1 +
 2 files changed, 27 insertions(+)
 create mode 100644 debian/patches/pve/0013-postinst-reload-network-config-on-first-install.patch

diff --git a/debian/patches/pve/0013-postinst-reload-network-config-on-first-install.patch b/debian/patches/pve/0013-postinst-reload-network-config-on-first-install.patch
new file mode 100644
index 000..25c6851
--- /dev/null
+++ b/debian/patches/pve/0013-postinst-reload-network-config-on-first-install.patch
@@ -0,0 +1,26 @@
+From  Mon Sep 17 00:00:00 2001
+From: Dominik Csapak 
+Date: Tue, 6 Jul 2021 13:11:24 +0200
+Subject: [PATCH] postinst: reload network config on first install
+
+Signed-off-by: Dominik Csapak 
+---
+ debian/ifupdown2.postinst | 2 ++
+ 1 file changed, 2 insertions(+)
+
+diff --git a/debian/ifupdown2.postinst b/debian/ifupdown2.postinst
+index b7de485..eaade7c 100644
+--- a/debian/ifupdown2.postinst
++++ b/debian/ifupdown2.postinst
+@@ -113,6 +113,8 @@ case "$1" in
+ postinst_remove_diverts
+ if [ -f "/tmp/.ifupdown2-first-install" ]; then
+ proxmox_compatibility
++echo "Reloading network config on first install"
++ifreload -a
+ rm  /tmp/.ifupdown2-first-install
+ fi
+ ;;
+-- 
+2.30.2
+
diff --git a/debian/patches/series b/debian/patches/series
index 2cb57a0..c8bcffb 100644
--- a/debian/patches/series
+++ b/debian/patches/series
@@ -10,3 +10,4 @@ pve/0009-allow-vlan-tag-inside-vxlan-tunnel.patch
 pve/0010-postinst-rm-update-network-config-compatibility.patch
 pve/0011-d-rules-drop-now-default-with-systemd.patch
 pve/0012-d-rules-add-dh_installsystemd-override-for-compat-12.patch
+pve/0013-postinst-reload-network-config-on-first-install.patch
-- 
2.30.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] Proxmox VE 7.0 released!

2021-07-06 Thread Martin Maurer

Hi all,

It's our pleasure to announce the stable version 7.0 of Proxmox Virtual Environment. It's 
based on the great Debian 11 "Bullseye" and comes with a 5.11 kernel, QEMU 6.0, 
LXC 4.0, OpenZFS 2.0.4, and countless enhancements and bugfixes.

Here is a selection of the highlights

- Debian 11 "Bullseye", but using a newer Linux kernel 5.11
- LXC 4.0, QEMU 6.0, OpenZFS 2.0.4
- Ceph Pacific 16.2 as new default; Ceph Octopus 15.2 remains supported.
- Btrfs storage technology with subvolume snapshots, built-in RAID, and 
self-healing via checksumming for data and metadata.
- New ‘Repositories’ Panel for easy management of the package repositories with 
the GUI.
- Single Sign-On (SSO) with OpenID Connect
- QEMU 6.0 with ‘io_uring’, a clean-up option for un-referenced VM disks
- LXC 4.0 has full support for cgroups2
- Reworked Proxmox installer environment
- ACME standalone plugin with improved support for dual-stacked (IPv4 and IPv6) 
environments
- ifupdown2 as default for new installations
- chrony as the default NTP daemon
- and many more enhancements, bugfixes, etc.

As always, we have included countless bugfixes and improvements on many places; 
see the release notes for all details.

Release notes
https://pve.proxmox.com/wiki/Roadmap#Proxmox_VE_7.0

Press release
https://www.proxmox.com/en/news/press-releases/proxmox-virtual-environment-7-0

Video tutorial
https://www.proxmox.com/en/training/video-tutorials/item/what-s-new-in-proxmox-ve-7-0

Download
https://www.proxmox.com/en/downloads
Alternate ISO download:
http://download.proxmox.com/iso

Documentation
https://pve.proxmox.com/pve-docs

Community Forum
https://forum.proxmox.com

Bugtracker
https://bugzilla.proxmox.com

Source code
https://git.proxmox.com

We want to shout out a big THANK YOU to our active community for all your 
intensive feedback, testing, bug reporting and patch submitting!

FAQ
Q: Can I upgrade Proxmox VE 6.4 to 7.0 with apt?
A: Please follow the upgrade instructions on 
https://pve.proxmox.com/wiki/Upgrade_from_6.x_to_7.0

Q: Can I upgrade a 7.0 beta installation to the stable 7.0 release via apt?
A: Yes.

Q: Can I install Proxmox VE 7.0 on top of Debian 11 "Bullseye"?
A: Yes, see 
https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_11_Bullseye

Q: Why is Proxmox VE 7.0 released ahead of the stable Debian 11 release?
A: The Debian project team postponed their plans for the May release mainly due 
to an unresolved issue in the Debian installer. Since we maintain our own 
Proxmox installer, we are not affected by this particular issue, therefore we 
have decided to release earlier. The core packages of Proxmox VE are either 
maintained by the Proxmox team or are already subject to the very strict Debian 
freeze policy for essential packages.

Q: Can I upgrade my Proxmox VE 6.4 cluster with Ceph Octopus to 7.0 with Ceph 
Octopus or even Pacific?
A: This is a two step process. First, you have to upgrade Proxmox VE from 6.4 
to 7.0, and afterwards upgrade Ceph from Octopus to Pacific. There are a lot of 
improvements and changes, so please follow exactly the upgrade documentation:
https://pve.proxmox.com/wiki/Upgrade_from_6.x_to_7.0
https://pve.proxmox.com/wiki/Ceph_Octopus_to_Pacific

Q: Where can I get more information about feature updates?
A: Check the roadmap, forum, the mailing list, and/or subscribe to our 
newsletter.

--
Best Regards,

Martin Maurer

mar...@proxmox.com
https://www.proxmox.com


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] applied: [PATCH] add patch to reload after first install

2021-07-06 Thread Thomas Lamprecht
On 06.07.21 13:42, Dominik Csapak wrote:
> when installing for the first time we want to reload the network config,
> since sometimes the network will not be configured, e.g. when
> coming from ifupdown. this would break installing ifupdown2 over
> the network (e.g. ssh)
> 
> Signed-off-by: Dominik Csapak 
> ---
>  ...load-network-config-on-first-install.patch | 26 +++
>  debian/patches/series |  1 +
>  2 files changed, 27 insertions(+)
>  create mode 100644 
> debian/patches/pve/0013-postinst-reload-network-config-on-first-install.patch
> 
>

applied, thanks!


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH proxmox-archive-keyring] bump version to 2.0

2021-07-06 Thread Fabian Grünbichler
Signed-off-by: Fabian Grünbichler 
---
 debian/changelog   |   6 ++
 debian/proxmox-archive-keyring.install |   1 -
 debian/proxmox-archive-keyring.maintscript |   1 +
 debian/proxmox-release-stretch.gpg | Bin 1181 -> 0 bytes
 4 files changed, 7 insertions(+), 1 deletion(-)
 create mode 100644 debian/proxmox-archive-keyring.maintscript
 delete mode 100644 debian/proxmox-release-stretch.gpg

diff --git a/debian/changelog b/debian/changelog
index b619c98..f52275c 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,3 +1,9 @@
+proxmox-archive-keyring (2.0) pbs pmg pve; urgency=medium
+
+  * remove Stretch / 5.x release key
+
+ -- Proxmox Support Team   Tue, 6 Jul 2021 14:02:26 +0200
+
 proxmox-archive-keyring (1.1) pbs pmg pve; urgency=medium
 
   * add new release key for Debian Bullseye
diff --git a/debian/proxmox-archive-keyring.install b/debian/proxmox-archive-keyring.install
index bf4ff15..c3e0f19 100644
--- a/debian/proxmox-archive-keyring.install
+++ b/debian/proxmox-archive-keyring.install
@@ -1,3 +1,2 @@
-debian/proxmox-release-stretch.gpg etc/apt/trusted.gpg.d/
 debian/proxmox-release-buster.gpg etc/apt/trusted.gpg.d/
 debian/proxmox-release-bullseye.gpg etc/apt/trusted.gpg.d/
diff --git a/debian/proxmox-archive-keyring.maintscript b/debian/proxmox-archive-keyring.maintscript
new file mode 100644
index 000..c755bae
--- /dev/null
+++ b/debian/proxmox-archive-keyring.maintscript
@@ -0,0 +1 @@
+rm_conffile /etc/apt/trusted.gpg.d/proxmox-release-stretch.gpg 2.0~~
diff --git a/debian/proxmox-release-stretch.gpg b/debian/proxmox-release-stretch.gpg
deleted file mode 100644
index 8488f4597a19764cefa9f505198cf9cade46a7a7..
GIT binary patch
literal 0
HcmV?d1

literal 1181
zcmV;O1Y-M{0u2OL!^(#N5CFlbWz^cmS$iU3j`tWHxifM%&-CJff@AGol
zsH@ILM|Nb@iXiJR!?bucn(rCU?!EmI{EML$~s&c;(5!^@tb|ry{`aO#N9+z^lU6`^U=!f7Tc$!
zoJu!T$l!B7y`0~@JAD*={9he%hY3_Mf^*m_n{^#=7AZ`V;vucOl(~YCCW7VZe`h<2?yaA!=_po7;jXswMD!B8Z8z
zv~|Y_Wr@@QYUi%aIdM4CyU;e1HqVb+HdJ^)i{)PH%+mr(Dvyg5H|>w4qss@3b=}2B
zJVsD78XMLlq!+f)>p@F#6Hg?f4tz~`wNZO%${Xjm>06*^Ldp3*24+}O`)fDgL+eY0
zly<6cva%?+#m{)d>C-*cJO03{B;`!5Q)oB~kK)iFk$rTH+FUqC)t^o&dlK4i?MM<{
z1SU1+FvTC!pxXcu0RRECNlxCFZDnqBAT=&{AW~&)
zWnpt=AWLO=AUtq#Z+LBQcr9{eY-M3{Wk7IpZ+LBQcrIgaZ9a(tKLis22mmPs0$0Pz
zhXNY|1ql+&0{{mL2?z%R0s#gU2m%QT3j`Jd0|5da0Rk6*0162Z4VoEH;_n6nIz$it
zD62qC$c7Ov6A}vxxO3<;ku+a|R)os7jNY@JZ0P1f6_|CzlY;Si>S;2902em#|7NW2>mD{k4seJ&(fq
zKbBzToOheFC9lLCveIcBXbe2>4;<$fAUb)={>gB0=oB}<=IWOsdVX6XII*|BN~?M$
zWHpZ-)zgJP<2#`Onrs!pt^LXQD@AhXs*iy{
zGRVl$2mg#JlKI=JbN2)Q^@YYqmVS#jA2w}xrp0`|ERjkM#gJ6>YiM-?0
zaGVG(Nrs}d?IiohfiKng(O8Fh`a?mMW0(T@+itGYCodZ!=xW5$W~QiniSA{$&RDk0
zDO5xvT6p{&cgVhKc5_1n_QDP5TpK`u=3se|ZVQ8}8h6U(82z8})ztXRz)7WwAwxA2
z&KNr8DY^SB8XS6U+M3NBD;MArAN}nLN+un7G|s#2IFOq*#h4IJ1CUQG9M6?lxOwS=
v%Kl)1hf;D({;(*NGd#D)wu}8xdg|eIA%q2|pVoL}OgjDN+!-q6Et(MSB{?cK

-- 
2.30.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH manager 1/5] pve6to7: use new flags API

2021-07-06 Thread Fabian Grünbichler
the old one is not available post-upgrade, let's use a single codepath
for this.

the new API only allows querying user-settable flags, but the only flags
we check besides 'noout' are not relevant for an upgrade of PVE 6.x to
7.x (PVE 6.x only supports Nautilus+ which requires these flags to be
set in order to work) so we can just drop those outdated checks instead
of extending/refactoring the API.
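
To make the change concrete, a rough before/after sketch of the flag query
(condensed from the hunk below, not a full excerpt of check_ceph):

    # old, pve-manager 6.x only: fetch the full flag string and grep it
    my $osd_flags = eval { PVE::API2::Ceph->get_flags({ node => $nodename }); };
    my $noout_old = $osd_flags && $osd_flags =~ m/noout/;

    # new, works before and after the upgrade: query the single flag directly
    my $noout = eval { PVE::API2::Cluster::Ceph->get_flag({ flag => "noout" }); };
    log_fail("failed to get 'noout' flag status - $@") if $@;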

Signed-off-by: Fabian Grünbichler 
---
 PVE/CLI/pve6to7.pm | 19 ++-
 1 file changed, 6 insertions(+), 13 deletions(-)

diff --git a/PVE/CLI/pve6to7.pm b/PVE/CLI/pve6to7.pm
index 60edac11..c693e0d4 100644
--- a/PVE/CLI/pve6to7.pm
+++ b/PVE/CLI/pve6to7.pm
@@ -8,6 +8,7 @@ use PVE::API2::Ceph;
 use PVE::API2::LXC;
 use PVE::API2::Qemu;
 use PVE::API2::Certificates;
+use PVE::API2::Cluster::Ceph;
 
 use PVE::AccessControl;
 use PVE::Ceph::Tools;
@@ -389,9 +390,12 @@ sub check_ceph {
 
 log_info("getting Ceph status/health information..");
 my $ceph_status = eval { PVE::API2::Ceph->status({ node => $nodename }); };
-my $osd_flags = eval { PVE::API2::Ceph->get_flags({ node => $nodename }); };
+my $noout = eval { PVE::API2::Cluster::Ceph->get_flag({ flag => "noout" }); };
+if ($@) {
+   log_fail("failed to get 'noout' flag status - $@");
+}
+
 my $noout_wanted = 1;
-my $noout = $osd_flags && $osd_flags =~ m/noout/;
 
 if (!$ceph_status || !$ceph_status->{health}) {
log_fail("unable to determine Ceph status!");
@@ -409,17 +413,6 @@ sub check_ceph {
}
 }
 
-log_info("getting Ceph OSD flags..");
-eval {
-   if (!$osd_flags) {
-   log_fail("unable to get Ceph OSD flags!");
-   } else {
-   if (!($osd_flags =~ m/recovery_deletes/ && $osd_flags =~ m/purged_snapdirs/)) {
-   log_fail("missing 'recovery_deletes' and/or 'purged_snapdirs' flag, scrub of all PGs required before upgrading to Nautilus!");
-   }
-   }
-};
-
 # TODO: check OSD min-required version, if to low it breaks stuff!
 
 log_info("getting Ceph daemon versions..");
-- 
2.30.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH manager 2/5] pve6to7: remove PASS noise for ceph

2021-07-06 Thread Fabian Grünbichler
these were mostly relevant for the Luminous -> Nautilus upgrade, and we
don't need to list all the default passing states that our tooling sets
up anyway.

Signed-off-by: Fabian Grünbichler 
---
 PVE/CLI/pve6to7.pm | 8 
 1 file changed, 8 deletions(-)

diff --git a/PVE/CLI/pve6to7.pm b/PVE/CLI/pve6to7.pm
index c693e0d4..65ee5a66 100644
--- a/PVE/CLI/pve6to7.pm
+++ b/PVE/CLI/pve6to7.pm
@@ -473,8 +473,6 @@ sub check_ceph {
	my $global_monhost = $global->{mon_host} // $global->{"mon host"} // $global->{"mon-host"};
	if (!defined($global_monhost)) {
	    log_warn("No 'mon_host' entry found in ceph config.\n  It's recommended to add mon_host with all monitor addresses (without ports) to the global section.");
-   } else {
-   log_pass("Found 'mon_host' entry.");
}
 
	my $ipv6 = $global->{ms_bind_ipv6} // $global->{"ms bind ipv6"} // $global->{"ms-bind-ipv6"};
@@ -482,17 +480,11 @@ sub check_ceph {
	my $ipv4 = $global->{ms_bind_ipv4} // $global->{"ms bind ipv4"} // $global->{"ms-bind-ipv4"};
	if ($ipv6 eq 'true' && (!defined($ipv4) || $ipv4 ne 'false')) {
	    log_warn("'ms_bind_ipv6' is enabled but 'ms_bind_ipv4' is not disabled.\n  Make sure to disable 'ms_bind_ipv4' for ipv6 only clusters, or add an ipv4 network to public/cluster network.");
-	} else {
-	    log_pass("'ms_bind_ipv6' is enabled and 'ms_bind_ipv4' disabled");
}
-   } else {
-   log_pass("'ms_bind_ipv6' not enabled");
}
 
if (defined($global->{keyring})) {
log_warn("[global] config section contains 'keyring' option, which 
will prevent services from starting with Nautilus.\n Move 'keyring' option to 
[client] section instead.");
-   } else {
-   log_pass("no 'keyring' option in [global] section found.");
}
 
 } else {
-- 
2.30.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH manager 3/5] pve6to7: check for >= Octopus

2021-07-06 Thread Fabian Grünbichler
and drop the Nautilus OSD upgrade check while we are at it..

Signed-off-by: Fabian Grünbichler 
---
 PVE/CLI/pve6to7.pm | 8 ++--
 1 file changed, 2 insertions(+), 6 deletions(-)

diff --git a/PVE/CLI/pve6to7.pm b/PVE/CLI/pve6to7.pm
index 65ee5a66..00f922bb 100644
--- a/PVE/CLI/pve6to7.pm
+++ b/PVE/CLI/pve6to7.pm
@@ -493,12 +493,8 @@ sub check_ceph {
 
 my $local_ceph_ver = PVE::Ceph::Tools::get_local_version(1);
 if (defined($local_ceph_ver)) {
-   if ($local_ceph_ver == 14) {
-   my $ceph_volume_osds = PVE::Ceph::Tools::ceph_volume_list();
-   my $scanned_osds = PVE::Tools::dir_glob_regex('/etc/ceph/osd', '^.*\.json$');
-   if (-e '/var/lib/ceph/osd/' && !defined($scanned_osds) && !(keys %$ceph_volume_osds)) {
-   log_warn("local Ceph version is Nautilus, local OSDs detected, but no conversion from ceph-disk to ceph-volume done (yet).");
-   }
+   if ($local_ceph_ver <= 14) {
+   log_fail("local Ceph version too low, at least Octopus required..");
}
 } else {
log_fail("unable to determine local Ceph version.");
-- 
2.30.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH manager 4/5] pve6to7: dont guard noout check on Ceph version

2021-07-06 Thread Fabian Grünbichler
we don't have a mandatory Ceph major version upgrade this time around,
so this check does not make sense. instead, we want noout until the full
cluster is upgraded. let's use the simple approach and just flip the
switch to "turn off noout if all of Ceph is a single version" in the PVE
7.x branch.

Signed-off-by: Fabian Grünbichler 
---

Notes:
next patch sets it for the stable-6 branch

 PVE/CLI/pve6to7.pm | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/PVE/CLI/pve6to7.pm b/PVE/CLI/pve6to7.pm
index 00f922bb..36e6676f 100644
--- a/PVE/CLI/pve6to7.pm
+++ b/PVE/CLI/pve6to7.pm
@@ -447,9 +447,7 @@ sub check_ceph {
log_warn("unable to determine overall Ceph daemon versions!");
} elsif (keys %$overall_versions == 1) {
log_pass("single running overall version detected for all Ceph 
daemon types.");
-   if ((keys %$overall_versions)[0] =~ /^ceph version 15\./) {
-   $noout_wanted = 0;
-   }
+   $noout_wanted = 0; # off post-upgrade, on pre-upgrade
} else {
log_warn("overall version mismatch detected, check 'ceph versions' 
output for details!");
}
-- 
2.30.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH stable-6 manager 5/5] pve6to7: enable noout before upgrade

2021-07-06 Thread Fabian Grünbichler
even if the cluster-wide Ceph versions are uniform.

Signed-off-by: Fabian Grünbichler 
---
 PVE/CLI/pve6to7.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/PVE/CLI/pve6to7.pm b/PVE/CLI/pve6to7.pm
index 36e6676f..db93fa68 100644
--- a/PVE/CLI/pve6to7.pm
+++ b/PVE/CLI/pve6to7.pm
@@ -447,7 +447,7 @@ sub check_ceph {
log_warn("unable to determine overall Ceph daemon versions!");
} elsif (keys %$overall_versions == 1) {
log_pass("single running overall version detected for all Ceph 
daemon types.");
-   $noout_wanted = 0; # off post-upgrade, on pre-upgrade
+   $noout_wanted = 1; # off post-upgrade, on pre-upgrade
} else {
log_warn("overall version mismatch detected, check 'ceph versions' 
output for details!");
}
-- 
2.30.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH manager 0/5] pve6to7 ceph fixes

2021-07-06 Thread Fabian Grünbichler
reduce checks, adapt version guards, make the whole thing work with
pve-manager 7.x

last patch is stable-6 only, rest is for both branches.

Fabian Grünbichler (5):
  pve6to7: use new flags API
  pve6to7: remove PASS noise for ceph
  pve6to7: check for >= Octopus
  pve6to7: dont guard noout check on Ceph version
  pve6to7: enable noout before upgrade

 PVE/CLI/pve6to7.pm | 39 +--
 1 file changed, 9 insertions(+), 30 deletions(-)

-- 
2.30.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH widget-toolkit] start node disk view unexpanded

2021-07-06 Thread Oguz Bektas
gets clunky with a lot of disks and partitions when all of them are
expanded by default.
so we can set the default to 'false' and let the user expand as they wish.

Signed-off-by: Oguz Bektas 
---

requested by user on forum:
https://forum.proxmox.com/threads/start-disk-view-unexpanded.89195/


 src/panel/DiskList.js | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/panel/DiskList.js b/src/panel/DiskList.js
index 90a6553..acbcdf6 100644
--- a/src/panel/DiskList.js
+++ b/src/panel/DiskList.js
@@ -168,7 +168,7 @@ Ext.define('Proxmox.DiskList', {
for (const item of records) {
let data = item.data;
data.leaf = true;
-   data.expanded = true;
+   data.expanded = false;
data.children = [];
data.iconCls = 'fa fa-fw fa-hdd-o x-fa-tree';
if (!data.parent) {
-- 
2.30.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH manager] pve6to7: add check for Debian security repository

2021-07-06 Thread Fabian Ebner
since the pattern for the suite changed.

Signed-off-by: Fabian Ebner 
---
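For reference, a standalone sketch of how the added check extracts the suite
from typical repository lines (example lines only; the patterns are the ones
added in the hunk below):

    my @examples = (
        'deb http://security.debian.org/debian-security buster/updates main contrib',
        'deb https://deb.debian.org/debian-security bullseye-security main',
    );
    for my $line (@examples) {
        my $suite;
        if ($line =~ m|https?://deb\.debian\.org/debian-security/?\s+(\S*)|i) {
            $suite = $1;
        } elsif ($line =~ m|https?://security\.debian\.org(?:.*?)\s+(\S*)|i) {
            $suite = $1;
        }
        # prints 'buster/updates' (needs changing) and 'bullseye-security' (ok)
        print "$suite\n" if defined($suite);
    }
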
 PVE/CLI/pve6to7.pm | 71 ++
 1 file changed, 71 insertions(+)

diff --git a/PVE/CLI/pve6to7.pm b/PVE/CLI/pve6to7.pm
index 163f5e4a..6c1c3726 100644
--- a/PVE/CLI/pve6to7.pm
+++ b/PVE/CLI/pve6to7.pm
@@ -1016,6 +1016,76 @@ sub check_containers_cgroup_compat {
 }
 };
 
+sub check_security_repo {
+log_info("Checking if the suite for the Debian security repository is 
correct..");
+
+my $found = 0;
+
+my $dir = '/etc/apt/sources.list.d';
+my $in_dir = 0;
+
+my $check_file = sub {
+   my ($file) = @_;
+
+   $file = "${dir}/${file}" if $in_dir;
+
+   my $raw = eval { PVE::Tools::file_get_contents($file) };
+   return if !defined($raw);
+   my @lines = split(/\n/, $raw);
+
+   my $number = 0;
+   for my $line (@lines) {
+   $number++;
+
+   next if length($line) == 0; # split would result in undef then...
+
+   ($line) = split(/#/, $line);
+
+   next if $line !~ m/^deb/; # is case sensitive
+
+   my $suite;
+
+   # catch any of
+   # https://deb.debian.org/debian-security
+   # http://security.debian.org/debian-security
+   # http://security.debian.org/
+   if ($line =~ m|https?://deb\.debian\.org/debian-security/?\s+(\S*)|i) {
+   $suite = $1;
+   } elsif ($line =~ m|https?://security\.debian\.org(?:.*?)\s+(\S*)|i) {
+   $suite = $1;
+   } else {
+   next;
+   }
+
+   $found = 1;
+
+   my $where = "in ${file}:${number}";
+
+   if ($suite eq 'buster/updates') {
+   log_info("Make sure to change the suite of the Debian security 
repository " .
+   "from 'buster/updates' to 'bullseye-security' - $where");
+   } elsif ($suite eq 'bullseye-security') {
+   log_pass("already using 'bullseye-security'");
+   } else {
+   log_fail("The new suite of the Debian security repository 
should be " .
+   "'bullseye-security' - $where");
+   }
+   }
+};
+
+$check_file->("/etc/apt/sources.list");
+
+$in_dir = 1;
+
+PVE::Tools::dir_glob_foreach($dir, '^.*\.list$', $check_file);
+
+if (!$found) {
+   # only warn, it might be defined in a .sources file or in a way not catched above
+   log_warn("No Debian security repository detected in /etc/apt/sources.list and " .
+   "/etc/apt/sources.list.d/*.list");
+}
+}
+
 sub check_misc {
 print_header("MISCELLANEOUS CHECKS");
 my $ssh_config = eval { PVE::Tools::file_get_contents('/root/.ssh/config') };
@@ -1118,6 +1188,7 @@ sub check_misc {
 check_custom_pool_roles();
 check_description_lengths();
 check_storage_content();
+check_security_repo();
 }
 
 __PACKAGE__->register_method ({
-- 
2.20.1



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] applied-series: [PATCH manager 0/5] pve6to7 ceph fixes

2021-07-06 Thread Thomas Lamprecht
On 06.07.21 14:13, Fabian Grünbichler wrote:
> reduce checks, adapt version guards, make the whole thing work with
> pve-manager 7.x
> 
> last patch is stable-6 only, rest is for both branches.
> 
> Fabian Grünbichler (5):
>   pve6to7: use new flags API
>   pve6to7: remove PASS noise for ceph
>   pve6to7: check for >= Octopus
>   pve6to7: dont guard noout check on Ceph version
>   pve6to7: enable noout before upgrade
> 
>  PVE/CLI/pve6to7.pm | 39 +--
>  1 file changed, 9 insertions(+), 30 deletions(-)
> 



applied series to master and stable-6 respectively, thanks!


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH widget-toolkit] start node disk view unexpanded

2021-07-06 Thread Thomas Lamprecht
On 06.07.21 14:20, Oguz Bektas wrote:
> gets clunky with a lot of disks and partitions when all of them are
> expanded by default.
> so we can set the default to 'false' and let the user expand as they wish.
> 
> Signed-off-by: Oguz Bektas 
> ---
> 
> requested by user on forum:
> https://forum.proxmox.com/threads/start-disk-view-unexpanded.89195/
> 

I do not want that, as by default I want to see more information.

Rather add a collapse-all and expand-all tool and make that stateful so that the
last used one sticks. Could be done similarly to what the api-viewer has (albeit that
one isn't stateful, but that should not be hard):

https://pve.proxmox.com/pve-docs/api-viewer/index.html


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH storage] extract backup config: less precise matching for broken pipe detection

2021-07-06 Thread Fabian Ebner
Extracting the config for zstd compressed vma files was broken:
Failed to extract config from VMA archive: zstd: error 70 : Write
error : cannot write decoded block : Broken pipe (500)
since the error message changed and wouldn't match anymore.

Signed-off-by: Fabian Ebner 
---

Hotfix for now, isn't there a better way to properly handle this?
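
As a quick sanity check, the relaxed pattern still matches the old zstd wording
as well as the current one (sketch; both messages are taken from the diff and
commit message of this patch):

    my @outputs = (
        "zstd: error 70 : Write error : Broken pipe",                              # old wording
        "zstd: error 70 : Write error : cannot write decoded block : Broken pipe", # current wording
    );
    for my $output (@outputs) {
        my $broken_pipe = $output =~ m/zstd: error 70 : Write error.*Broken pipe/;
        print(($broken_pipe ? "matches" : "no match") . ": $output\n");
    }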

 PVE/Storage.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index b8e6311..c04b5a2 100755
--- a/PVE/Storage.pm
+++ b/PVE/Storage.pm
@@ -1560,7 +1560,7 @@ sub extract_vzdump_config_vma {
my $errstring;
my $err = sub {
my $output = shift;
-   if ($output =~ m/lzop: Broken pipe: / || $output =~ m/gzip: stdout: Broken pipe/ || $output =~ m/zstd: error 70 : Write error : Broken pipe/) {
+   if ($output =~ m/lzop: Broken pipe: / || $output =~ m/gzip: stdout: Broken pipe/ || $output =~ m/zstd: error 70 : Write error.*Broken pipe/) {
$broken_pipe = 1;
} elsif (!defined ($errstring) && $output !~ m/^\s*$/) {
$errstring = "Failed to extract config from VMA archive: $output\n";
-- 
2.30.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] applied: [PATCH storage] extract backup config: less precise matching for broken pipe detection

2021-07-06 Thread Thomas Lamprecht
On 06.07.21 15:47, Fabian Ebner wrote:
> Extracting the config for zstd compressed vma files was broken:
> Failed to extract config from VMA archive: zstd: error 70 : Write
> error : cannot write decoded block : Broken pipe (500)
> since the error message changed and wouldn't match anymore.
> 
> Signed-off-by: Fabian Ebner 
> ---
> 
> Hotfix for now, isn't there a better way to properly handle this?


meh, properly handling it is a bit of a PITA and that's why we landed here,
which, short of style fixing in the output like with zstd here, served us
surprisingly well.

Properly would probably mean one of:
* make the vma tool understand the compressions, adding in quite some libraries
  for that single use case, so not too ideal..
* saving the config in some other way, e.g., outside of the archive, so that we
  either reverse the pipe direction (vma gets config blob and then decompresses
  it) or avoid compression for the, rather small config file completely
* adding a control fd to VMA where it can tell that it closed OK, that way we
  could ignore any error if we get an OK written from vma to that FD.
  That wouldn't sound to bad actually, but did not really thought it out..

>  PVE/Storage.pm | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 

for now that'll do, applied, thanks!


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH docs] storage: add minimal zfs over iscsi doc

2021-07-06 Thread Stoiko Ivanov
mostly copied from the wiki-page[0], and adapted to include LIO as
target provider.

Additionally I added a note to explain that the plugin needs ZFS on
the target side (and does not make your SAN speak ZFS)

Tested during the PVE 7.0 tests for the plugin I did.

[0] https://pve.proxmox.com/wiki/Storage:_ZFS_over_iSCSI

Signed-off-by: Stoiko Ivanov 
---
I plan on adding this about once per Debian release (while testing and
wondering why we don't have that in our reference docs) - but the plan
usually gets replaced by something more urgent (and fun).

 pve-storage-zfs.adoc | 139 +++
 pvesm.adoc   |   2 +
 2 files changed, 141 insertions(+)
 create mode 100644 pve-storage-zfs.adoc

diff --git a/pve-storage-zfs.adoc b/pve-storage-zfs.adoc
new file mode 100644
index 000..6801873
--- /dev/null
+++ b/pve-storage-zfs.adoc
@@ -0,0 +1,139 @@
+[[storage_zfs]]
+ZFS over ISCSI Backend
+----------------------
+ifdef::wiki[]
+:pve-toplevel:
+:title: Storage: ZFS over ISCSI
+endif::wiki[]
+
+Storage pool type: `zfs`
+
+This backend accesses a remote machine having a ZFS pool as storage and an 
iSCSI
+target implementation via `ssh`. For each guest disk it creates a ZVOL and
+exports it as an iSCSI LUN. This LUN is used by {pve} for the guest disk.
+
+The following iSCSI target implementations are supported:
+
+* LIO (Linux)
+* IET (Linux)
+* ISTGT (FreeBSD)
+* Comstar (Solaris)
+
+NOTE: This plugin needs a ZFS capable remote storage appliance; you cannot use
+it to create a ZFS Pool on a regular Storage Appliance/SAN.
+
+
+Configuration
+~
+
+In order to use the ZFS over iSCSI plugin you need to configure the remote
+machine (target) to accept `ssh` connections from the {pve} node. {pve} 
connects to the target for creating the ZVOLs and exporting them via iSCSI.
+Authentication is done through a ssh-key (without password protection) stored 
in
+`/etc/pve/priv/zfs/<portal>_id_rsa`
+
+The following steps create a ssh-key and distribute it to the storage machine
+with IP 192.0.2.1:
+
+
+mkdir /etc/pve/priv/zfs
+ssh-keygen -f /etc/pve/priv/zfs/192.0.2.1_id_rsa
+ssh-copy-id -i /etc/pve/priv/zfs/192.0.2.1_id_rsa.pub root@192.0.2.1
+ssh -i /etc/pve/priv/zfs/192.0.2.1_id_rsa root@192.0.2.1
+
+
+The backend supports the common storage properties `content`, `nodes`,
+`disable`, and the following ZFS over ISCSI specific properties:
+
+pool::
+
+The ZFS pool/filesystem on the iSCSI target. All allocations are done within 
that
+pool.
+
+portal::
+
+iSCSI portal (IP or DNS name with optional port).
+
+target::
+
+iSCSI target.
+
+iscsiprovider::
+
+The iSCSI target implementation used on the remote machine
+
+comstar_tg::
+
+target group for comstar views.
+
+comstar_hg::
+
+host group for comstar views.
+
+lio_tpg::
+
+target portal group for Linux LIO targets
+
+nowritecache::
+
+disable write caching on the target
+
+blocksize::
+
+Set ZFS blocksize parameter.
+
+sparse::
+
+Use ZFS thin-provisioning. A sparse volume is a volume whose
+reservation is not equal to the volume size.
+
+
+.Configuration Examples (`/etc/pve/storage.cfg`)
+
+zfs: lio
+   blocksize 4k
+   iscsiprovider LIO
+   pool tank
+   portal 192.0.2.111
+   target iqn.2003-01.org.linux-iscsi.lio.x8664:sn.
+   content images
+   lio_tpg tpg1
+   sparse 1
+
+zfs: solaris
+   blocksize 4k
+   target iqn.2010-08.org.illumos:02:----:tank1
+   pool tank
+   iscsiprovider comstar
+   portal 192.0.2.112
+   content images
+
+zfs: freebsd
+   blocksize 4k
+   target iqn.2007-09.jp.ne.peach.istgt:tank1
+   pool tank
+   iscsiprovider istgt
+   portal 192.0.2.113
+   content images
+
+zfs: iet
+   blocksize 4k
+   target iqn.2001-04.com.example:tank1
+   pool tank
+   iscsiprovider iet
+   portal 192.0.2.114
+   content images
+
+
+Storage Features
+
+
+The ZFS over iSCSI plugin provides a shared storage, which is capable of
+snapshots. You need to make sure that the ZFS appliance does not become a 
single
+point of failure in your deployment.
+
+.Storage features for backend `zfs`
+[width="100%",cols="m,m,3*d",options="header"]
+|==
+|Content types  |Image formats  |Shared |Snapshots |Clones
+|images |raw|yes|yes|no
+|==
diff --git a/pvesm.adoc b/pvesm.adoc
index c8e2347..98c8c44 100644
--- a/pvesm.adoc
+++ b/pvesm.adoc
@@ -436,6 +436,8 @@ include::pve-storage-cephfs.adoc[]
 
 include::pve-storage-btrfs.adoc[]
 
+include::pve-storage-zfs.adoc[]
+
 
 ifdef::manvolnum[]
 include::pve-copyright.adoc[]
-- 
2.30.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel