Re: [pve-devel] [PATCH v2 guest-common 2/2] snapshots: abort if new snapshot name is already parent to existing one

2021-10-14 Thread Fabian Ebner

Am 13.10.21 um 14:31 schrieb Oguz Bektas:

Signed-off-by: Oguz Bektas 
---
  src/PVE/AbstractConfig.pm | 12 ++--
  1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/src/PVE/AbstractConfig.pm b/src/PVE/AbstractConfig.pm
index 3348d8a..6849664 100644
--- a/src/PVE/AbstractConfig.pm
+++ b/src/PVE/AbstractConfig.pm
@@ -721,14 +721,22 @@ sub __snapshot_prepare {
 
 	$conf->{lock} = 'snapshot';
 
+	my $snapshots = $conf->{snapshots};
+
 	die "snapshot name '$snapname' already used\n"
-	    if defined($conf->{snapshots}->{$snapname});
+	    if defined($snapshots->{$snapname});
 
 	my $storecfg = PVE::Storage::config();
 	die "snapshot feature is not available\n"
 	    if !$class->has_feature('snapshot', $conf, $storecfg, undef, undef, $snapname eq 'vzdump');
 
-	$snap = $conf->{snapshots}->{$snapname} = {};
+	foreach my $existing_snapshot (keys %$snapshots) {
+	    my $parent_name = $snapshots->{$existing_snapshot}->{parent} // '';
+	    die "snapshot '$snapname' cannot be used, $snapname already a parent for $existing_snapshot\n"
+		if $snapname eq $parent_name;
+	}

Personally, I'd prefer the "automatically fix it up" approach, because 
we were the ones introducing the invalid property in the first place.


But if we go for the "error out"-approach, the error message should be 
different. You're telling the user that the snapshot name cannot be 
used, but they'll just wonder why. Instead, the error should rather tell 
the user that manual fix-up is required and that the parent property is 
invalid, because the parent doesn't exist.
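To make the suggestion concrete: a rough Python sketch (not the PVE Perl code; function and message wording are illustrative) of the "error out" variant with a message that tells the user the existing snapshot's 'parent' property is invalid and needs manual fix-up:

```python
def check_new_snapshot_name(snapshots: dict, snapname: str) -> None:
    """Refuse a new snapshot name that an existing snapshot already
    lists as its (dangling) parent, and say why."""
    if snapname in snapshots:
        raise ValueError(f"snapshot name '{snapname}' already used")
    for existing, snap in snapshots.items():
        if snap.get("parent") == snapname:
            # point the user at the actual problem: a parent reference
            # to a snapshot that does not exist
            raise ValueError(
                f"snapshot '{existing}' has invalid parent '{snapname}' "
                f"(parent does not exist), manual fix-up required"
            )
```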



+
+	$snap = $snapshots->{$snapname} = {};
 
 	if ($save_vmstate && $class->__snapshot_check_running($vmid)) {
 	    $class->__snapshot_save_vmstate($vmid, $conf, $snapname, $storecfg);




___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH v2 docs] pct: add section for supported distributions

2021-10-14 Thread Oguz Bektas
adds a simple section with a list of officially supported distributions

Signed-off-by: Oguz Bektas 
---
v1->v2:
* removed list of releases, instead put link to releases page for distro (if it exists)
* added simple descriptions from respective websites of distros
* put centos and derivatives into single heading


 pct.adoc | 125 ---
 1 file changed, 119 insertions(+), 6 deletions(-)

diff --git a/pct.adoc b/pct.adoc
index d01e6d7..b504249 100644
--- a/pct.adoc
+++ b/pct.adoc
@@ -84,13 +84,126 @@ Technology Overview
 
 * Modern Linux kernels
 
-* Image based deployment (templates)
+* Image based deployment (xref:pct_supported_distributions[templates])
 
 * Uses {pve} xref:chapter_storage[storage library]
 
 * Container setup from host (network, DNS, storage, etc.)
 
 
+[[pct_supported_distributions]]
+Supported Distributions
+~~~~~~~~~~~~~~~~~~~~~~~
+
+A list of officially supported distributions can be found below.
+
+Templates for the following distributions are available through our
+repositories. You can use the xref:pct_container_images[pveam] tool or the
+Graphical User Interface to download them.
+
+Alpine Linux
+^^^^^^^^^^^^
+
+"Alpine Linux is a security-oriented, lightweight Linux distribution based on
+musl libc and busybox."
+
+https://alpinelinux.org
+
+https://alpinelinux.org/releases/
+
+Archlinux
+^^^^^^^^^
+
+"a lightweight and flexible Linux® distribution that tries to Keep It Simple."
+
+https://wiki.archlinux.org/title/Arch_Linux
+
+
+CentOS, Almalinux, Rocky Linux
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+__CentOS__: "The CentOS Linux distribution is a stable, predictable,
+manageable and reproducible platform derived from the sources of Red Hat
+Enterprise Linux (RHEL)"
+
+https://centos.org
+
+https://wiki.centos.org/About/Product
+
+__Almalinux__: "An Open Source, community owned and governed, forever-free
+enterprise Linux distribution, focused on long-term stability, providing a
+robust production-grade platform. AlmaLinux OS is 1:1 binary compatible with
+RHEL® and pre-Stream CentOS."
+
+https://almalinux.org
+
+https://en.wikipedia.org/wiki/AlmaLinux#Releases
+
+__Rocky Linux__: "Rocky Linux is a community enterprise operating system
+designed to be 100% bug-for-bug compatible with America's top enterprise
+Linux distribution now that its downstream partner has shifted direction."
+
+https://rockylinux.org/
+
+https://en.wikipedia.org/wiki/Rocky_Linux#Releases
+
+
+Debian
+^^^^^^
+
+"Debian is a free operating system, developed and maintained by the Debian
+project. A free Linux distribution with thousands of applications to meet our
+users' needs."
+
+https://www.debian.org/intro/index#software
+
+https://www.debian.org/releases/stable/releasenotes
+
+Devuan
+^^^^^^
+
+"Devuan GNU+Linux is a fork of Debian without systemd that allows users to
+reclaim control over their system by avoiding unnecessary entanglements and
+ensuring Init Freedom."
+
+https://www.devuan.org
+
+Fedora
+^^^^^^
+
+"Fedora creates an innovative, free, and open source platform for hardware,
+clouds, and containers that enables software developers and community members
+to build tailored solutions for their users."
+
+https://getfedora.org
+
+https://fedoraproject.org/wiki/Releases
+
+Gentoo
+^^^^^^
+
+"a highly flexible, source-based Linux distribution."
+
+https://www.gentoo.org
+
+OpenSUSE
+^^^^^^^^
+
+"The makers' choice for sysadmins, developers and desktop users."
+
+https://www.opensuse.org
+
+https://get.opensuse.org/leap/
+
+Ubuntu
+^^^^^^
+
+"The world’s most popular Linux for desktop computing."
+
+https://docs.ubuntu.com/
+
+https://wiki.ubuntu.com/Releases
+
 [[pct_container_images]]
 Container Images
 
@@ -98,11 +211,11 @@ Container Images
 Container images, sometimes also referred to as ``templates'' or
 ``appliances'', are `tar` archives which contain everything to run a container.
 
-{pve} itself provides a variety of basic templates for the most common Linux
-distributions. They can be downloaded using the GUI or the `pveam` (short for
-{pve} Appliance Manager) command line utility.
-Additionally, https://www.turnkeylinux.org/[TurnKey Linux] container templates
-are also available to download.
+{pve} itself provides a variety of basic templates for the
+xref:pct_supported_distributions[most common Linux distributions].  They can be
+downloaded using the GUI or the `pveam` (short for {pve} Appliance Manager)
+command line utility.  Additionally, https://www.turnkeylinux.org/[TurnKey
+Linux] container templates are also available to download.
 
 The list of available templates is updated daily through the 'pve-daily-update'
 timer. You can also trigger an update manually by executing:
-- 
2.30.2





[pve-devel] applied: [PATCH v4 storage 1/1] fix #3580: plugins: make preallocation mode selectable for qcow2 and raw images

2021-10-14 Thread Thomas Lamprecht
On 12.10.21 14:32, Lorenz Stechauner wrote:
> the plugins for file based storages
>  * BTRFS
>  * CIFS
>  * Dir
>  * Glusterfs
>  * NFS
> now allow the option 'preallocation'.
> 
> 'preallocation' can have five values:
>  * default
>  * off
>  * metadata
>  * falloc
>  * full
> see man pages for `qemu-img` for what these mean exactly. [0]
> 
> the default value was chosen to be
>  * qcow2: metadata (as previously)
>  * raw: off
> 
> when using 'metadata' as preallocation mode, for raw images 'off'
> is used.
> 
> [0] 
> https://qemu.readthedocs.io/en/latest/system/images.html#disk-image-file-formats
> 
> Signed-off-by: Lorenz Stechauner 
> Reviewed-by: Fabian Ebner 
> Tested-by: Fabian Ebner 
> ---
>  PVE/Storage/BTRFSPlugin.pm |  1 +
>  PVE/Storage/CIFSPlugin.pm  |  1 +
>  PVE/Storage/DirPlugin.pm   |  1 +
>  PVE/Storage/GlusterfsPlugin.pm |  4 ++-
>  PVE/Storage/NFSPlugin.pm   |  1 +
>  PVE/Storage/Plugin.pm  | 48 +-
>  6 files changed, 54 insertions(+), 2 deletions(-)
> 
>
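For illustration, the default-selection rule described in the quoted commit message can be sketched as follows (a hedged Python rendering of the stated behaviour; helper name and table are illustrative, not the Plugin.pm code):

```python
# Defaults per image format as stated in the commit message:
# qcow2 -> metadata (as previously), raw -> off.
FORMAT_DEFAULTS = {"qcow2": "metadata", "raw": "off"}

def resolve_preallocation(fmt: str, configured: str = "default") -> str:
    """Pick the effective qemu-img preallocation mode for a format."""
    if configured == "default":
        return FORMAT_DEFAULTS[fmt]
    # 'metadata' is meaningless for raw images, so fall back to 'off'
    if fmt == "raw" and configured == "metadata":
        return "off"
    return configured
```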

applied, thanks!





[pve-devel] [PATCH qemu-server 1/2] snapshot: fix tpmstate with rbd

2021-10-14 Thread Stefan Reiter
QEMU doesn't know about the tpmstate, so 'do_snapshots_with_qemu' should
never return true in that case. Note that inconsistencies related to
snapshot timing do not matter much, as the actual TPM data is exported
together with other device state by QEMU anyway.

Signed-off-by: Stefan Reiter 
---

As reported in the forum: 
https://forum.proxmox.com/threads/vtpm-support-do-we-have-guide-to-add-the-vtpm-support.56982/post-423381

 PVE/QemuServer.pm | 8 +---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 6fc28e8..d5dfdbb 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -4542,7 +4542,7 @@ sub qemu_volume_snapshot {
 
 my $running = check_running($vmid);
 
-if ($running && do_snapshots_with_qemu($storecfg, $volid)){
+if ($running && do_snapshots_with_qemu($storecfg, $volid, $deviceid)) {
 	mon_cmd($vmid, 'blockdev-snapshot-internal-sync', device => $deviceid, name => $snap);
 } else {
PVE::Storage::volume_snapshot($storecfg, $volid, $snap);
@@ -4564,7 +4564,7 @@ sub qemu_volume_snapshot_delete {
});
 }
 
-if ($running && do_snapshots_with_qemu($storecfg, $volid)){
+if ($running && do_snapshots_with_qemu($storecfg, $volid, $deviceid)) {
 	mon_cmd($vmid, 'blockdev-snapshot-delete-internal-sync', device => $deviceid, name => $snap);
 } else {
 	PVE::Storage::volume_snapshot_delete($storecfg, $volid, $snap, $running);
@@ -7017,7 +7017,9 @@ my $qemu_snap_storage = {
 rbd => 1,
 };
 sub do_snapshots_with_qemu {
-my ($storecfg, $volid) = @_;
+my ($storecfg, $volid, $deviceid) = @_;
+
+return if $deviceid =~ m/tpmstate0/;
 
 my $storage_name = PVE::Storage::parse_volume_id($volid);
 my $scfg = $storecfg->{ids}->{$storage_name};
-- 
2.30.2






[pve-devel] [PATCH v3 container 1/2] api: clone_vm: don't include snapshot properties

2021-10-14 Thread Oguz Bektas
apparently this caused a weird[0] bug... when a container with a snapshot was
cloned, it would take 'parent: foo' from the original container. if you
add a new snapshot 'bar', and then another one 'foo', this causes the
snapshots to become parents of each other (thus not parsed correctly in
the tree view of GUI nor with 'pct listsnapshot CTID')

we also drop these properties for VMs, so it makes sense to do the same
here as well.

[0]: https://forum.proxmox.com/threads/snapshots-of-one-lxc-disappeared.97711/
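The fix amounts to stripping all snapshot-related keys from the cloned config. A minimal Python sketch of that idea (key names taken from the patch; the function name and dict representation are illustrative, not the LXC.pm code):

```python
# Snapshot-related top-level properties that must not survive a clone,
# as deleted by the patch below
SNAPSHOT_KEYS = ("snapshots", "parent", "snaptime", "snapstate")

def sanitize_clone_config(newconf: dict) -> dict:
    """Drop snapshot state so the clone can't inherit a dangling
    'parent' reference from the source container."""
    for key in SNAPSHOT_KEYS:
        newconf.pop(key, None)  # tolerate absent keys
    return newconf
```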

Signed-off-by: Oguz Bektas 
---
v2->v3:
* no changes


 src/PVE/API2/LXC.pm | 5 +
 1 file changed, 5 insertions(+)

diff --git a/src/PVE/API2/LXC.pm b/src/PVE/API2/LXC.pm
index 1f2f1f0..05cfbad 100644
--- a/src/PVE/API2/LXC.pm
+++ b/src/PVE/API2/LXC.pm
@@ -1512,7 +1512,12 @@ __PACKAGE__->register_method({
# Replace the 'disk' lock with a 'create' lock.
$newconf->{lock} = 'create';
 
+   # delete all snapshot related config options
delete $newconf->{snapshots};
+   delete $newconf->{parent};
+   delete $newconf->{snaptime};
+   delete $newconf->{snapstate};
+
delete $newconf->{pending};
delete $newconf->{template};
if ($param->{hostname}) {
-- 
2.30.2






[pve-devel] [PATCH v3 guest-common 2/2] snapshots: delete parent property if new snapshot name is already a parent to existing one

2021-10-14 Thread Oguz Bektas
Signed-off-by: Oguz Bektas 
---
v2->v3:
* automatically delete the 'parent' property for an existing snapshot
(instead of aborting) if it's the same as the new snapshot name (and the
snapshot referenced by 'parent' is not used)
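The v3 "automatically fix it up" behaviour boils down to: before creating snapshot `snapname`, drop any existing snapshot's 'parent' property that already (and thus invalidly) points at that name. A Python sketch of just that step (illustrative, not the AbstractConfig.pm code):

```python
def drop_dangling_parents(snapshots: dict, snapname: str) -> None:
    """Remove 'parent' properties that reference the not-yet-existing
    snapshot `snapname`; they can only be left over from a bad clone."""
    for snap in snapshots.values():
        if snap.get("parent") == snapname:
            del snap["parent"]
```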



 src/PVE/AbstractConfig.pm | 11 +--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/src/PVE/AbstractConfig.pm b/src/PVE/AbstractConfig.pm
index 3348d8a..e4ddeab 100644
--- a/src/PVE/AbstractConfig.pm
+++ b/src/PVE/AbstractConfig.pm
@@ -721,14 +721,21 @@ sub __snapshot_prepare {
 
$conf->{lock} = 'snapshot';
 
+   my $snapshots = $conf->{snapshots};
+
die "snapshot name '$snapname' already used\n"
-   if defined($conf->{snapshots}->{$snapname});
+   if defined($snapshots->{$snapname});
 
my $storecfg = PVE::Storage::config();
die "snapshot feature is not available\n"
	    if !$class->has_feature('snapshot', $conf, $storecfg, undef, undef, $snapname eq 'vzdump');
 
-   $snap = $conf->{snapshots}->{$snapname} = {};
+   foreach my $existing_snapshot (keys %$snapshots) {
+   my $parent_name = $snapshots->{$existing_snapshot}->{parent} // '';
+	    delete $snapshots->{$existing_snapshot}->{parent} if $snapname eq $parent_name;
+   }
+
+   $snap = $snapshots->{$snapname} = {};
 
if ($save_vmstate && $class->__snapshot_check_running($vmid)) {
$class->__snapshot_save_vmstate($vmid, $conf, $snapname, $storecfg);
-- 
2.30.2






[pve-devel] [PATCH qemu-server 2/2] swtpm: wait for pidfile

2021-10-14 Thread Stefan Reiter
swtpm may take a little bit to daemonize, so the pidfile might not be
available right after run_command. Causes an ugly warning about using an
undefined value in a match, so wait up to 5s for it to appear.

Note that in testing this loop only ever got to the first or second
iteration, so I believe the timeout duration should be more than enough.
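The pattern is the usual poll-until-file-exists loop: 100 tries at 50 ms gives the ~5 s budget mentioned above. A generic Python sketch of it (paths and timings illustrative, not the swtpm specifics):

```python
import os
import time

def wait_for_pidfile(path: str, tries: int = 100, interval: float = 0.05) -> None:
    """Poll until `path` exists, dying after tries * interval seconds."""
    while not os.path.exists(path):
        time.sleep(interval)
        tries -= 1
        if tries == 0:
            raise RuntimeError(f"pid file '{path}' wasn't created")
```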

Also add a missing 'usleep' import: 'usleep' was used before but never
imported; apparently the other code path never got triggered...

Signed-off-by: Stefan Reiter 
---

Triggered during rollback testing, potentially due to extra load. Didn't cause
any failure (as long as the VM starts fine it's all good), but looks ugly in the
log and may cause lingering instances if QEMU fails afterward.

 PVE/QemuServer.pm | 10 +-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index d5dfdbb..7a7cdb0 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -22,7 +22,7 @@ use JSON;
 use MIME::Base64;
 use POSIX;
 use Storable qw(dclone);
-use Time::HiRes qw(gettimeofday);
+use Time::HiRes qw(gettimeofday usleep);
 use URI::Escape;
 use UUID;
 
@@ -3059,6 +3059,14 @@ sub start_swtpm {
 push @$emulator_cmd, "--tpm2" if $tpm->{version} eq 'v2.0';
 run_command($emulator_cmd, outfunc => sub { print $1; });
 
+# swtpm may take a bit to start before daemonizing, wait up to 5s for pid
+my $tries = 100;
+while (! -e $paths->{pid}) {
+    usleep(50_000);
+    die "failed to start swtpm: pid file '$paths->{pid}' wasn't created.\n"
+	if --$tries == 0;
+}
+
 # return untainted PID of swtpm daemon so it can be killed on error
 file_read_firstline($paths->{pid}) =~ m/(\d+)/;
 return $1;
-- 
2.30.2






[pve-devel] [PATCH v3 container guest-common 0/2] snapshot parent checks for containers

2021-10-14 Thread Oguz Bektas
v2->v3:
* automatically delete the 'parent' property for an existing snapshot
(instead of aborting) if it's the same as the new snapshot name (and the
snapshot referenced by 'parent' is not used)

pve-container:
Oguz Bektas (1):
  api: clone_vm: don't include snapshot properties

 src/PVE/API2/LXC.pm | 5 +
 1 file changed, 5 insertions(+)

pve-guest-common:
Oguz Bektas (1):
  snapshots: delete parent property if new snapshot name is already a
parent to existing one

 src/PVE/AbstractConfig.pm | 11 +--
 1 file changed, 9 insertions(+), 2 deletions(-)

-- 
2.30.2






[pve-devel] applied: [PATCH v2 container 1/2] api: clone_vm: don't include snapshot properties

2021-10-14 Thread Thomas Lamprecht
On 13.10.21 14:31, Oguz Bektas wrote:
> apparently this caused a weird[0] bug... when a container with a snapshot was
> cloned, it would take 'parent: foo' from the original container. if you
> add a new snapshot 'bar', and then another one 'foo', this causes the
> snapshots to become parents of each other (thus not parsed correctly in
> the tree view of GUI nor with 'pct listsnapshot CTID')
> 
> we also drop these properties for VMs, so it makes sense to do the same
> here as well.
> 
> [0]: https://forum.proxmox.com/threads/snapshots-of-one-lxc-disappeared.97711/
> 
> Signed-off-by: Oguz Bektas 
> ---
>  src/PVE/API2/LXC.pm | 5 +
>  1 file changed, 5 insertions(+)
> 
>

applied this one already, thanks!





[pve-devel] applied: [PATCH/RFC v3 pve-container 1/1] restore: sanitize config: use new warn() function

2021-10-14 Thread Thomas Lamprecht
On 08.07.21 11:14, Fabian Ebner wrote:
> to make it more visible that the task finished with warnings.
> 
> Signed-off-by: Fabian Ebner 
> ---
> 
> Dependency bump for pve-common needed.
> 
> No changes from v2.
> 
>  src/PVE/LXC/Create.pm | 15 +++
>  1 file changed, 11 insertions(+), 4 deletions(-)
> 
>

applied, thanks!

