[pve-devel] [PATCH v2 guest-common 1/2] partially fix 3111: snapshot rollback: improve removing replication snapshots

2021-06-09 Thread Fabian Ebner
Get the replicatable volumes from the snapshot config rather than the current
config, and filter those volumes further to those that will actually be rolled
back.

Previously, a volume that only had replication snapshots (e.g. because it was
added after the snapshot was taken, or the vmstate volume) would lose them.
Then, on the next replication run, such a volume would lead to an error, because
replication would try to do a full sync while the target volume still exists.

Should be enough for most real-world scenarios, but not a complete fix:
It is still possible to run into the problem by removing the last
(non-replication) snapshot after a rollback, before replication can run once.

The list of volumes is not required to be sorted for prepare(), but it now follows
the order in which foreach_volume() iterates, so it is not random.

Signed-off-by: Fabian Ebner 
---

Changes from v1:
* dropped already applied patch
* rebased on top of the new filename (since a src/ prefix was added)
* add another comment mentioning why the additional filtering is necessary

 src/PVE/AbstractConfig.pm | 18 ++
 1 file changed, 14 insertions(+), 4 deletions(-)

diff --git a/src/PVE/AbstractConfig.pm b/src/PVE/AbstractConfig.pm
index 3348d8a..6542ae4 100644
--- a/src/PVE/AbstractConfig.pm
+++ b/src/PVE/AbstractConfig.pm
@@ -974,13 +974,23 @@ sub snapshot_rollback {
if ($prepare) {
my $repl_conf = PVE::ReplicationConfig->new();
if ($repl_conf->check_for_existing_jobs($vmid, 1)) {
-   # remove all replication snapshots
-   my $volumes = $class->get_replicatable_volumes($storecfg, $vmid, $conf, 1);
-   my $sorted_volids = [ sort keys %$volumes ];
+   # remove replication snapshots on volumes affected by rollback *only*!
+   my $volumes = $class->get_replicatable_volumes($storecfg, $vmid, $snap, 1);
+
+   # filter by what we actually iterate over below (excludes vmstate!)
+   my $volids = [];
+   $class->foreach_volume($snap, sub {
+   my ($vs, $volume) = @_;
+
+   my $volid_key = $class->volid_key();
+   my $volid = $volume->{$volid_key};
+
+   push @{$volids}, $volid if $volumes->{$volid};
+   });
 
# remove all local replication snapshots (jobid => undef)
my $logfunc = sub { my $line = shift; chomp $line; print "$line\n"; };
-   PVE::Replication::prepare($storecfg, $sorted_volids, undef, 1, undef, $logfunc);
+   PVE::Replication::prepare($storecfg, $volids, undef, 1, undef, $logfunc);
}
 
$class->foreach_volume($snap, sub {
-- 
2.30.2






[pve-devel] [PATCH v2 guest-common 2/2] fix 3111: replicate guest on rollback if there are replication jobs for it

2021-06-09 Thread Fabian Ebner
so that there will be a valid replication snapshot again.

Otherwise, replication will be broken after a rollback if the last
(non-replication) snapshot is removed before replication can run again.

Signed-off-by: Fabian Ebner 
---

No changes from v1 (except rebase).

Not a huge fan of this, but the alternatives I could come up with don't seem
much better IMHO:

1. Invalidate/remove replicated volumes after a rollback altogether and require
a full sync on the next replication job afterwards.

2. Another one is to disallow removing the last non-replication snapshot if:
* there is a replication job configured
* no replication snapshot for that job currently exists
  (which likely means it was removed by a previous rollback operation, but
  can also happen for a new job that didn't run yet).

3. Hope not very many people immediately delete their snapshots after rollback.

Pick a favorite or suggest your own ;)

 src/PVE/AbstractConfig.pm| 19 +--
 src/PVE/ReplicationConfig.pm | 14 ++
 2 files changed, 31 insertions(+), 2 deletions(-)

diff --git a/src/PVE/AbstractConfig.pm b/src/PVE/AbstractConfig.pm
index 6542ae4..6cc0537 100644
--- a/src/PVE/AbstractConfig.pm
+++ b/src/PVE/AbstractConfig.pm
@@ -951,6 +951,9 @@ sub snapshot_rollback {
 
 my $storecfg = PVE::Storage::config();
 
+my $repl_conf = PVE::ReplicationConfig->new();
+my $logfunc = sub { my $line = shift; chomp $line; print "$line\n"; };
+
 my $data = {};
 
 my $get_snapshot_config = sub {
@@ -972,7 +975,6 @@ sub snapshot_rollback {
$snap = $get_snapshot_config->($conf);
 
if ($prepare) {
-   my $repl_conf = PVE::ReplicationConfig->new();
if ($repl_conf->check_for_existing_jobs($vmid, 1)) {
# remove replication snapshots on volumes affected by rollback *only*!
my $volumes = $class->get_replicatable_volumes($storecfg, $vmid, $snap, 1);
@@ -989,7 +991,6 @@ sub snapshot_rollback {
});
 
# remove all local replication snapshots (jobid => undef)
-   my $logfunc = sub { my $line = shift; chomp $line; print "$line\n"; };
PVE::Replication::prepare($storecfg, $volids, undef, 1, undef, $logfunc);
}
 
@@ -1047,6 +1048,20 @@ sub snapshot_rollback {
 
 $prepare = 0;
 $class->lock_config($vmid, $updatefn);
+
+my $replication_jobs = $repl_conf->list_guests_replication_jobs($vmid);
+for my $job (@{$replication_jobs}) {
+   my $target = $job->{target};
+   $logfunc->("replicating rolled back guest to node '$target'");
+
+   my $start_time = time();
+   eval {
+   PVE::Replication::run_replication($class, $job, $start_time, $start_time, $logfunc);
+   };
+   if (my $err = $@) {
+   warn "unable to replicate rolled back guest to node '$target' - $err";
+   }
+}
 }
 
 # bash completion helper
diff --git a/src/PVE/ReplicationConfig.pm b/src/PVE/ReplicationConfig.pm
index fd856a0..84a718f 100644
--- a/src/PVE/ReplicationConfig.pm
+++ b/src/PVE/ReplicationConfig.pm
@@ -228,6 +228,20 @@ sub find_local_replication_job {
 return undef;
 }
 
+sub list_guests_replication_jobs {
+my ($cfg, $vmid) = @_;
+
+my $jobs = [];
+
+for my $job (values %{$cfg->{ids}}) {
+   next if $job->{type} ne 'local' || $job->{guest} != $vmid;
+
+   push @{$jobs}, $job;
+}
+
+return $jobs;
+}
+
 # makes old_target the new source for all local jobs of this guest
 # makes new_target the target for the single local job with target old_target
 sub switch_replication_job_target_nolock {
-- 
2.30.2






[pve-devel] applied-partially: [PATCH-SERIES] Some breaking API changes/cleanups

2021-06-09 Thread Fabian Grünbichler
applied (with some added version bumps/versioned dependency bumps), 
except for the two storage patches where I replied with nits.

please add stuff to breaking changes and send patches for pve6to7 where 
appropriate!

On June 4, 2021 3:49 pm, Fabian Ebner wrote:
> 
> Of course all of the changes need to be recorded in the release notes.
> Should I do so now or wait until the changes are actually applied?
> 
> Dependencies are noted on individual patches, most should apply independently.
> 
> There's still quite a few outstanding, but I feel like somebody with more
> experience/context would be better suited to address them. I can go for 
> another
> round though if there are not enough volunteers ;)
> 
> 
> pve-access-control:
> src/PVE/Auth/AD.pm
> 30: description => "Use secure LDAPS protocol. DEPRECATED: use 'mode' 
> instead.",
> src/PVE/Auth/LDAP.pm
> 390:# FIXME: remove fallback with 7.0 by doing a rename on upgrade from 
> 6.x
> 
> src/PVE/AccessControl.pm
> 299:# FIXME: remove with PVE 7 and/or refactor all into PVE::Ticket ?
> 
> pve-common:
> src/PVE/Ticket.pm
> 38: # detected sha1 csrf token from older proxy, fallback. FIXME: 
> remove with 7.0
> 
> pve-cluster:
> quite a few // FIXME: remove openvz stuff for 7.x
> 
> pve-storage:
> PVE/Storage/CephFSPlugin.pm
> 41:# FIXME: remove in PVE 7.0 where systemd is recent enough to not have those
> # local-fs/remote-fs dependency cycles generated for _netdev mounts...
> 
> 
> Would be glad if somebody experienced in those areas could look at them.
> 
> 
> 
> data/PVE/DataCenterConfig.pm has a few workarounds for deprecated options
> 
> pve-container:
> src/PVE/LXC/Config.pm
> 652:# Deprecated (removed with lxc 3.0):
> 
> src/PVE/LXC.pm
> 2178:   # FIXME: remove the 'id_map' variant when lxc-3.0 arrives
> 
> 
> There might still be lxc/datacenter configs with the old options around?
> Do we want to warn/replace as part of the pve6to7 script or just keep
> the compat code longer?
> 
> 
> 
> pve-manager:
> PVE/Ceph/Services.pm
> 54: # FIXME: remove with 7.0 - for backward compat only
> 
> PVE/API2/Cluster/Ceph.pm
> 78: # FIXME: remove with 7.0 depreacated by structured 'versions'
> 
> 
> Front-end still expects the metadata.version to be present. Might not be hard 
> to
> adapt, but I didn't have the context to immediately know what's the right 
> thing
> to do. Can look at it if nobody else wants to.
> 
> 
> 
> pve-storage:
> PVE/Storage.pm
> 966:# FIXME PVE 7.0: only scan storages with the correct content types
> 
> 
> From an off-list discussion with Fabian G: We currently still need this 
> behavior
> to pick up disks from storages with a misconfigured content types. Meaning 
> that
> we need to require content type 'images' to be set first and warn people with
> misconfigured storages as part of pve6to7. We could even go further and not 
> pick
> up unreferenced disks at all, but should expose rescan in the UI and make a
> rescan as part of pve6to7. I will look at this in a separate series.
> 
> 
> 
> pve-guest-common:
> 
> Fabian Ebner (2):
>   vzdump: remove deprecated size parameter
>   vzdump: defaults: keep all backups by default for 7.0
> 
>  src/PVE/VZDump/Common.pm | 17 +
>  1 file changed, 5 insertions(+), 12 deletions(-)
> 
> 
> pve-container:
> 
> Fabian Ebner (1):
>   migrate: remove deprecated force parameter
> 
>  src/PVE/API2/LXC.pm| 6 --
>  src/PVE/LXC/Migrate.pm | 8 +---
>  2 files changed, 1 insertion(+), 13 deletions(-)
> 
> 
> qemu-server:
> 
> Fabian Ebner (3):
>   Revert "revert spice_ticket prefix change in 7827de4"
>   scan volids: remove superfluous parameter
>   vm destroy: do not remove unreferenced disks by default
> 
>  PVE/API2/Qemu.pm   | 5 ++---
>  PVE/QemuMigrate.pm | 5 ++---
>  PVE/QemuServer.pm  | 7 +++
>  3 files changed, 7 insertions(+), 10 deletions(-)
> 
> 
> pve-storage:
> 
> Fabian Ebner (3):
>   postinst: move cifs credential files into subdirectory upon update
>   update reminder to remove maxfiles
>   api: get rid of moved 'usb' call
> 
>  PVE/API2/Storage/Scan.pm  | 45 ---
>  PVE/Storage.pm|  2 +-
>  PVE/Storage/CIFSPlugin.pm |  3 ---
>  debian/postinst   | 30 ++
>  4 files changed, 31 insertions(+), 49 deletions(-)
>  create mode 100644 debian/postinst
> 
> 
> novnc-pve:
> 
> Fabian Ebner (1):
>   avoid passing deprecated 'upgrade' parameter
> 
>  ...passing-deprecated-upgrade-parameter.patch | 24 +++
>  debian/patches/series |  1 +
>  2 files changed, 25 insertions(+)
>  create mode 100644 
> debian/patches/0014-avoid-passing-deprecated-upgrade-parameter.patch
> 
> pve-manager:
> 
> Fabian Ebner (12):
>   vzdump: remove deprecated size parameter
>   configs: vzdump: use prune-backups instead of the deprecated maxfiles
>   test: vzdump: adapt to new default
>   Revert "VZDump: add TARFILE to environment for hook

[pve-devel] [PATCH storage] api: status: fix unlink on file upload

2021-06-09 Thread Lorenz Stechauner
after an error while copying the file to its destination, the local path of the
destination was unlinked in every case, even when the destination was copied to
via scp.

Signed-off-by: Lorenz Stechauner 
---
 PVE/API2/Storage/Status.pm | 5 -
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/PVE/API2/Storage/Status.pm b/PVE/API2/Storage/Status.pm
index 897b4a7..bcc7722 100644
--- a/PVE/API2/Storage/Status.pm
+++ b/PVE/API2/Storage/Status.pm
@@ -446,6 +446,7 @@ __PACKAGE__->register_method ({
# we simply overwrite the destination file if it already exists
 
my $cmd;
+   my $err_cmd;
if ($node ne 'localhost' && $node ne PVE::INotify::nodename()) {
my $remip = PVE::Cluster::remote_node_ip($node);
 
@@ -464,10 +465,12 @@ __PACKAGE__->register_method ({
errmsg => "mkdir failed");
  
$cmd = ['/usr/bin/scp', @ssh_options, '-p', '--', $tmpfilename, "[$remip]:" . PVE::Tools::shell_quote($dest)];
+   $err_cmd = [@remcmd, '/bin/unlink', $dest];
} else {
PVE::Storage::activate_storage($cfg, $param->{storage});
File::Path::make_path($dirname);
$cmd = ['cp', '--', $tmpfilename, $dest];
+   $err_cmd = ['/bin/unlink', $dest];
}
 
my $worker = sub {
@@ -481,7 +484,7 @@ __PACKAGE__->register_method ({
 
eval { PVE::Tools::run_command($cmd, errmsg => 'import failed'); };
if (my $err = $@) {
-   unlink $dest;
+   eval { PVE::Tools::run_command($err_cmd); };
die $err;
}
print "finished file import successfully\n";
-- 
2.20.1






Re: [pve-devel] applied-partially: [PATCH-SERIES] Some breaking API changes/cleanups

2021-06-09 Thread Fabian Ebner

Am 09.06.21 um 11:44 schrieb Fabian Grünbichler:

applied (with some added version bumps/versioned dependency bumps),
except for the two storage patches where I replied with nits.

please add stuff to breaking changes and send patches for pve6to7 where
appropriate!



I added entries for the breaking changes in the intranet wiki.

There's not really much for pve6to7, we could:

1. Warn when there's a backup storage without retention settings on a node that 
also has no retention settings configured, which could lead to a storage 
unexpectedly filling up because of the new default (rough sketch below).


2. Warn if there are CIFS credentials in the old location already pre-upgrade.

Should I go for these, or what did you have in mind?
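
For (1), a rough sketch of such a check (sketch only: the sub name is made up, 
plain warn() is used instead of pve6to7's log helpers, and node-level 
vzdump.conf defaults are ignored here):

    use strict;
    use warnings;

    use PVE::Storage;

    # warn about storages that can hold backups but have no retention
    # configured, since the new vzdump default keeps all backups
    sub check_backup_retention_settings {
        my $cfg = PVE::Storage::config();

        for my $storeid (sort keys %{$cfg->{ids}}) {
            my $scfg = $cfg->{ids}->{$storeid};
            next if !$scfg->{content}->{backup};
            next if defined($scfg->{'prune-backups'}) || defined($scfg->{maxfiles});

            warn "storage '$storeid' can hold backups, but has no retention settings" .
                " - with the new default, old backups are kept indefinitely\n";
        }
    }

    check_backup_retention_settings();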


On June 4, 2021 3:49 pm, Fabian Ebner wrote:


Of course all of the changes need to be recorded in the release notes.
Should I do so now or wait until the changes are actually applied?

Dependencies are noted on individual patches, most should apply independently.

There's still quite a few outstanding, but I feel like somebody with more
experience/context would be better suited to address them. I can go for another
round though if there are not enough volunteers ;)


pve-access-control:
src/PVE/Auth/AD.pm
30: description => "Use secure LDAPS protocol. DEPRECATED: use 'mode' 
instead.",
src/PVE/Auth/LDAP.pm
390:# FIXME: remove fallback with 7.0 by doing a rename on upgrade from 6.x

src/PVE/AccessControl.pm
299:# FIXME: remove with PVE 7 and/or refactor all into PVE::Ticket ?

pve-common:
src/PVE/Ticket.pm
38: # detected sha1 csrf token from older proxy, fallback. FIXME: 
remove with 7.0

pve-cluster:
quite a few // FIXME: remove openvz stuff for 7.x

pve-storage:
PVE/Storage/CephFSPlugin.pm
41:# FIXME: remove in PVE 7.0 where systemd is recent enough to not have those
# local-fs/remote-fs dependency cycles generated for _netdev mounts...


Would be glad if somebody experienced in those areas could look at them.



data/PVE/DataCenterConfig.pm has a few workarounds for deprecated options

pve-container:
src/PVE/LXC/Config.pm
652:# Deprecated (removed with lxc 3.0):

src/PVE/LXC.pm
2178:   # FIXME: remove the 'id_map' variant when lxc-3.0 arrives


There might still be lxc/datacenter configs with the old options around?
Do we want to warn/replace as part of the pve6to7 script or just keep
the compat code longer?



pve-manager:
PVE/Ceph/Services.pm
54: # FIXME: remove with 7.0 - for backward compat only

PVE/API2/Cluster/Ceph.pm
78: # FIXME: remove with 7.0 depreacated by structured 'versions'


Front-end still expects the metadata.version to be present. Might not be hard to
adapt, but I didn't have the context to immediately know what's the right thing
to do. Can look at it if nobody else wants to.



pve-storage:
PVE/Storage.pm
966:# FIXME PVE 7.0: only scan storages with the correct content types


 From an off-list discussion with Fabian G: We currently still need this 
behavior
to pick up disks from storages with a misconfigured content types. Meaning that
we need to require content type 'images' to be set first and warn people with
misconfigured storages as part of pve6to7. We could even go further and not pick
up unreferenced disks at all, but should expose rescan in the UI and make a
rescan as part of pve6to7. I will look at this in a separate series.



pve-guest-common:

Fabian Ebner (2):
   vzdump: remove deprecated size parameter
   vzdump: defaults: keep all backups by default for 7.0

  src/PVE/VZDump/Common.pm | 17 +
  1 file changed, 5 insertions(+), 12 deletions(-)


pve-container:

Fabian Ebner (1):
   migrate: remove deprecated force parameter

  src/PVE/API2/LXC.pm| 6 --
  src/PVE/LXC/Migrate.pm | 8 +---
  2 files changed, 1 insertion(+), 13 deletions(-)


qemu-server:

Fabian Ebner (3):
   Revert "revert spice_ticket prefix change in 7827de4"
   scan volids: remove superfluous parameter
   vm destroy: do not remove unreferenced disks by default

  PVE/API2/Qemu.pm   | 5 ++---
  PVE/QemuMigrate.pm | 5 ++---
  PVE/QemuServer.pm  | 7 +++
  3 files changed, 7 insertions(+), 10 deletions(-)


pve-storage:

Fabian Ebner (3):
   postinst: move cifs credential files into subdirectory upon update
   update reminder to remove maxfiles
   api: get rid of moved 'usb' call

  PVE/API2/Storage/Scan.pm  | 45 ---
  PVE/Storage.pm|  2 +-
  PVE/Storage/CIFSPlugin.pm |  3 ---
  debian/postinst   | 30 ++
  4 files changed, 31 insertions(+), 49 deletions(-)
  create mode 100644 debian/postinst


novnc-pve:

Fabian Ebner (1):
   avoid passing deprecated 'upgrade' parameter

  ...passing-deprecated-upgrade-parameter.patch | 24 +++
  debian/patches/series |  1 +
  2 files changed, 25 insertions(+)
  create mode 100644 
debian/patches/0014-avoid-passing-deprecated-upgrade-parameter.patch

pve-manager:

Fabian Ebner (12):
   vzd

[pve-devel] [PATCH pve-eslint] set cwd of CLIEngine to process.cwd()

2021-06-09 Thread Dominik Csapak
it seems it now defaults to '/' as the current working dir, but we
assume the cwd of the process

Signed-off-by: Dominik Csapak 
---
 src/app.js | 1 +
 1 file changed, 1 insertion(+)

diff --git a/src/app.js b/src/app.js
index 020d552..9226234 100644
--- a/src/app.js
+++ b/src/app.js
@@ -284,6 +284,7 @@ const cli = new eslint.CLIEngine({
 baseConfig: config,
 useEslintrc: true,
 fix: !!program.fix,
+cwd: process.cwd(),
 });
 
 const report = cli.executeOnFiles(paths);
-- 
2.20.1






[pve-devel] [PATCH v3 qemu-server 2/7] cloudinit: generate cloudinit drive on offline plug

2021-06-09 Thread Alexandre Derumier
Currently we only generate it at VM start.

Signed-off-by: Alexandre Derumier 
---
 PVE/QemuServer.pm | 10 ++
 1 file changed, 10 insertions(+)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 25ac052..6ddac72 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -4776,6 +4776,8 @@ sub vmconfig_apply_pending {
 
 PVE::QemuConfig->cleanup_pending($conf);
 
+my $generate_cloudnit = undef;
+
 foreach my $opt (keys %{$conf->{pending}}) { # add/change
next if $opt eq 'delete'; # just to be sure
eval {
@@ -4783,6 +4785,12 @@ sub vmconfig_apply_pending {
vmconfig_register_unused_drive($storecfg, $vmid, $conf, 
parse_drive($opt, $conf->{$opt}))
}
};
+
+   if (is_valid_drivename($opt)) {
+   my $drive = parse_drive($opt, $conf->{pending}->{$opt});
+   $generate_cloudnit = 1 if drive_is_cloudinit($drive);
+   }
+
if (my $err = $@) {
$add_apply_error->($opt, $err);
} else {
@@ -4792,6 +4800,8 @@ sub vmconfig_apply_pending {
 
 # write all changes at once to avoid unnecessary i/o
 PVE::QemuConfig->write_config($vmid, $conf);
+
+PVE::QemuServer::Cloudinit::generate_cloudinitconfig($conf, $vmid) if 
$generate_cloudnit;
 }
 
 sub vmconfig_update_net {
-- 
2.20.1





[pve-devel] [PATCH v3 qemu-server 3/7] cloudinit: make cloudinit options fastplug

2021-06-09 Thread Alexandre Derumier
Signed-off-by: Alexandre Derumier 
---
 PVE/QemuServer.pm | 30 +++---
 1 file changed, 3 insertions(+), 27 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 6ddac72..0be4c45 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -4530,9 +4530,10 @@ sub vmconfig_hotplug_pending {
$errors->{$opt} = "hotplug problem - $msg";
 };
 
+my @cloudinit_opts = keys %$confdesc_cloudinit;
 my $changes = 0;
 foreach my $opt (keys %{$conf->{pending}}) { # add/change
-   if ($fast_plug_option->{$opt}) {
+   if ($fast_plug_option->{$opt} || grep { $_ eq $opt } @cloudinit_opts) {
$conf->{$opt} = $conf->{pending}->{$opt};
delete $conf->{pending}->{$opt};
$changes = 1;
@@ -4605,31 +4606,6 @@ sub vmconfig_hotplug_pending {
}
 }
 
-my ($apply_pending_cloudinit, $apply_pending_cloudinit_done);
-$apply_pending_cloudinit = sub {
-   return if $apply_pending_cloudinit_done; # once is enough
-   $apply_pending_cloudinit_done = 1; # once is enough
-
-   my ($key, $value) = @_;
-
-   my @cloudinit_opts = keys %$confdesc_cloudinit;
-   foreach my $opt (keys %{$conf->{pending}}) {
-   next if !grep { $_ eq $opt } @cloudinit_opts;
-   $conf->{$opt} = delete $conf->{pending}->{$opt};
-   }
-
-   my $pending_delete_hash = 
PVE::QemuConfig->parse_pending_delete($conf->{pending}->{delete});
-   foreach my $opt (sort keys %$pending_delete_hash) {
-   next if !grep { $_ eq $opt } @cloudinit_opts;
-   PVE::QemuConfig->remove_from_pending_delete($conf, $opt);
-   delete $conf->{$opt};
-   }
-
-   my $new_conf = { %$conf };
-   $new_conf->{$key} = $value;
-   PVE::QemuServer::Cloudinit::generate_cloudinitconfig($new_conf, $vmid);
-};
-
 foreach my $opt (keys %{$conf->{pending}}) {
next if $selection && !$selection->{$opt};
my $value = $conf->{pending}->{$opt};
@@ -4676,7 +4652,7 @@ sub vmconfig_hotplug_pending {
# some changes can be done without hotplug
my $drive = parse_drive($opt, $value);
if (drive_is_cloudinit($drive)) {
-   &$apply_pending_cloudinit($opt, $value);
+   PVE::QemuServer::Cloudinit::generate_cloudinitconfig($conf, 
$vmid);
}
vmconfig_update_disk($storecfg, $conf, 
$hotplug_features->{disk},
 $vmid, $opt, $value, $arch, $machine_type);
-- 
2.20.1





[pve-devel] [PATCH v3 qemu-server 5/7] cloudinit : add extract_cloudinit_config

2021-06-09 Thread Alexandre Derumier
Signed-off-by: Alexandre Derumier 
---
 PVE/API2/Qemu.pm|  1 +
 PVE/QemuServer/Cloudinit.pm | 47 -
 2 files changed, 47 insertions(+), 1 deletion(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index cc58744..8ac3ae3 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -1063,6 +1063,7 @@ __PACKAGE__->register_method({
 path => '{vmid}/cloudinit',
 method => 'GET',
 proxyto => 'node',
+protected => '1',
 description => "Get the cloudinit configuration with both current and 
pending values.",
 permissions => {
check => ['perm', '/vms/{vmid}', [ 'VM.Audit' ]],
diff --git a/PVE/QemuServer/Cloudinit.pm b/PVE/QemuServer/Cloudinit.pm
index 156e073..917511e 100644
--- a/PVE/QemuServer/Cloudinit.pm
+++ b/PVE/QemuServer/Cloudinit.pm
@@ -607,11 +607,56 @@ sub dump_cloudinit_config {
 }
 }
 
+sub extract_cloudinit_config {
+my ($conf, $vmid) = @_;
+
+my $current_drive = undef;
+PVE::QemuConfig->foreach_volume($conf, sub {
+   my ($ds, $drive) = @_;
+   $current_drive = $drive if PVE::QemuServer::drive_is_cloudinit($drive);
+
+});
+ 
+my $running = PVE::QemuServer::check_running($vmid);
+my $storecfg = PVE::Storage::config();
+my $iso_path = PVE::Storage::path($storecfg, $current_drive->{file});
+my ($storeid, $volname) = 
PVE::Storage::parse_volume_id($current_drive->{file}, 1);
+my $scfg = PVE::Storage::storage_config($storecfg, $storeid);
+my $format = PVE::QemuServer::qemu_img_format($scfg, $volname);
+my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
+$plugin->activate_volume($storeid, $scfg, $volname) if !$running;
+
+my $raw_cloudinit_config = undef;
+
+my $path = "/run/pve/cloudinit_current/$vmid";
+mkpath $path;
+my $parser = sub {
+   my $line = shift;
+   $raw_cloudinit_config .= "$line\n";
+};
+eval {
+   #qemu-img dd is really slower (5s) than convert (0.2s)
+run_command([
+['qemu-img', 'convert', '-f', 'raw', '-O', 'raw', "$iso_path", 
"$path/iso"]
+]);
+
+run_command([
+['isoinfo', '-i', "$path/iso", '-x', "/PROXMOX/VMCONF.\;1"]
+], outfunc => $parser);
+};
+rmtree($path);
+$plugin->deactivate_volume($storeid, $scfg, $volname) if !$running;
+
+my $cloudinit_config = 
PVE::QemuServer::parse_vm_config("/qemu-server/$vmid.conf", 
$raw_cloudinit_config);
+return $cloudinit_config;
+}
+
 sub get_pending_config {
 my ($conf, $vmid) = @_;
 
+
 my $newconf = { %{$conf} };
-my $cloudinit_current = $newconf->{cloudinit};
+my $cloudinit_current = extract_cloudinit_config($conf, $vmid);
 my @cloudinit_opts = keys 
%{PVE::QemuServer::cloudinit_config_properties()};
 push @cloudinit_opts, 'name';
 
-- 
2.20.1





[pve-devel] [PATCH v3 qemu-server 7/7] add cloudinit hotplug

2021-06-09 Thread Alexandre Derumier
This allows regenerating the config drive if pending values exist
when we change VM options.

Signed-off-by: Alexandre Derumier 
---
 PVE/QemuServer.pm | 15 ---
 1 file changed, 12 insertions(+), 3 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index ff5d473..80b81c7 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -288,7 +288,7 @@ my $confdesc = {
 hotplug => {
 optional => 1,
 type => 'string', format => 'pve-hotplug-features',
-description => "Selectively enable hotplug features. This is a comma 
separated list of hotplug features: 'network', 'disk', 'cpu', 'memory' and 
'usb'. Use '0' to disable hotplug completely. Value '1' is an alias for the 
default 'network,disk,usb'.",
+description => "Selectively enable hotplug features. This is a comma 
separated list of hotplug features: 'network', 'disk', 'cpu', 'memory', 'usb' 
and 'cloudinit'. Use '0' to disable hotplug completely. Value '1' is an alias 
for the default 'network,disk,usb'.",
 default => 'network,disk,usb',
 },
 reboot => {
@@ -1300,7 +1300,7 @@ sub parse_hotplug_features {
 $data = $confdesc->{hotplug}->{default} if $data eq '1';
 
 foreach my $feature (PVE::Tools::split_list($data)) {
-   if ($feature =~ m/^(network|disk|cpu|memory|usb)$/) {
+   if ($feature =~ m/^(network|disk|cpu|memory|usb|cloudinit)$/) {
$res->{$1} = 1;
} else {
die "invalid hotplug feature '$feature'\n";
@@ -4675,8 +4675,17 @@ sub vmconfig_hotplug_pending {
delete $conf->{pending}->{$opt};
}
 }
-
 PVE::QemuConfig->write_config($vmid, $conf);
+
+if($hotplug_features->{cloudinit}) {
+   my $pending = PVE::QemuServer::Cloudinit::get_pending_config($conf, 
$vmid);
+   my $regenerate = undef;
+   foreach my $opt (keys %{$pending}) {
+   my $item = $pending->{$opt};
+   $regenerate = 1 if defined($item->{delete}) or 
defined($item->{pending});
+   }
+   PVE::QemuServer::vmconfig_update_cloudinit_drive($storecfg, $conf, 
$vmid) if $regenerate;
+}
 }
 
 sub try_deallocate_drive {
-- 
2.20.1





[pve-devel] [PATCH v3 qemu-server 4/7] api2: add cloudinit config api

2021-06-09 Thread Alexandre Derumier
Signed-off-by: Alexandre Derumier 
---
 PVE/API2/Qemu.pm| 73 +
 PVE/CLI/qm.pm   |  1 +
 PVE/QemuServer/Cloudinit.pm | 70 +++
 3 files changed, 144 insertions(+)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 24dba86..cc58744 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -21,6 +21,7 @@ use PVE::ReplicationConfig;
 use PVE::GuestHelpers;
 use PVE::QemuConfig;
 use PVE::QemuServer;
+use PVE::QemuServer::Cloudinit;
 use PVE::QemuServer::Drive;
 use PVE::QemuServer::CPUConfig;
 use PVE::QemuServer::Monitor qw(mon_cmd);
@@ -1057,6 +1058,78 @@ __PACKAGE__->register_method({
return PVE::GuestHelpers::config_with_pending_array($conf, 
$pending_delete_hash);
}});
 
+__PACKAGE__->register_method({
+name => 'cloudinit_pending',
+path => '{vmid}/cloudinit',
+method => 'GET',
+proxyto => 'node',
+description => "Get the cloudinit configuration with both current and 
pending values.",
+permissions => {
+   check => ['perm', '/vms/{vmid}', [ 'VM.Audit' ]],
+},
+parameters => {
+   additionalProperties => 0,
+   properties => {
+   node => get_standard_option('pve-node'),
+   vmid => get_standard_option('pve-vmid', { completion => 
\&PVE::QemuServer::complete_vmid }),
+   },
+},
+returns => {
+   type => "array",
+   items => {
+   type => "object",
+   properties => {
+   key => {
+   description => "Configuration option name.",
+   type => 'string',
+   },
+   value => {
+   description => "Current value.",
+   type => 'string',
+   optional => 1,
+   },
+   pending => {
+   description => "Pending value.",
+   type => 'string',
+   optional => 1,
+   },
+   delete => {
+   description => "Indicates a pending delete request if 
present and not 0. " .
+  "The value 2 indicates a force-delete 
request.",
+   type => 'integer',
+   minimum => 0,
+   maximum => 2,
+   optional => 1,
+   },
+   },
+   },
+},
+code => sub {
+   my ($param) = @_;
+
+   my $vmid = $param->{vmid};
+   my $conf = PVE::QemuConfig->load_config($vmid);
+
+   if( defined($conf->{cipassword}) && 
+   defined($conf->{cloudinit}->{cipassword}) && 
+   $conf->{cipassword} ne $conf->{cloudinit}->{cipassword}) {
+   $conf->{cipassword} = '** ';
+   } elsif (defined($conf->{cipassword})) {
+   $conf->{cipassword} = '**';
+   }
+
+   $conf->{cloudinit}->{cipassword} = '**' if 
defined($conf->{cloudinit}->{cipassword});
+
+   my $res = [];
+   my $pending = PVE::QemuServer::Cloudinit::get_pending_config($conf, 
$vmid);
+
+   foreach my $opt (keys %{$pending}) {
+   push @$res, $pending->{$opt};
+   }
+
+   return $res;
+   }});
+
 # POST/PUT {vmid}/config implementation
 #
 # The original API used PUT (idempotent) an we assumed that all operations
diff --git a/PVE/CLI/qm.pm b/PVE/CLI/qm.pm
index 1c199b6..d16bf2c 100755
--- a/PVE/CLI/qm.pm
+++ b/PVE/CLI/qm.pm
@@ -994,6 +994,7 @@ our $cmddef = {
my $data = shift;
print "$data\n";
}],
+   pending => [ "PVE::API2::Qemu", 'cloudinit_pending', ['vmid'], { node 
=> $nodename }, \&PVE::GuestHelpers::format_pending ]
 },
 
 };
diff --git a/PVE/QemuServer/Cloudinit.pm b/PVE/QemuServer/Cloudinit.pm
index abc62b7..156e073 100644
--- a/PVE/QemuServer/Cloudinit.pm
+++ b/PVE/QemuServer/Cloudinit.pm
@@ -607,4 +607,74 @@ sub dump_cloudinit_config {
 }
 }
 
+sub get_pending_config {
+my ($conf, $vmid) = @_;
+
+my $newconf = { %{$conf} };
+my $cloudinit_current = $newconf->{cloudinit};
+my @cloudinit_opts = keys 
%{PVE::QemuServer::cloudinit_config_properties()};
+push @cloudinit_opts, 'name';
+
+#add cloud-init drive 
+my $drives = {};
+PVE::QemuConfig->foreach_volume($newconf, sub {
+   my ($ds, $drive) = @_;
+   $drives->{$ds} = 1 if PVE::QemuServer::drive_is_cloudinit($drive);
+});
+
+PVE::QemuConfig->foreach_volume($cloudinit_current, sub {
+   my ($ds, $drive) = @_;
+   $drives->{$ds} = 1 if PVE::QemuServer::drive_is_cloudinit($drive);
+});
+foreach my $ds (keys %{$drives}) {
+   push @cloudinit_opts, $ds;
+}
+
+$newconf->{name} = "VM$vmid" if !$newconf->{name};
+
+my $print_net_addr = sub {
+   my ($conf, $opt, $netid) = @_;
+
+   if (defined($conf->{$netid})) {
+
+   my $net = PVE::QemuServer::parse_net($conf->{$netid});
+   if (defined($conf->{$opt})) {
+   $conf->{$opt} .= ",m

[pve-devel] [PATCH v3 qemu-server 1/7] cloudinit: add vm config to cloudinit drive

2021-06-09 Thread Alexandre Derumier
To have the currently running cloudinit config available.

An improvement could be to implement parsing of the config drive format,
and also to compare cicustom snippet contents.

Signed-off-by: Alexandre Derumier 
---
 PVE/QemuServer/Cloudinit.pm | 10 ++
 1 file changed, 10 insertions(+)

diff --git a/PVE/QemuServer/Cloudinit.pm b/PVE/QemuServer/Cloudinit.pm
index a5474d3..abc62b7 100644
--- a/PVE/QemuServer/Cloudinit.pm
+++ b/PVE/QemuServer/Cloudinit.pm
@@ -19,6 +19,7 @@ sub commit_cloudinit_disk {
 
 my $path = "/run/pve/cloudinit/$vmid/";
 mkpath $path;
+
 foreach my $filepath (keys %$files) {
if ($filepath !~ m@^(.*)\/[^/]+$@) {
die "internal error: bad file name in cloud-init image: 
$filepath\n";
@@ -30,6 +31,15 @@ sub commit_cloudinit_disk {
file_set_contents("$path/$filepath", $contents);
 }
 
+mkpath "$path/proxmox";
+my $confcontent = "";
+foreach my $opt (keys %$conf) {
+   next if $opt =~ m/^(pending|snapshots|digest)$/;
+   $confcontent .= "$opt: $conf->{$opt}\n";
+}
+
+file_set_contents("$path/proxmox/vmconf", $confcontent);
+
 my $storecfg = PVE::Storage::config();
 my $iso_path = PVE::Storage::path($storecfg, $drive->{file});
 my $scfg = PVE::Storage::storage_config($storecfg, $storeid);
-- 
2.20.1





[pve-devel] [PATCH v3 qemu-server 0/7] cloudinit pending behaviour change

2021-06-09 Thread Alexandre Derumier
Hi,

This is an attempt to clean up the current behaviour of cloudinit online changes.

Currently, we set up cloudinit options as pending until we generate the config drive.

This is not 100% true, because some options like the VM name or a NIC's MAC address
can be changed without going through pending, so the user can't know whether the
drive needs to be regenerated.

Also, custom configuration can be done with snippet files, without any pending state.

Also, some cases are very difficult to handle: if you hotplug a NIC but it fails
(so it stays pending), then you define an ipconfig, and then you revert the hotplug.

(This will be really useful with the IPAM implementation, where the ipconfig pending
state is really needed, as we need to follow the pending state of the netX interface.)

So, instead of setting cloudinit values in pending, this patch series extracts
the current config from the cloudinit drive and compares it to the VM config
(pending config).

(Currently the VM config is simply copied inside the ISO at generation time,
but we could implement config drive format parsers.)

A new specific cloudinit config API is added too, merging ipaddrX && netX MAC
into the same field and displaying the diff between the current and the generated
config. (We could implement reading the config from custom snippets later, too.)
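
To make the comparison a bit more concrete, a stripped-down sketch in plain
Perl (option names and values are made up; the real implementation is
get_pending_config() in patch 4/7, extended in patch 5/7, and the
key/value/pending/delete fields mirror what the new GET cloudinit API returns):

    use strict;
    use warnings;

    # $current: options as extracted from the cloudinit drive
    # $pending: options as currently set in the VM config
    my $current = { ciuser => 'root',  sshkeys => 'ssh-rsa AAAA...' };
    my $pending = { ciuser => 'admin', nameserver => '10.0.0.1' };

    my %all = (%$current, %$pending);
    my @result;
    for my $opt (sort keys %all) {
        my $entry = { key => $opt };
        $entry->{value} = $current->{$opt} if defined($current->{$opt});
        if (!defined($pending->{$opt})) {
            $entry->{delete} = 1; # was on the drive, no longer configured
        } elsif (!defined($current->{$opt}) || $pending->{$opt} ne $current->{$opt}) {
            $entry->{pending} = $pending->{$opt}; # new or changed value
        }
        push @result, $entry;
    }

    for my $entry (@result) {
        print "$entry->{key}: value=" . ($entry->{value} // '-')
            . " pending=" . ($entry->{pending} // '-')
            . " delete=" . ($entry->{delete} // 0) . "\n";
    }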


Changelog V1:

- use [special:cloudinit] instead [CLOUDINIT] for section
- delete config section on drive removal
- config api: move code to new PVE::QemuServer::Cloudinit::get_pending_config
- config api: add "qm cloudinit pending" cli
- add update api to regenerate drive with 1 api call
- add cloudinit hotplug option

Changelog v2:

- fix trailing whitespace in first patch
- revert previous "cloudinit" check in snapshot name (":" character is already forbidden)

Changelog v3:

- extract the current conf from the cloudinit drive instead of writing the special cloudinit section


Alexandre Derumier (7):
  cloudinit: add vm config to cloudinit drive
  cloudinit: generate cloudinit drive on offline plug
  cloudinit: make cloudnit options fastplug
  api2: add cloudinit config api
  cloudinit : add extract_cloudinit_config
  api2: add cloudinit_update
  add cloudinit hotplug

 PVE/API2/Qemu.pm| 118 ++
 PVE/CLI/qm.pm   |   2 +
 PVE/QemuServer.pm   |  81 ++-
 PVE/QemuServer/Cloudinit.pm | 125 
 4 files changed, 296 insertions(+), 30 deletions(-)

-- 
2.20.1





[pve-devel] [PATCH v3 qemu-server 6/7] api2: add cloudinit_update

2021-06-09 Thread Alexandre Derumier
This allows regenerating the config drive with one API call.

This also avoids having to delete the drive volume first and recreate it again.

We can simply:
- eject
- regenerate the volume
- replace it via the QEMU monitor

Signed-off-by: Alexandre Derumier 
---
 PVE/API2/Qemu.pm  | 44 
 PVE/CLI/qm.pm |  3 ++-
 PVE/QemuServer.pm | 26 ++
 3 files changed, 72 insertions(+), 1 deletion(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 8ac3ae3..7843dcb 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -1131,6 +1131,50 @@ __PACKAGE__->register_method({
return $res;
}});
 
+__PACKAGE__->register_method({
+name => 'cloudinit_update',
+path => '{vmid}/cloudinit',
+method => 'PUT',
+protected => 1,
+proxyto => 'node',
+description => "Regenerate and change cloudinit config drive.",
+permissions => {
+   check => ['perm', '/vms/{vmid}', 'VM.Config.Cloudinit', any => 1],
+},
+parameters => {
+   additionalProperties => 0,
+   properties => {
+   node => get_standard_option('pve-node'),
+   vmid => get_standard_option('pve-vmid'),
+   },
+},
+returns => { type => 'null' },
+code => sub {
+   my ($param) = @_;
+
+   my $rpcenv = PVE::RPCEnvironment::get();
+
+   my $authuser = $rpcenv->get_user();
+
+   my $vmid = extract_param($param, 'vmid');
+
+   my $updatefn =  sub {
+
+   my $conf = PVE::QemuConfig->load_config($vmid);
+
+   PVE::QemuConfig->check_lock($conf);
+
+   my $storecfg = PVE::Storage::config();
+
+   PVE::QemuServer::vmconfig_update_cloudinit_drive($storecfg, $conf, 
$vmid);
+
+   };
+
+   PVE::QemuConfig->lock_config($vmid, $updatefn);
+
+   return;
+}});
+
 # POST/PUT {vmid}/config implementation
 #
 # The original API used PUT (idempotent) an we assumed that all operations
diff --git a/PVE/CLI/qm.pm b/PVE/CLI/qm.pm
index d16bf2c..f7c97ed 100755
--- a/PVE/CLI/qm.pm
+++ b/PVE/CLI/qm.pm
@@ -994,7 +994,8 @@ our $cmddef = {
my $data = shift;
print "$data\n";
}],
-   pending => [ "PVE::API2::Qemu", 'cloudinit_pending', ['vmid'], { node 
=> $nodename }, \&PVE::GuestHelpers::format_pending ]
+   pending => [ "PVE::API2::Qemu", 'cloudinit_pending', ['vmid'], { node 
=> $nodename }, \&PVE::GuestHelpers::format_pending ],
+   update => [ "PVE::API2::Qemu", 'cloudinit_update', ['vmid'], { node => 
$nodename }],
 },
 
 };
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 0be4c45..ff5d473 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -4941,6 +4941,32 @@ sub vmconfig_update_disk {
 vm_deviceplug($storecfg, $conf, $vmid, $opt, $drive, $arch, $machine_type);
 }
 
+sub vmconfig_update_cloudinit_drive {
+my ($storecfg, $conf, $vmid) = @_;
+
+my $cloudinit_ds = undef;
+my $cloudinit_drive = undef;
+
+PVE::QemuConfig->foreach_volume($conf, sub {
+   my ($ds, $drive) = @_;
+   if (PVE::QemuServer::drive_is_cloudinit($drive)) {
+   $cloudinit_ds = $ds;
+   $cloudinit_drive = $drive;
+   }
+});
+
+return if !$cloudinit_drive;
+
+my $running = PVE::QemuServer::check_running($vmid);
+my $path = PVE::Storage::path($storecfg, $cloudinit_drive->{file});
+
+mon_cmd($vmid, "eject", force => JSON::true, id => "$cloudinit_ds") if 
$running;
+
+PVE::QemuServer::Cloudinit::generate_cloudinitconfig($conf, $vmid);
+
+mon_cmd($vmid, "blockdev-change-medium", id => "$cloudinit_ds", filename 
=> "$path") if $running;
+}
+
 # called in locked context by incoming migration
 sub vm_migrate_get_nbd_disks {
 my ($storecfg, $conf, $replicated_volumes) = @_;
-- 
2.20.1





[pve-devel] [PATCH v2 pve-manager 1/2] ui: cloudinit : use new cloudinit config api

2021-06-09 Thread Alexandre Derumier
Signed-off-by: Alexandre Derumier 
---
 www/manager6/qemu/CloudInit.js | 85 +++---
 1 file changed, 37 insertions(+), 48 deletions(-)

diff --git a/www/manager6/qemu/CloudInit.js b/www/manager6/qemu/CloudInit.js
index 77ff93d4..573d2945 100644
--- a/www/manager6/qemu/CloudInit.js
+++ b/www/manager6/qemu/CloudInit.js
@@ -82,38 +82,12 @@ Ext.define('PVE.qemu.CloudInit', {
text: gettext('Regenerate Image'),
handler: function() {
let view = this.up('grid');
-   var eject_params = {};
-   var insert_params = {};
-   let disk = PVE.Parser.parseQemuDrive(view.ciDriveId, 
view.ciDrive);
-   var storage = '';
-   var stormatch = disk.file.match(/^([^:]+):/);
-   if (stormatch) {
-   storage = stormatch[1];
-   }
-   eject_params[view.ciDriveId] = 'none,media=cdrom';
-   insert_params[view.ciDriveId] = storage + ':cloudinit';
-
-   var failure = function(response, opts) {
-   Ext.Msg.alert('Error', response.htmlStatus);
-   };
-
Proxmox.Utils.API2Request({
-   url: view.baseurl + '/config',
+   url: view.baseurl + '/cloudinit',
waitMsgTarget: view,
method: 'PUT',
-   params: eject_params,
-   failure: failure,
callback: function() {
-   Proxmox.Utils.API2Request({
-   url: view.baseurl + '/config',
-   waitMsgTarget: view,
-   method: 'PUT',
-   params: insert_params,
-   failure: failure,
-   callback: function() {
-   view.reload();
-   },
-   });
+   view.reload();
},
});
},
@@ -133,13 +107,20 @@ Ext.define('PVE.qemu.CloudInit', {
return;
}
var id = record.data.key;
+   var pending = record.data.pending;
var value = record.data.value;
var ciregex = new RegExp("vm-" + me.pveSelNode.data.vmid + 
"-cloudinit");
-   if (id.match(/^(ide|scsi|sata)\d+$/) && ciregex.test(value)) {
+   if (id.match(/^(ide|scsi|sata)\d+$/)) {
+   if (ciregex.test(pending)) {
+   found = id;
+   me.ciDriveId = found;
+   me.ciDrive = pending;
+   } else if (ciregex.test(value)) {
found = id;
me.ciDriveId = found;
me.ciDrive = value;
}
+   }
});
 
me.down('#savebtn').setDisabled(!found);
@@ -168,6 +149,10 @@ Ext.define('PVE.qemu.CloudInit', {
var me = this;
me.rstore.startUpdate();
},
+   destroy: function() {
+   var me = this;
+   me.rstore.stopUpdate();
+   },
itemdblclick: function() {
var me = this;
me.run_editor();
@@ -188,13 +173,21 @@ Ext.define('PVE.qemu.CloudInit', {
}
var caps = Ext.state.Manager.get('GuiCap');
me.baseurl = '/api2/extjs/nodes/' + nodename + '/qemu/' + vmid;
-   me.url = me.baseurl + '/pending';
+   me.url = me.baseurl + '/cloudinit';
me.editorConfig.url = me.baseurl + '/config';
me.editorConfig.pveSelNode = me.pveSelNode;
 
let caps_ci = caps.vms['VM.Config.Cloudinit'] || 
caps.vms['VM.Config.Network'];
/* editor is string and object */
+
me.rows = {
+   cicustom: {
+   header: gettext('Custom Config'),
+   iconCls: 'fa fa-cogs',
+   editor: undefined,
+   never_delete: true,
+   defaultValue: '',
+   },
ciuser: {
header: gettext('User'),
iconCls: 'fa fa-user',
@@ -239,6 +232,13 @@ Ext.define('PVE.qemu.CloudInit', {
return value || Proxmox.Utils.noneText;
},
},
+   name: {
+   header: gettext('Hostname'),
+   iconCls: 'fa fa-globe',
+   editor: undefined,
+   never_delete: true,
+   defaultValue: '',
+   },
searchdomain: {
header: gettext('DNS domain'),
iconCls: 'fa fa-globe',
@@ -288,35 +288,24 @@ Ext.define('PVE.qemu.CloudInit', {
},
};
var i;
-   var ipconfig_renderer = function(value, md, record, ri, ci, store, 
pending) {
-   var id = record.data.key;
-   var match = id.match(/^net(\d+)$/);
-   var val = '';
-   if (match) {
-   val = me.getObjectValue('ipconfig'+match[1],

[pve-devel] [PATCH v2 pve-manager 0/2] cloudinit pending behaviour change

2021-06-09 Thread Alexandre Derumier
Implement the new cloudinit API from the last qemu-server patch series.

I haven't tuned it yet: the rstore polls the API each second, so the config is
extracted from the cloudinit drive each time. I need to check how to load it
only once, or only when it has been regenerated.


Changelog v2:

- rebase to last master

Alexandre Derumier (2):
  ui: cloudinit : use new cloudinit config api
  ui: add cloudinit hotplug option

 www/manager6/Utils.js   |  2 +
 www/manager6/form/HotplugFeatureSelector.js |  4 +
 www/manager6/qemu/CloudInit.js  | 85 +
 3 files changed, 43 insertions(+), 48 deletions(-)

-- 
2.20.1





[pve-devel] [PATCH v2 pve-manager 2/2] ui: add cloudinit hotplug option

2021-06-09 Thread Alexandre Derumier
Signed-off-by: Alexandre Derumier 
---
 www/manager6/Utils.js   | 2 ++
 www/manager6/form/HotplugFeatureSelector.js | 4 
 2 files changed, 6 insertions(+)

diff --git a/www/manager6/Utils.js b/www/manager6/Utils.js
index d9567979..23babb8b 100644
--- a/www/manager6/Utils.js
+++ b/www/manager6/Utils.js
@@ -444,6 +444,8 @@ Ext.define('PVE.Utils', {
fa.push(gettext('Memory'));
} else if (el === 'cpu') {
fa.push(gettext('CPU'));
+   } else if (el === 'cloudinit') {
+   fa.push(gettext('Cloudinit'));
} else {
fa.push(el);
}
diff --git a/www/manager6/form/HotplugFeatureSelector.js 
b/www/manager6/form/HotplugFeatureSelector.js
index cb9fc552..ffb19918 100644
--- a/www/manager6/form/HotplugFeatureSelector.js
+++ b/www/manager6/form/HotplugFeatureSelector.js
@@ -33,6 +33,10 @@ Ext.define('PVE.form.HotplugFeatureSelector', {
boxLabel: gettext('CPU'),
inputValue: 'cpu',
},
+   {
+   boxLabel: 'Cloudinit',
+   inputValue: 'cloudinit',
+   },
 ],
 
 setValue: function(value) {
-- 
2.20.1





[pve-devel] [PATCH multiple] btrfs, file system for the brave

2021-06-09 Thread Wolfgang Bumiller
This is another take at btrfs storage support.
I wouldn't exactly call it great, but I guess it works (although I did
manage to break a few... Then again I also manged to do that with ZFS
(it just took a few years longer there...)).

This one's spread over quite a few repositories, so let's go through
them in apply-order:

* pve-common:

  One nice improvement since the last go around is that by now btrfs
  supports renameat2's `RENAME_EXCHANGE` flag.

  * PATCH 1/1: Syscalls/Tools: add renameat2

The idea here is to have a more robust "rollback" implementation,
since "snapshots" in btrfs are really just losely connected
subvolumes, and there is no direct rollback functionality.  Instead,
we simply clone the snapshot we want to roll back to (by making a
writable snapshot), and then rotate the clone into place before
cleaning up the now-old version.
Without `RENAME_EXCHANGE` this rotation required 2 syscalls
creating a small window where, if the process is stopped/killed,
the volume we're working on would not live in its designated place,
making it somewhat nasty to deal with. Now, the worst that happens
is an extra left-over snapshot lying around.
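
    For illustration, a rough sketch of that rotation (paths are made up, the
    real plugin code does more bookkeeping, and this assumes the renameat2
    helper and RENAME_EXCHANGE constant from the pve-common patch below):

        use strict;
        use warnings;

        use PVE::Tools qw(run_command);

        # made-up paths for a subvolume-backed volume and one of its snapshots
        my $subvol = '/btrfs-pool/images/100/vm-100-disk-0';
        my $snap = $subvol . '@snap1';
        my $clone = $subvol . '.rollback';

        # 1) create a writable snapshot (clone) of the snapshot to roll back to
        run_command(['btrfs', 'subvolume', 'snapshot', $snap, $clone]);

        # 2) atomically swap the clone into place (this is where RENAME_EXCHANGE helps)
        PVE::Tools::renameat2(
            PVE::Tools::AT_FDCWD(), $clone,
            PVE::Tools::AT_FDCWD(), $subvol,
            PVE::Tools::RENAME_EXCHANGE(),
        ) or die "exchanging '$clone' and '$subvol' failed - $!\n";

        # 3) clean up the now-old version, which ended up under the clone's name
        run_command(['btrfs', 'subvolume', 'delete', $clone]);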

* pve-storage:

  * PATCH 1/4: fix find_free_disk_name invocations

Just a non-issue I ran into (the parameter isn't actually used by
our implementors currently, but it confused me ;-) ).

  * PATCH 2/4: add BTRFS storage plugin

The main implementation with btrfs send/recv saved up for patch 4.
(There's a note about `mkdir` vs `populate` etc., I intend to clean
this up later, we had some off-list discussion about this
already...)

Currently, container subvolumes are only allowed to be unsized
(size zero, like with our plain directory storage subvols), though
we *could* enable quota support with little effort, but quota
information is lost in send/recv operations, so we need to cover
this in our import/export format separately, if we want to.
(Although I have a feeling it wouldn't be nice for performance
anyway...)

  * PATCH 3/4: update import/export storage API

_Technically_ I *could* do without, but it would be quite
inconvenient, and the information it adds to the methods is usually
readily available, so I think this makes sense.

  * PATCH 4/4: btrfs: add 'btrfs' import/export format

This requires a bit more elbow grease than ZFS, though, so I split
this out into a separate patch.

* pve-container:

  * PATCH 1/2: migration: fix snapshots boolean accounting

(The `with_snapshots` parameter is otherwise not set correctly since we handle
the base volume last.)

  * PATCH 2/2: enable btrfs support via subvolumes

Some of this stuff should probably become a storage property...
For container volumes which aren't _unsized_ this still allocates
an ext4 formatted raw image. For size=0 volumes we'll have an
actual btrfs subvolume.

* qemu-server:

  * PATCH 1/1: allow migrating raw btrfs volumes

Like in pve-container, some of this stuff should probably become a
storage property...

--
Big Terrifying Risky File System





[pve-devel] [PATCH common] Syscalls/Tools: add renameat2

2021-06-09 Thread Wolfgang Bumiller
Mostly for the ability to atomically swap files.
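
A tiny usage sketch (file names made up), swapping a freshly written file into
place in one atomic step:

    use strict;
    use warnings;

    use PVE::Tools;

    # after the exchange, 'config' is the new version and 'config.new' holds the old one
    PVE::Tools::renameat2(
        PVE::Tools::AT_FDCWD(), 'config.new',
        PVE::Tools::AT_FDCWD(), 'config',
        PVE::Tools::RENAME_EXCHANGE(),
    ) or die "rename exchange failed - $!\n";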

Signed-off-by: Wolfgang Bumiller 
---
 src/PVE/Syscall.pm |  1 +
 src/PVE/Tools.pm   | 10 ++
 2 files changed, 11 insertions(+)

diff --git a/src/PVE/Syscall.pm b/src/PVE/Syscall.pm
index 2d5019f..10e185d 100644
--- a/src/PVE/Syscall.pm
+++ b/src/PVE/Syscall.pm
@@ -17,6 +17,7 @@ BEGIN {
setresuid => &SYS_setresuid,
fchownat => &SYS_fchownat,
mount => &SYS_mount,
+   renameat2 => &SYS_renameat2,
 
# These use asm-generic, so they're the same across (sane) 
architectures. We use numbers
# since they're not in perl's syscall.ph yet...
diff --git a/src/PVE/Tools.pm b/src/PVE/Tools.pm
index 16ae3d2..63bf075 100644
--- a/src/PVE/Tools.pm
+++ b/src/PVE/Tools.pm
@@ -104,6 +104,11 @@ use constant {O_PATH=> 0x0020,
 use constant {AT_EMPTY_PATH => 0x1000,
   AT_FDCWD => -100};
 
+# from 
+use constant {RENAME_NOREPLACE => (1 << 0),
+  RENAME_EXCHANGE  => (1 << 1),
+  RENAME_WHITEOUT  => (1 << 2)};
+
 sub run_with_timeout {
 my ($timeout, $code, @param) = @_;
 
@@ -1461,6 +1466,11 @@ sub fsync($) {
 return 0 == syscall(PVE::Syscall::fsync, $fileno);
 }
 
+sub renameat2($$$$$) {
+my ($olddirfd, $oldpath, $newdirfd, $newpath, $flags) = @_;
+return 0 == syscall(PVE::Syscall::renameat2, $olddirfd, $oldpath, $newdirfd, $newpath, $flags);
+}
+
 sub sync_mountpoint {
 my ($path) = @_;
 sysopen my $fd, $path, O_RDONLY|O_CLOEXEC or die "failed to open $path: 
$!\n";
-- 
2.30.2






[pve-devel] [PATCH container 1/2] migration: fix snapshots boolean accounting

2021-06-09 Thread Wolfgang Bumiller
Signed-off-by: Wolfgang Bumiller 
---
 src/PVE/LXC/Migrate.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/PVE/LXC/Migrate.pm b/src/PVE/LXC/Migrate.pm
index b5917e9..4370a3d 100644
--- a/src/PVE/LXC/Migrate.pm
+++ b/src/PVE/LXC/Migrate.pm
@@ -144,7 +144,7 @@ sub phase1 {
}
 
$volhash->{$volid}->{ref} = defined($snapname) ? 'snapshot' : 'config';
-   $volhash->{$volid}->{snapshots} = defined($snapname);
+   $volhash->{$volid}->{snapshots} = 1 if defined($snapname);
 
my ($path, $owner) = PVE::Storage::path($self->{storecfg}, $volid);
 
-- 
2.30.2






[pve-devel] [PATCH storage 1/4] fix find_free_disk_name invocations

2021-06-09 Thread Wolfgang Bumiller
The interface takes the storeid now, not the image dir.

Signed-off-by: Wolfgang Bumiller 
---
 PVE/Storage/GlusterfsPlugin.pm | 4 ++--
 PVE/Storage/Plugin.pm  | 4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/PVE/Storage/GlusterfsPlugin.pm b/PVE/Storage/GlusterfsPlugin.pm
index 5ec2f42..599bca2 100644
--- a/PVE/Storage/GlusterfsPlugin.pm
+++ b/PVE/Storage/GlusterfsPlugin.pm
@@ -215,7 +215,7 @@ sub clone_image {
 
 mkpath $imagedir;
 
-my $name = $class->find_free_diskname($imagedir, $scfg, $vmid, "qcow2", 1);
+my $name = $class->find_free_diskname($storeid, $scfg, $vmid, "qcow2", 1);
 
 warn "clone $volname: $vtype, $name, $vmid to $name 
(base=../$basevmid/$basename)\n";
 
@@ -243,7 +243,7 @@ sub alloc_image {
 
 mkpath $imagedir;
 
-$name = $class->find_free_diskname($imagedir, $scfg, $vmid, $fmt, 1) if 
!$name;
+$name = $class->find_free_diskname($storeid, $scfg, $vmid, $fmt, 1) if 
!$name;
 
 my (undef, $tmpfmt) = parse_name_dir($name);
 
diff --git a/PVE/Storage/Plugin.pm b/PVE/Storage/Plugin.pm
index 4a10a1f..318d13a 100644
--- a/PVE/Storage/Plugin.pm
+++ b/PVE/Storage/Plugin.pm
@@ -695,7 +695,7 @@ sub clone_image {
 
 mkpath $imagedir;
 
-my $name = $class->find_free_diskname($imagedir, $scfg, $vmid, "qcow2", 1);
+my $name = $class->find_free_diskname($storeid, $scfg, $vmid, "qcow2", 1);
 
 warn "clone $volname: $vtype, $name, $vmid to $name 
(base=../$basevmid/$basename)\n";
 
@@ -727,7 +727,7 @@ sub alloc_image {
 
 mkpath $imagedir;
 
-$name = $class->find_free_diskname($imagedir, $scfg, $vmid, $fmt, 1) if 
!$name;
+$name = $class->find_free_diskname($storeid, $scfg, $vmid, $fmt, 1) if 
!$name;
 
 my (undef, $tmpfmt) = parse_name_dir($name);
 
-- 
2.30.2






[pve-devel] [PATCH container 2/2] enable btrfs support via subvolumes

2021-06-09 Thread Wolfgang Bumiller
Signed-off-by: Wolfgang Bumiller 
---
 src/PVE/LXC.pm | 4 +++-
 src/PVE/LXC/Migrate.pm | 7 ---
 2 files changed, 7 insertions(+), 4 deletions(-)

diff --git a/src/PVE/LXC.pm b/src/PVE/LXC.pm
index bb1cbdb..9407d24 100644
--- a/src/PVE/LXC.pm
+++ b/src/PVE/LXC.pm
@@ -1871,7 +1871,9 @@ sub alloc_disk {
 
 eval {
my $do_format = 0;
-   if ($scfg->{type} eq 'dir' || $scfg->{type} eq 'nfs' || $scfg->{type} 
eq 'cifs' ) {
+   if ($scfg->{type} eq 'dir' || $scfg->{type} eq 'nfs' || $scfg->{type} 
eq 'cifs'
+   || $scfg->{type} eq 'btrfs'
+   ) {
if ($size_kb > 0) {
$volid = PVE::Storage::vdisk_alloc($storecfg, $storage, $vmid, 
'raw',
   undef, $size_kb);
diff --git a/src/PVE/LXC/Migrate.pm b/src/PVE/LXC/Migrate.pm
index 4370a3d..15c6027 100644
--- a/src/PVE/LXC/Migrate.pm
+++ b/src/PVE/LXC/Migrate.pm
@@ -154,7 +154,7 @@ sub phase1 {
if (defined($snapname)) {
# we cannot migrate shapshots on local storage
# exceptions: 'zfspool'
-   if (($scfg->{type} eq 'zfspool')) {
+   if ($scfg->{type} eq 'zfspool' || $scfg->{type} eq 'btrfs') {
return;
}
die "non-migratable snapshot exists\n";
@@ -218,8 +218,9 @@ sub phase1 {
my ($sid, $volname) = PVE::Storage::parse_volume_id($volid);
my $scfg =  PVE::Storage::storage_config($self->{storecfg}, $sid);
 
-   my $migratable = ($scfg->{type} eq 'dir') || ($scfg->{type} eq 
'zfspool') ||
-   ($scfg->{type} eq 'lvmthin') || ($scfg->{type} eq 'lvm');
+   my $migratable = ($scfg->{type} eq 'dir') || ($scfg->{type} eq 
'zfspool')
+   || ($scfg->{type} eq 'lvmthin') || ($scfg->{type} eq 'lvm')
+   || ($scfg->{type} eq 'btrfs');
 
die "storage type '$scfg->{type}' not supported\n"
if !$migratable;
-- 
2.30.2






[pve-devel] [PATCH qemu-server] allow migrating raw btrfs volumes

2021-06-09 Thread Wolfgang Bumiller
Signed-off-by: Wolfgang Bumiller 
---
 PVE/QemuMigrate.pm | 7 +--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index 6375a15..5b07c20 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -502,7 +502,10 @@ sub scan_local_volumes {
# exceptions: 'zfspool' or 'qcow2' files (on directory storage)
 
die "online storage migration not possible if snapshot 
exists\n" if $self->{running};
-   if (!($scfg->{type} eq 'zfspool' || 
$local_volumes->{$volid}->{format} eq 'qcow2')) {
+   if (!($scfg->{type} eq 'zfspool'
+   || ($scfg->{type} eq 'btrfs' && 
$local_volumes->{$volid}->{format} eq 'raw')
+   || $local_volumes->{$volid}->{format} eq 'qcow2'
+   )) {
die "non-migratable snapshot exists\n";
}
}
@@ -553,7 +556,7 @@ sub scan_local_volumes {
my ($sid, $volname) = PVE::Storage::parse_volume_id($volid);
my $scfg =  PVE::Storage::storage_config($storecfg, $sid);
 
-   my $migratable = $scfg->{type} =~ /^(?:dir|zfspool|lvmthin|lvm)$/;
+   my $migratable = $scfg->{type} =~ 
/^(?:dir|btrfs|zfspool|lvmthin|lvm)$/;
 
die "can't migrate '$volid' - storage type '$scfg->{type}' not 
supported\n"
if !$migratable;
-- 
2.30.2






[pve-devel] [PATCH storage 4/4] btrfs: add 'btrfs' import/export format

2021-06-09 Thread Wolfgang Bumiller
Signed-off-by: Wolfgang Bumiller 
---
 PVE/CLI/pvesm.pm   |   2 +-
 PVE/Storage.pm |   2 +-
 PVE/Storage/BTRFSPlugin.pm | 248 +++--
 3 files changed, 240 insertions(+), 12 deletions(-)

diff --git a/PVE/CLI/pvesm.pm b/PVE/CLI/pvesm.pm
index b22f759..ef2f5e6 100755
--- a/PVE/CLI/pvesm.pm
+++ b/PVE/CLI/pvesm.pm
@@ -30,7 +30,7 @@ use PVE::CLIHandler;
 
 use base qw(PVE::CLIHandler);
 
-my $KNOWN_EXPORT_FORMATS = ['raw+size', 'tar+size', 'qcow2+size', 'vmdk+size', 
'zfs'];
+my $KNOWN_EXPORT_FORMATS = ['raw+size', 'tar+size', 'qcow2+size', 'vmdk+size', 
'zfs', 'btrfs'];
 
 my $nodename = PVE::INotify::nodename();
 
diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index 15fcedf..6f5a033 100755
--- a/PVE/Storage.pm
+++ b/PVE/Storage.pm
@@ -692,7 +692,7 @@ sub storage_migrate {
 
 my $migration_snapshot;
 if (!defined($snapshot)) {
-   if ($scfg->{type} eq 'zfspool') {
+   if ($scfg->{type} eq 'zfspool' || $scfg->{type} eq 'btrfs') {
$migration_snapshot = 1;
$snapshot = '__migration__';
}
diff --git a/PVE/Storage/BTRFSPlugin.pm b/PVE/Storage/BTRFSPlugin.pm
index 733be6e..87e48c7 100644
--- a/PVE/Storage/BTRFSPlugin.pm
+++ b/PVE/Storage/BTRFSPlugin.pm
@@ -9,8 +9,9 @@ use Fcntl qw(S_ISDIR);
 use File::Basename qw(dirname);
 use File::Path qw(mkpath);
 use IO::Dir;
+use POSIX qw(EEXIST);
 
-use PVE::Tools qw(run_command);
+use PVE::Tools qw(run_command dir_glob_foreach);
 
 use PVE::Storage::DirPlugin;
 
@@ -594,23 +595,250 @@ sub list_images {
 return $res;
 }
 
-# For now we don't implement `btrfs send/recv` as it needs some updates to our import/export API
-# first!
-
 sub volume_export_formats {
-return PVE::Storage::DirPlugin::volume_export_formats(@_);
-}
+my ($class, $scfg, $storeid, $volname, $snapshot, $base_snapshot, $with_snapshots) = @_;
 
-sub volume_export {
-return PVE::Storage::DirPlugin::volume_export(@_);
+# We can do whatever `DirPlugin` can do.
+my @result = PVE::Storage::Plugin::volume_export_formats(@_);
+
+# `btrfs send` only works on snapshots:
+return @result if !defined $snapshot;
+
+# Incremental stream with snapshots is only supported if the snapshots are listed (new api):
+return @result if defined($base_snapshot) && $with_snapshots && ref($with_snapshots) ne 'ARRAY';
+
+# Otherwise we do also support `with_snapshots`.
+
+# Finally, `btrfs send` only works on formats where we actually use btrfs subvolumes:
+my $format = ($class->parse_volname($volname))[6];
+return @result if $format ne 'raw' && $format ne 'subvol';
+
+return ('btrfs', @result);
 }
 
 sub volume_import_formats {
-return PVE::Storage::DirPlugin::volume_import_formats(@_);
+my ($class, $scfg, $storeid, $volname, $base_snapshot, $with_snapshots, $snapshot) = @_;
+
+# Same as export-formats, beware the parameter order:
+return volume_export_formats(
+   $class,
+   $scfg,
+   $storeid,
+   $volname,
+   $snapshot,
+   $base_snapshot,
+   $with_snapshots,
+);
+}
+
+sub volume_export {
+my (
+   $class,
+   $scfg,
+   $storeid,
+   $fh,
+   $volname,
+   $format,
+   $snapshot,
+   $base_snapshot,
+   $with_snapshots,
+) = @_;
+
+if ($format ne 'btrfs') {
+   return PVE::Storage::Plugin::volume_export(@_);
+}
+
+die "format 'btrfs' only works on snapshots\n"
+   if !defined $snapshot;
+
+die "'btrfs' format in incremental mode requires snapshots to be listed 
explicitly\n"
+   if defined($base_snapshot) && $with_snapshots && ref($with_snapshots) 
ne 'ARRAY';
+
+my $volume_format = ($class->parse_volname($volname))[6];
+
+die "btrfs-sending volumes of type $volume_format ('$volname') is not 
supported\n"
+   if $volume_format ne 'raw' && $volume_format ne 'subvol';
+
+my $path = $class->path($scfg, $volname, $storeid);
+
+if ($volume_format eq 'raw') {
+   $path = raw_file_to_subvol($path);
+}
+
+my $cmd = ['btrfs', '-q', 'send', '-e'];
+if ($base_snapshot) {
+   my $base = $class->path($scfg, $volname, $storeid, $base_snapshot);
+   if ($volume_format eq 'raw') {
+   $base = raw_file_to_subvol($base);
+   }
+   push @$cmd, '-p', $base;
+}
+push @$cmd, '--';
+if (ref($with_snapshots) eq 'ARRAY') {
+   push @$cmd, (map { "$path\@$_" } ($with_snapshots // [])->@*), $path;
+} else {
+   dir_glob_foreach(dirname($path), $BTRFS_VOL_REGEX, sub {
+   push @$cmd, "$path\@$_[2]" if !(defined($snapshot) && $_[2] eq $snapshot);
+   });
+}
+$path .= "\@$snapshot" if defined($snapshot);
+push @$cmd, $path;
+
+run_command($cmd, output => '>&'.fileno($fh));
+return;
 }
 
 sub volume_import {
-return PVE::Storage::DirPlugin::volume_import(@_);
+my (
+   $class,
+   $scfg,
+   $storeid,
+   $fh,
+   $volname,
+   $format,

[pve-devel] [PATCH storage 3/4] update import/export storage API

2021-06-09 Thread Wolfgang Bumiller
This bumps APIVER, but also APIAGE, see below.

The import methods (volume_import, volume_import_formats):
  These additionally get the '$snapshot' parameter, which is
  already present on the export side, as an informational
  piece to know which of the snapshots is the *current* one.
  The current "disk" state will be set to this snapshot.
  This, too, is required for our btrfs implementation.
`volume_import_formats` can obviously not make much
  *use* of this parameter, but it'll still be useful to know
  that the information is actually available in the import
  call, so its presence will be checked in the btrfs
  implementation.

Currently this is intended to be used for btrfs send/recv
support, which in theory could also get additional metadata
similar to how we do the "tar+size" format; however, we
currently only really use this within this repository in
storage_migrate(), which has this information readily
available anyway.

On the export side (volume_export, volume_export_formats):
  The `$with_snapshots` option can now also be an ordered
  array of snapshots to include, as a hint for storages
  which need this. (As of the next commit this is only
  btrfs, and only when also specifying a base snapshot,
  which is a case we can currently not run into except on
  the command line interface.)
The current providers of the `with_snapshots` option will
  still treat it as a boolean (since eg. for ZFS you cannot
  really "skip" snapshots AFAIK).
This is mainly intended for storages which do not have a
  strong association between snapshots and the originals, or
  an ordering (eg. btrfs and lvm-thin allow creating
  arbitrary snapshot trees, and with btrfs you can even
  create a "circular" connection between subvolumes, also we
  could consider treating reflink-based copies on xfs as
  snapshots in the future, maybe?)

While `import_formats` and `export_formats` now take the same
parameters, I appended the `$snapshot` parameter to the end of
`import_formats` for compatibility.
Therefore we also bump APIAGE, as apart from the btrfs
storage, none of our other storages need the new parameter.
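
To illustrate the boolean-vs-array convention above, here is a minimal
sketch (hypothetical helper, not part of the patch) of how a plugin
implementation can tell the two cases apart:

    use strict;
    use warnings;

    # Hypothetical helper, not part of this patch: callers may pass either a
    # plain truthy/falsy flag or an ordered array reference of snapshot names.
    sub snapshot_list_or_undef {
        my ($with_snapshots) = @_;

        # new-style callers hand over the explicit, ordered list
        return $with_snapshots if ref($with_snapshots) eq 'ARRAY';

        # old-style callers only signal "include snapshots", there is no list
        return undef;
    }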

Signed-off-by: Wolfgang Bumiller 
---
 PVE/CLI/pvesm.pm | 23 +--
 PVE/Storage.pm   | 48 
 2 files changed, 53 insertions(+), 18 deletions(-)

diff --git a/PVE/CLI/pvesm.pm b/PVE/CLI/pvesm.pm
index d28f1ba..b22f759 100755
--- a/PVE/CLI/pvesm.pm
+++ b/PVE/CLI/pvesm.pm
@@ -276,12 +276,23 @@ __PACKAGE__->register_method ({
optional => 1,
default => 0,
},
+   'snapshot-list' => {
+   description => "Ordered list of snapshots to transfer",
+   type => 'string',
+   format => 'string-list',
+   optional => 1,
+   },
},
 },
 returns => { type => 'null' },
 code => sub {
my ($param) = @_;
 
+   my $with_snapshots = $param->{'with-snapshots'};
+   if (defined(my $list = $param->{'snapshot-list'})) {
+   $with_snapshots = PVE::Tools::split_list($list);
+   }
+
my $filename = $param->{filename};
 
my $outfh;
@@ -295,7 +306,7 @@ __PACKAGE__->register_method ({
eval {
my $cfg = PVE::Storage::config();
PVE::Storage::volume_export($cfg, $outfh, $param->{volume}, $param->{format},
-   $param->{snapshot}, $param->{base}, $param->{'with-snapshots'});
+   $param->{snapshot}, $param->{base}, $with_snapshots);
};
my $err = $@;
if ($filename ne '-') {
@@ -361,6 +372,13 @@ __PACKAGE__->register_method ({
optional => 1,
default => 0,
},
+   snapshot => {
+   description => "The current-state snapshot if the stream contains snapshots",
+   type => 'string',
+   pattern => qr/[a-z0-9_\-]{1,40}/i,
+   maxLength => 40,
+   optional => 1,
+   },
},
 },
 returns => { type => 'string' },
@@ -436,7 +454,8 @@ __PACKAGE__->register_method ({
my $volume = $param->{volume};
my $delete = $param->{'delete-snapshot'};
my $imported_volid = PVE::Storage::volume_import($cfg, $infh, $volume, $param->{format},
-   $param->{base}, $param->{'with-snapshots'}, $param->{'allow-rename'});
+   $param->{base}, $param->{'with-snapshots'}, $param->{'allow-rename'},
+   $param->{snapshot});
PVE::Storage::volume_snapshot_delete($cfg, $imported_volid, $delete)
if defined($delete);
return $imported_volid;
diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index f9f8d16..15fcedf 100755
--- a/PVE/Storage.pm
+++ b/PVE/Storage.pm
@@ -41,11 +41,11 @@ use PVE::Storage::PBSPlugin;
 use PVE::Storage::BTRFSPlugin;
 
 # Storage API version. Increment it on changes in storage API interface.
-use constant APIVER => 8;
+use constant APIVER => 9;
 # Age is the n

[pve-devel] [PATCH storage 2/4] add BTRFS storage plugin

2021-06-09 Thread Wolfgang Bumiller
This is mostly the same as a directory storage, with 3 major
differences (a short path-mapping sketch follows the list):

* 'subvol' volumes are actual btrfs subvolumes and therefore
  allow snapshots
* 'raw' files are placed *into* a subvolume and therefore
  also allow snapshots, the raw file for volume
  `btrstore:100/vm-100-disk-1.raw` can be found under
  `$path/images/100/vm-100-disk-1/disk.raw`
* snapshots add an '@name' suffix to the subvolume's name,
  so snapshot 'foo' of the above would be found under
  `$path/images/100/vm-100-disk-1@foo/disk.raw`
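
To make the mapping above concrete, a rough sketch (illustration only,
hypothetical helper; the real plugin code differs in detail):

    use strict;
    use warnings;

    # Illustration of the layout described above; not the actual plugin code.
    sub example_image_path {
        my ($storage_path, $vmid, $volname, $snapshot) = @_;

        if ($volname =~ /^(.*)\.raw$/) {
            # raw files live *inside* a per-volume subvolume
            my $subvol = defined($snapshot) ? "$1\@$snapshot" : $1;
            return "$storage_path/images/$vmid/$subvol/disk.raw";
        }

        # 'subvol' volumes are btrfs subvolumes themselves
        my $subvol = defined($snapshot) ? "$volname\@$snapshot" : $volname;
        return "$storage_path/images/$vmid/$subvol";
    }

    # example_image_path('/mnt/btrstore', 100, 'vm-100-disk-1.raw', 'foo')
    #   => '/mnt/btrstore/images/100/vm-100-disk-1@foo/disk.raw'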

Note that qgroups aren't included in btrfs-send streams;
therefore, for now, we will only use *unsized* subvolumes
for containers and place a regular raw+ext4 file for sized
containers.
We could extend the import/export stream format to include
the information at the front (similar to how we do the
"tar+size" format), but we would need to include the size of
all the contained snapshots as well, since they can
technically change. (Before enabling quotas we should also do
some performance testing on bigger file systems with multiple
snapshots, as there are quite a few reports of the fs slowing
down considerably in such scenarios.)

Signed-off-by: Wolfgang Bumiller 
---
 PVE/Storage.pm |   2 +
 PVE/Storage/BTRFSPlugin.pm | 616 +
 PVE/Storage/Makefile   |   1 +
 3 files changed, 619 insertions(+)
 create mode 100644 PVE/Storage/BTRFSPlugin.pm

diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index aa36bad..f9f8d16 100755
--- a/PVE/Storage.pm
+++ b/PVE/Storage.pm
@@ -38,6 +38,7 @@ use PVE::Storage::GlusterfsPlugin;
 use PVE::Storage::ZFSPoolPlugin;
 use PVE::Storage::ZFSPlugin;
 use PVE::Storage::PBSPlugin;
+use PVE::Storage::BTRFSPlugin;
 
 # Storage API version. Increment it on changes in storage API interface.
 use constant APIVER => 8;
@@ -60,6 +61,7 @@ PVE::Storage::GlusterfsPlugin->register();
 PVE::Storage::ZFSPoolPlugin->register();
 PVE::Storage::ZFSPlugin->register();
 PVE::Storage::PBSPlugin->register();
+PVE::Storage::BTRFSPlugin->register();
 
 # load third-party plugins
 if ( -d '/usr/share/perl5/PVE/Storage/Custom' ) {
diff --git a/PVE/Storage/BTRFSPlugin.pm b/PVE/Storage/BTRFSPlugin.pm
new file mode 100644
index 000..733be6e
--- /dev/null
+++ b/PVE/Storage/BTRFSPlugin.pm
@@ -0,0 +1,616 @@
+package PVE::Storage::BTRFSPlugin;
+
+use strict;
+use warnings;
+
+use base qw(PVE::Storage::Plugin);
+
+use Fcntl qw(S_ISDIR);
+use File::Basename qw(dirname);
+use File::Path qw(mkpath);
+use IO::Dir;
+
+use PVE::Tools qw(run_command);
+
+use PVE::Storage::DirPlugin;
+
+use constant {
+BTRFS_FIRST_FREE_OBJECTID => 256,
+};
+
+# Configuration (similar to DirPlugin)
+
+sub type {
+return 'btrfs';
+}
+
+sub plugindata {
+return {
+   content => [
+   {
+   images => 1,
+   rootdir => 1,
+   vztmpl => 1,
+   iso => 1,
+   backup => 1,
+   snippets => 1,
+   none => 1,
+   },
+   { images => 1, rootdir => 1 },
+   ],
+   format => [ { raw => 1, qcow2 => 1, vmdk => 1, subvol => 1 }, 'raw', ],
+};
+}
+
+sub options {
+return {
+   path => { fixed => 1 },
+   nodes => { optional => 1 },
+   shared => { optional => 1 },
+   disable => { optional => 1 },
+   maxfiles => { optional => 1 },
+   content => { optional => 1 },
+   format => { optional => 1 },
+   is_mountpoint => { optional => 1 },
+   # TODO: The new variant of mkdir with  `populate` vs `create`...
+};
+}
+
+# Storage implementation
+#
+# We use the same volume names as directory plugins, but map *raw* disk image file names into a
+# subdirectory.
+#
+# `vm-VMID-disk-ID.raw`
+#   -> `images/VMID/vm-VMID-disk-ID/disk.raw`
+#   where the `vm-VMID-disk-ID/` subdirectory is a btrfs subvolume
+
+# Reuse `DirPlugin`'s `check_config`. This simply checks for invalid paths.
+sub check_config {
+my ($self, $sectionId, $config, $create, $skipSchemaCheck) = @_;
+    return PVE::Storage::DirPlugin::check_config($self, $sectionId, $config, $create, $skipSchemaCheck);
+}
+
+sub activate_storage {
+my ($class, $storeid, $scfg, $cache) = @_;
+    return PVE::Storage::DirPlugin::activate_storage($class, $storeid, $scfg, $cache);
+}
+
+sub status {
+my ($class, $storeid, $scfg, $cache) = @_;
+return PVE::Storage::DirPlugin::status($class, $storeid, $scfg, $cache);
+}
+
+# TODO: sub get_volume_notes {}
+
+# TODO: sub update_volume_notes {}
+
+# croak would not include the caller from within this module
+sub __error {
+my ($msg) = @_;
+my (undef, $f, $n) = caller(1);
+die "$msg at $f: $n\n";
+}
+
+# Given a name (eg. `vm-VMID-disk-ID.raw`), take the part up to the format suffix as the name of
+# the subdirectory (subvolume).
+sub raw_name_to_dir($) {
+my ($raw) = @_;
+
+# For the subvolume directory, strip the `.<format>` suffix:
+if ($raw =~ /^(.*)\.raw$/) {
+   return $1;
+}
+
+__error "intern

Re: [pve-devel] [RFC series 0/2] Show more vlan infos

2021-06-09 Thread Aaron Lauterer

bump, just in case this has been overlooked :)

On 4/12/21 3:14 PM, Aaron Lauterer wrote:

The main motivation here is to make the VLAN tags configured for an
interface better visible. The approach taken in this RFC is to use the
already existing vlan-id and vlan-raw-device values. These were only
present if the vlan device was configured with those explicit options,
available with ifupdown2.

For the other way, using the dot notation, the type was detected
correctly, but no further information about the vlan id and the used
device was present. Therefore INotify.pm has been changed to set the
same values for dot-notation interfaces. This results in the API
delivering the same information, no matter which type of vlan interface
it is.
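
A minimal sketch of that idea (hypothetical code, not the actual
INotify.pm change), deriving the two values from a dot-notation name
such as 'vmbr0.20':

    use strict;
    use warnings;

    # Hypothetical helper, not the actual INotify.pm change.
    sub derive_vlan_attrs {
        my ($iface) = @_;
        return unless $iface =~ /^(\S+)\.(\d+)$/;
        return ('vlan-raw-device' => $1, 'vlan-id' => $2);
    }

    my %attrs = derive_vlan_attrs('vmbr0.20');
    # => ('vlan-raw-device' => 'vmbr0', 'vlan-id' => '20')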

Since the vlan-id and vlan-raw-device values are filtered for dot
notation interfaces when writing out the network config, I don't see
much harm here.

But should this approach be problematic for some reason that I have not
yet discovered, there is an alternative approach handling this in the
GUI only. Then the GUI would show the same information for both types of
vlan interfaces, but the API would stay the same.

widget-toolkit: Aaron Lauterer (1):
   ui: network: add columns for vlan-id and vlan-raw-device

  src/node/NetworkView.js | 14 ++
  1 file changed, 14 insertions(+)

pve-common:
  src/PVE/INotify.pm | 8 ++--
  1 file changed, 6 insertions(+), 2 deletions(-)








[pve-devel] [PATCH storage] Revert "workaround zfs create -V error for unaligned sizes"

2021-06-09 Thread Aaron Lauterer
https://github.com/zfsonlinux/zfs/issues/8541 is solved; the fix [0] is
part of OpenZFS 2.0. Since we only ship ZFS 2.0 with PVE 7, we should be
okay to remove our workaround.

[0] https://github.com/openzfs/zfs/commit/47c9299fcc9e5fb91d0b1636bfacc03bd3e98439

This reverts commit cdef3abb25984c369571626b38f97f92a0a2fd15.
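
For reference, the workaround being removed rounded the requested size
(in KiB) up to the next 1 MiB boundary; a quick illustration of that
arithmetic (example values, not from the patch):

    use strict;
    use warnings;

    my $size = 4097;                            # requested zvol size in KiB
    my $padding = (1024 - $size % 1024) % 1024; # 1023 KiB
    print $size + $padding, "\n";               # 5120 KiB, i.e. aligned to 1 MiB

With the revert, the requested size is no longer padded.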
---
 PVE/Storage/ZFSPoolPlugin.pm | 7 +--
 1 file changed, 1 insertion(+), 6 deletions(-)

diff --git a/PVE/Storage/ZFSPoolPlugin.pm b/PVE/Storage/ZFSPoolPlugin.pm
index 2e2abe3..f719f42 100644
--- a/PVE/Storage/ZFSPoolPlugin.pm
+++ b/PVE/Storage/ZFSPoolPlugin.pm
@@ -313,12 +313,7 @@ sub zfs_get_pool_stats {
 
 sub zfs_create_zvol {
 my ($class, $scfg, $zvol, $size) = @_;
-
-# always align size to 1M as workaround until
-# https://github.com/zfsonlinux/zfs/issues/8541 is solved
-my $padding = (1024 - $size % 1024) % 1024;
-$size = $size + $padding;
-
+
 my $cmd = ['create'];
 
 push @$cmd, '-s' if $scfg->{sparse};
-- 
2.30.2






Re: [pve-devel] [RFC storage 2/7] add disk rename feature

2021-06-09 Thread Aaron Lauterer

Thanks for the review :)

On 6/2/21 10:36 AM, Fabian Ebner wrote:

Am 01.06.21 um 18:10 schrieb Aaron Lauterer:

Functionality has been added for the following storage types:


diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index 93d09ce..6936abd 100755
--- a/PVE/Storage.pm
+++ b/PVE/Storage.pm
@@ -40,11 +40,11 @@ use PVE::Storage::ZFSPlugin;
  use PVE::Storage::PBSPlugin;
  # Storage API version. Increment it on changes in storage API interface.
-use constant APIVER => 8;
+use constant APIVER => 9;
  # Age is the number of versions we're backward compatible with.
  # This is like having 'current=APIVER' and age='APIAGE' in libtool,
# see https://www.gnu.org/software/libtool/manual/html_node/Libtool-versioning.html
-use constant APIAGE => 7;
+use constant APIAGE => 8;
  # load standard plugins
  PVE::Storage::DirPlugin->register();
@@ -355,6 +355,7 @@ sub volume_snapshot_needs_fsfreeze {
  #    snapshot - taking a snapshot is possible
  #    sparseinit - volume is sparsely initialized
  #    template - conversion to base image is possible
+#    rename - renaming volumes is possible
  # $snap - check if the feature is supported for a given snapshot
  # $running - if the guest owning the volume is running
  # $opts - hash with further options:
@@ -1849,6 +1850,20 @@ sub complete_volume {
  return $res;
  }
+sub rename_volume {
+    my ($cfg, $source_volid, $source_vmid, $target_volname, $target_vmid) = @_;


Can't the vmid arguments be dropped and extracted directly from the 
volid/volname instead? Would prevent potential mismatch for careless callers.


Good point
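
Something along these lines (sketch only, reusing the parse helpers
already called in the function below) should cover that:

    # sketch: derive the owning vmid from the volid instead of passing it in
    my ($storeid) = parse_volume_id($source_volid);
    my (undef, $source_volname, $source_vmid) = parse_volname($cfg, $source_volid);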




+
+    my ($storeid) = parse_volume_id($source_volid);
+    my (undef, $source_volname, undef, $base_name, $base_vmid, $isBase, $format) = parse_volname($cfg, $source_volid);
+
+    activate_storage($cfg, $storeid);
+
+    my $scfg = storage_config($cfg, $storeid);
+    my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
+


Check that target_volname is valid (by parsing) either here or within each 
plugin's implementation?


Will do




+    return $plugin->rename_volume($scfg, $storeid, $source_volname, $source_vmid, $target_volname, $target_vmid, $base_name, $base_vmid);


Similar here, not all plugins need all of those parameters. Isn't taking $scfg, 
$storeid, $source_volid, $target_volname enough and let the plugin itself 
extract additional information if needed?


I'll look into it




+}
+



diff --git a/PVE/Storage/Plugin.pm b/PVE/Storage/Plugin.pm
index 4a10a1f..4e6e288 100644
--- a/PVE/Storage/Plugin.pm
+++ b/PVE/Storage/Plugin.pm
@@ -939,6 +939,7 @@ sub volume_has_feature {
    snap => {qcow2 => 1} },
  sparseinit => { base => {qcow2 => 1, raw => 1, vmdk => 1},
  current => {qcow2 => 1, raw => 1, vmdk => 1} },
+    rename => { current =>{qcow2 => 1, raw => 1, vmdk => 1} },
  };
  # clone_image creates a qcow2 volume
@@ -946,6 +947,10 @@ sub volume_has_feature {
  defined($opts->{valid_target_formats}) &&
  !(grep { $_ eq 'qcow2' } @{$opts->{valid_target_formats}});
+    return 0 if $feature eq 'rename'
+    && $class->can('api')
+    && $class->api() < 9;


Style-nit: multiline post-if


+
  my ($vtype, $name, $vmid, $basename, $basevmid, $isBase, $format) =
  $class->parse_volname($volname);
@@ -1463,4 +1468,28 @@ sub volume_import_formats {
  return ();
  }
+sub rename_volume {
+    my ($class, $scfg, $storeid, $source_volname, $source_vmid, $target_volname, $target_vmid, $base_name, $base_vmid) = @_;
+    die "not implemented in storage plugin '$class'\n" if $class->can('api') && $class->api() < 9;
+
+    my $basedir = $class->get_subdir($scfg, 'images');
+    my $sourcedir = "${basedir}/${source_vmid}";
+    my $targetdir = "${basedir}/${target_vmid}";
+
+    mkpath $targetdir;
+
+    my (undef, $format, undef) = parse_name_dir($source_volname);
+    $target_volname = "${target_volname}.${format}";


Ideally, the $target_volname already contains the format suffix at this point. 
Otherwise, passing a $target_volname with a format suffix results in a second 
suffix.



I need to look into that as well and do a few tests.
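
One possible guard (illustrative only, not from the patch) would be to
append the format only when the target name does not already end in it:

    my (undef, $format, undef) = parse_name_dir($source_volname);
    $target_volname .= ".$format" if $target_volname !~ /\.\Q$format\E$/;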




[pve-devel] applied: [PATCH pve-eslint] set cwd of CLIEngine to process.cwd()

2021-06-09 Thread Thomas Lamprecht
On 09.06.21 13:49, Dominik Csapak wrote:
> it seems it now defaults to '/' as the current working dir, but we
> assume the cwd of the process
> 
> Signed-off-by: Dominik Csapak 
> ---
>  src/app.js | 1 +
>  1 file changed, 1 insertion(+)
> 
>

applied, thanks!

