[pve-devel] [PATCH V2 pve-common 2/4] Cgroups: remove LXC-specific code

2020-11-06 Thread Alexandre Derumier
--- src/PVE/CGroup.pm | 43 ++- 1 file changed, 6 insertions(+), 37 deletions(-) diff --git a/src/PVE/CGroup.pm b/src/PVE/CGroup.pm index 7e12af9..45b9e7c 100644 --- a/src/PVE/CGroup.pm +++ b/src/PVE/CGroup.pm @@ -22,8 +22,6 @@ use PVE::Tools qw( file_

[pve-devel] [PATCH V2 pve-common 3/4] bugfix: cpushares: default value is 1024 for cgroup v1

2020-11-06 Thread Alexandre Derumier
Currently, if we delete cpuunits (undef), the default value is 100 --- src/PVE/CGroup.pm | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/PVE/CGroup.pm b/src/PVE/CGroup.pm index 45b9e7c..71d0846 100644 --- a/src/PVE/CGroup.pm +++ b/src/PVE/CGroup.pm @@ -472,7 +472,7 @@ sub cha
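
For illustration, a minimal sketch of the defaulting this fix implies (hypothetical code, not the one-line change in src/PVE/CGroup.pm): on cgroup v1 the kernel default for cpu.shares is 1024, so an undefined (deleted) cpuunits value has to fall back to 1024, not 100.

    # Hypothetical sketch: not the actual PVE::CGroup code.
    sub change_cpu_shares {
        my ($cgroup_path, $shares) = @_;
        $shares //= 1024;    # kernel default for cpu.shares; previously fell back to 100
        open(my $fh, '>', "$cgroup_path/cpu.shares")
            or die "unable to open $cgroup_path/cpu.shares: $!\n";
        print {$fh} "$shares\n";
        close($fh) or die "unable to write cpu.shares: $!\n";
    }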

[pve-devel] [PATCH V2 pve-common 4/4] systemd: add CPUWeight encoding

2020-11-06 Thread Alexandre Derumier
--- src/PVE/Systemd.pm | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/PVE/Systemd.pm b/src/PVE/Systemd.pm index 85b35a3..ed13a60 100644 --- a/src/PVE/Systemd.pm +++ b/src/PVE/Systemd.pm @@ -105,7 +105,7 @@ sub enter_systemd_scope { foreach my $key (keys %extra) {
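
As a rough, hypothetical illustration of what the encoding change likely amounts to (the actual diff in PVE/Systemd.pm is not fully visible here): systemd scope properties sent over D-Bus carry a type code, and CPUWeight (cgroup v2) is a 64-bit unsigned integer, just like the cgroup v1 CPUShares property.

    # Hypothetical sketch: D-Bus variant type codes per scope property
    # ('s' = string, 't' = uint64). CPUWeight is assumed to need the same
    # uint64 encoding as CPUShares.
    my $property_dbus_types = {
        Slice     => 's',
        KillMode  => 's',
        CPUShares => 't',    # cgroup v1
        CPUWeight => 't',    # cgroup v2, added by this patch
    };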

[pve-devel] [PATCH V2 pve-common 0/4] add generic CGroup module

2020-11-06 Thread Alexandre Derumier
This moves the CGroup module from LXC to pve-common, to be able to use it in qemu-server too (and adds support for cgroupv2, only for qemu). I have also included a bugfix for cpushares on cgroup v1: when the value is not defined, it's currently set to 100 instead of 1024. (can be triggered by pct --delete cpuu
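
For readers following along, a hedged sketch of one common way to tell a cgroup-v2-only (unified) host apart from a v1/hybrid one, which is the situation the qemu-server side now has to handle; the detection method shown is an assumption, not necessarily what PVE::CGroup does.

    # Hypothetical sketch: on a pure cgroup v2 host the unified hierarchy is
    # mounted directly at /sys/fs/cgroup and exposes cgroup.controllers there.
    sub guess_cgroup_mode {
        return 2 if -e '/sys/fs/cgroup/cgroup.controllers';
        return 1;
    }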

[pve-devel] [PATCH V2 pve-common 1/4] move PVE::LXC::CGroup to PVE::CGroup

2020-11-06 Thread Alexandre Derumier
--- src/Makefile | 1 + src/PVE/CGroup.pm | 582 ++ 2 files changed, 583 insertions(+) create mode 100644 src/PVE/CGroup.pm diff --git a/src/Makefile b/src/Makefile index 1987d0e..b2a4ac6 100644 --- a/src/Makefile +++ b/src/Makefile @@ -13,6 +13

[pve-devel] applied-series: [PATCH V2 pve-common 0/4] add generic CGroup module

2020-11-06 Thread Thomas Lamprecht
On 06.11.20 09:24, Alexandre Derumier wrote: > This moves the CGroup module from LXC to pve-common, > to be able to use it in qemu-server too > (and adds support for cgroupv2, only for qemu). > > I have also included a bugfix for cpushares on cgroup v1: > when the value is not defined, it's currently set to

Re: [pve-devel] [PATCH manager] restore default value of 0 for remove/maxfiles

2020-11-06 Thread Fabian Ebner
AFAICT the previous behavior is maxfiles = 1 when it's not set anywhere. That's the default value in the VZDump schema. And that should happen a bit below in the code: if (!defined($opts->{'prune-backups'})) { my $maxfiles = delete $opts->{maxfiles} // $defaults->{maxfiles};
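
For anyone not fluent in the Perl idiom quoted above: // is the defined-or operator, so the schema default only applies when maxfiles was not set anywhere. A standalone illustration (values chosen to mirror the discussion, not taken from the code):

    # Standalone illustration of the defined-or fallback in the quoted snippet.
    my $defaults = { maxfiles => 1 };   # VZDump schema default, per the mail above
    my $opts     = {};                  # neither 'remove' nor 'maxfiles' given

    if (!defined($opts->{'prune-backups'})) {
        my $maxfiles = delete $opts->{maxfiles} // $defaults->{maxfiles};
        print "maxfiles resolves to $maxfiles\n";   # prints 1 here
    }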

[pve-devel] applied: [PATCH manager] restore default value of 0 for remove/maxfiles

2020-11-06 Thread Thomas Lamprecht
On 05.11.20 16:21, Stefan Reiter wrote: > If neither the 'remove' option of vzdump nor the 'maxfiles' option in > the storage config are set, assume a value of 0, i.e. do not delete > anything and allow unlimited backups. > > Restores previous behaviour that was broken in 7ab7d6f15f. > > Also fix

[pve-devel] [PATCH i18n 0/4] de: Update translation

2020-11-06 Thread Dominic Jäger
Split into less and more disputable parts. Feel free to squash. Dominic Jäger (4): de: Add missing and outdated translation de: Add translation for "Task" de: Add translation for "Service" de: Add translation for "transfer rate" de.po | 125 ---

[pve-devel] [PATCH i18n 4/4] de: Add translation for "transfer rate"

2020-11-06 Thread Dominic Jäger
Signed-off-by: Dominic Jäger --- de.po | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/de.po b/de.po index fffa3cc..7d14761 100644 --- a/de.po +++ b/de.po @@ -5195,7 +5195,7 @@ msgstr "" #: proxmox-backup/www/ServerStatus.js:158 msgid "Root Disk Transfer Rate (bytes/se

[pve-devel] [PATCH i18n 2/4] de: Add translation for "Task"

2020-11-06 Thread Dominic Jäger
Signed-off-by: Dominic Jäger --- de.po | 22 ++ 1 file changed, 10 insertions(+), 12 deletions(-) diff --git a/de.po b/de.po index 9df3cd4..e3f7e6e 100644 --- a/de.po +++ b/de.po @@ -3256,9 +3256,8 @@ msgstr "Logs" #: proxmox-backup/www/Dashboard.js:316 #: proxmox-backup/

[pve-devel] [PATCH i18n 1/4] de: Add missing and outdated translations

2020-11-06 Thread Dominic Jäger
Signed-off-by: Dominic Jäger --- de.po | 91 +-- 1 file changed, 45 insertions(+), 46 deletions(-) diff --git a/de.po b/de.po index a9a65df..9df3cd4 100644 --- a/de.po +++ b/de.po @@ -267,7 +267,7 @@ msgstr "Alle Funktionen" #: proxmox-b

[pve-devel] [PATCH i18n 3/4] de: Add translation for "Service"

2020-11-06 Thread Dominic Jäger
Signed-off-by: Dominic Jäger --- de.po | 8 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/de.po b/de.po index e3f7e6e..fffa3cc 100644 --- a/de.po +++ b/de.po @@ -3990,7 +3990,7 @@ msgstr "Keine laufenden Aufgaben" #: pve-manager/www/manager6/ceph/ServiceList.js:55 msg

[pve-devel] [PATCH container v2 2/2] snapshot creation: fsfreeze mountpoints, if needed

2020-11-06 Thread Stoiko Ivanov
fixes #2991, #2528. Creating a snapshot with RBD after syncfs finished successfully does not guarantee that the snapshot has the state of the filesystem after syncfs. Suggestion taken from #2528 (running fsfreeze -f/-u on the mountpoints before snapshotting). Added helper PVE::Storage::volum
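
A hedged sketch of the overall flow this describes: freeze every mountpoint that needs it, take the snapshot, and always thaw again, even on error. Helper names are placeholders, not the pve-container API.

    # Hypothetical sketch of the freeze -> snapshot -> thaw flow.
    sub snapshot_with_fsfreeze {
        my ($freeze, $thaw, $take_snapshot, @mountpoints) = @_;

        $freeze->($_) for @mountpoints;     # fsfreeze --freeze
        eval { $take_snapshot->() };
        my $err = $@;
        $thaw->($_) for @mountpoints;       # fsfreeze --unfreeze, even if the snapshot failed
        die $err if $err;
    }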

[pve-devel] [PATCH container v2 1/2] add fsfreeze helper:

2020-11-06 Thread Stoiko Ivanov
fsfreeze_mountpoint issues the same ioctls as fsfreeze(8) on the provided directory (the $thaw parameter deciding between '--freeze' and '--unfreeze'). This is used for container backups on RBD, where snapshots on containers which are heavy on IO are not mountable read-only, because the ext4 is n
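
A minimal sketch of what such a helper can look like, assuming the FIFREEZE/FITHAW ioctl numbers from <linux/fs.h> (0xc0045877 / 0xc0045878 on the common architectures); this is an illustration, not the exact pve-container helper.

    # Hypothetical sketch of an fsfreeze helper built on the FIFREEZE/FITHAW ioctls.
    use Fcntl qw(O_RDONLY);

    use constant {
        FIFREEZE => 0xc0045877,   # _IOWR('X', 119, int)
        FITHAW   => 0xc0045878,   # _IOWR('X', 120, int)
    };

    sub fsfreeze_mountpoint {
        my ($mountpoint, $thaw) = @_;

        sysopen(my $fh, $mountpoint, O_RDONLY)
            or die "failed to open $mountpoint: $!\n";
        my $ret = ioctl($fh, $thaw ? FITHAW : FIFREEZE, 0);
        close($fh);
        die "fsfreeze ioctl on $mountpoint failed: $!\n" if !$ret;
    }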

[pve-devel] [PATCH container/storage v2] add fsfreeze/thaw for rbd snapshots

2020-11-06 Thread Stoiko Ivanov
this patchset addresses #2991 and #2528. v1->v2: mostly incorporated Thomas' feedback (huge thanks!!): * moved fsfreeze from pve-common to pve-container (it's only used here, and it avoids one versioned dependency bump). * for this, needed to drop the O_CLOEXEC flag (only defined in PVE::Tools) from

[pve-devel] [PATCH storage v2 2/2] add check for fsfreeze before snapshot

2020-11-06 Thread Stoiko Ivanov
In order to take a snapshot of a container volume that can be mounted read-only with RBD, the volume needs to be frozen (fsfreeze(8)) before taking the snapshot. This commit adds helpers to determine if the FIFREEZE ioctl needs to be called for the volume. Signed-off-by: Stoiko Ivanov --- PV
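
A hedged sketch of the plugin-level shape this suggests: a predicate in the base storage plugin that defaults to 'no freeze needed' and is overridden where required. Package and method names here are illustrative, not the actual PVE::Storage API.

    # Hypothetical sketch of a per-plugin "does this snapshot need fsfreeze?" hook.
    package ExampleBasePlugin;
    sub snapshot_needs_fsfreeze { return 0; }   # default: no freeze required

    package ExampleRBDPlugin;
    our @ISA = ('ExampleBasePlugin');
    # RBD container volumes are assumed to need a freeze so the snapshot can
    # later be mounted read-only for backup.
    sub snapshot_needs_fsfreeze { return 1; }

    print ExampleRBDPlugin->snapshot_needs_fsfreeze(), "\n";   # prints 1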

[pve-devel] [PATCH storage v2 1/2] fix typo in comment

2020-11-06 Thread Stoiko Ivanov
Signed-off-by: Stoiko Ivanov --- PVE/Storage.pm | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/PVE/Storage.pm b/PVE/Storage.pm index cd7b5ff..ad10827 100755 --- a/PVE/Storage.pm +++ b/PVE/Storage.pm @@ -40,7 +40,7 @@ use PVE::Storage::ZFSPlugin; use PVE::Storage::DRBDPlugin;

[pve-devel] [PATCH container v3 2/2] snapshot creation: fsfreeze mountpoints, if needed

2020-11-06 Thread Stoiko Ivanov
fixes #2991, #2528. Creating a snapshot with RBD after syncfs finished successfully does not guarantee that the snapshot has the state of the filesystem after syncfs. Suggestion taken from #2528 (running fsfreeze -f/-u on the mountpoints before snapshotting). Added helper PVE::Storage::volum

[pve-devel] [PATCH storage v3 2/2] add check for fsfreeze before snapshot

2020-11-06 Thread Stoiko Ivanov
In order to take a snapshot of a container volume that can be mounted read-only with RBD, the volume needs to be frozen (fsfreeze(8)) before taking the snapshot. This commit adds helpers to determine if the FIFREEZE ioctl needs to be called for the volume. Signed-off-by: Stoiko Ivanov --- PV

[pve-devel] [PATCH container v3 1/2] add fsfreeze helper:

2020-11-06 Thread Stoiko Ivanov
fsfreeze_mountpoint issues the same ioctls as fsfreeze(8) on the provided directory (the $thaw parameter deciding between '--freeze' and '--unfreeze'). This is used for container backups on RBD, where snapshots on containers which are heavy on IO are not mountable read-only, because the ext4 is no

[pve-devel] [PATCH storage v3 1/2] fix typo in comment

2020-11-06 Thread Stoiko Ivanov
Signed-off-by: Stoiko Ivanov --- PVE/Storage.pm | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/PVE/Storage.pm b/PVE/Storage.pm index cd7b5ff..ad10827 100755 --- a/PVE/Storage.pm +++ b/PVE/Storage.pm @@ -40,7 +40,7 @@ use PVE::Storage::ZFSPlugin; use PVE::Storage::DRBDPlugin;

[pve-devel] [PATCH container/storage v3] add fsfreeze/thaw for rbd snapshots

2020-11-06 Thread Stoiko Ivanov
this patchset addresses #2991 and #2528. v2->v3: * incorporated Wolfgang's feedback (huge thanks!!): ** /proc/$pid/root is a magic-link [0] to the container's rootfs - so use it for the FIFREEZE/FITHAW ioctls instead of fork+nsenter ** thus moved fsfreeze_mountpoint to PVE::LXC::C
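
A hedged sketch of the /proc based approach mentioned above: instead of fork+nsenter, resolve the mountpoint through the container init's /proc/$pid/root magic-link and issue the ioctl from the host. The ioctl number and error handling are assumptions for illustration.

    # Hypothetical sketch: freeze a container mountpoint via /proc/$pid/root.
    use Fcntl qw(O_RDONLY);
    use constant FIFREEZE => 0xc0045877;   # <linux/fs.h>, common architectures

    sub freeze_container_mountpoint {
        my ($init_pid, $mountpoint) = @_;

        my $path = "/proc/$init_pid/root/$mountpoint";
        sysopen(my $fh, $path, O_RDONLY) or die "failed to open $path: $!\n";
        ioctl($fh, FIFREEZE, 0) or die "FIFREEZE on $path failed: $!\n";
        close($fh);
    }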

[pve-devel] [RFC v2 container 3/5] deactivate volumes after storage_migrate

2020-11-06 Thread Fabian Ebner
Signed-off-by: Fabian Ebner --- This is probably not worth it, for two reasons: 1. only local unused volumes are not already deactivated by the existing code; 2. if nothing else goes wrong, the volumes migrated with storage_migrate will be deleted anyway src/PVE/LXC/Migrate.pm | 5 + 1 file
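
For context, a hedged sketch of what deactivating after storage_migrate amounts to in the migration code, assuming the current PVE::Storage function signatures; this is not the actual src/PVE/LXC/Migrate.pm hunk.

    # Hypothetical sketch: migrate a local volume, then drop its local activation.
    use PVE::Storage;

    sub migrate_and_deactivate_volume {
        my ($storecfg, $volid, $ssh_info, $targetsid, $opts, $logfunc) = @_;

        PVE::Storage::storage_migrate($storecfg, $volid, $ssh_info, $targetsid, $opts, $logfunc);

        # Only warn on failure: the local copy is going to be removed anyway.
        eval { PVE::Storage::deactivate_volumes($storecfg, [$volid]) };
        warn "volume deactivation failed: $@\n" if $@;
    }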

[pve-devel] [PATCH v2 storage 1/5] fix #3030: always activate volumes in storage_migrate

2020-11-06 Thread Fabian Ebner
AFAICT the snapshot activation is not necessary for our plugins at the moment, but it doesn't really hurt and might be relevant in the future or for external plugins. Deactivating volumes is up to the caller because, for example, for replication on a running guest we obviously don't want to deact
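
A hedged sketch of the activation side described here: make sure the volume (and the snapshot, where one is used) is active before exporting, and leave deactivation to the caller. Function names are from PVE::Storage; the placement inside storage_migrate is paraphrased, not quoted.

    # Hypothetical sketch of the activation step inside a storage_migrate-like path.
    use PVE::Storage;

    sub prepare_export {
        my ($storecfg, $volid, $snapshot) = @_;

        # Activate before export; the caller decides when (or whether) to deactivate.
        PVE::Storage::activate_volumes($storecfg, [$volid], $snapshot);
    }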

[pve-devel] [RFC v2 qemu-server 4/5] adapt to new storage_migrate activation behavior

2020-11-06 Thread Fabian Ebner
Offline migrated volumes are now activated within storage_migrate. Online migrated volumes can be assumed to be already active. Signed-off-by: Fabian Ebner --- dependency bump needed Sent as RFC, because I'm not completely sure if this is fine here. Is the assumption about online volumes correct

[pve-devel] [RFC v2 qemu-server 5/5] deactivate volumes after storage_migrate

2020-11-06 Thread Fabian Ebner
Signed-off-by: Fabian Ebner --- same comment as for the corresponding LXC patch PVE/QemuMigrate.pm | 5 + 1 file changed, 5 insertions(+) diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm index f2c2b07..10cf31a 100644 --- a/PVE/QemuMigrate.pm +++ b/PVE/QemuMigrate.pm @@ -569,6 +569,11 @

[pve-devel] [PATCH v2 container 2/5] adapt to new storage_migrate activation behavior

2020-11-06 Thread Fabian Ebner
Every local volume is migrated via storage_migrate and activated there, so there is no need to do it in prepare() anymore. Signed-off-by: Fabian Ebner --- dependency bump needed I only found run_replication as a potential place that might need active local volumes, but that also uses storage_mi

[pve-devel] partially-applied: [PATCH i18n 0/4] de: Update translation

2020-11-06 Thread Thomas Lamprecht
On 06.11.20 11:47, Dominic Jäger wrote: > Split into less and more disputable parts. Feel free to squash. > > Dominic Jäger (4): > de: Add missing and outdated translation > de: Add translation for "Task" > de: Add translation for "Service" > de: Add translation for "transfer rate" > > d