---
src/PVE/CGroup.pm | 43 ++-
1 file changed, 6 insertions(+), 37 deletions(-)
diff --git a/src/PVE/CGroup.pm b/src/PVE/CGroup.pm
index 7e12af9..45b9e7c 100644
--- a/src/PVE/CGroup.pm
+++ b/src/PVE/CGroup.pm
@@ -22,8 +22,6 @@ use PVE::Tools qw(
file_
Currently, if we delete cpuunits (undef), the default value is 100
---
src/PVE/CGroup.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/PVE/CGroup.pm b/src/PVE/CGroup.pm
index 45b9e7c..71d0846 100644
--- a/src/PVE/CGroup.pm
+++ b/src/PVE/CGroup.pm
@@ -472,7 +472,7 @@ sub cha
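A minimal sketch of the fixed default handling, assuming a cgroup v1 'cpu.shares' file next to a v2 'cpu.weight' file; the function signature and the use of PVE::Tools::file_set_contents are illustrative, not necessarily the module's actual interface:

    use strict;
    use warnings;
    use PVE::Tools;

    # The kernel default for cgroup v1 cpu.shares is 1024; only cgroup
    # v2's cpu.weight defaults to 100, so an undef value must not fall
    # through to 100 on a v1 hierarchy.
    sub change_cpu_shares {
        my ($path, $cgroup_version, $shares) = @_;
        if ($cgroup_version == 1) {
            $shares //= 1024;    # restore the v1 kernel default
            PVE::Tools::file_set_contents("$path/cpu.shares", "$shares\n");
        } else {
            $shares //= 100;     # v2 default weight
            PVE::Tools::file_set_contents("$path/cpu.weight", "$shares\n");
        }
    }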
---
src/PVE/Systemd.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/PVE/Systemd.pm b/src/PVE/Systemd.pm
index 85b35a3..ed13a60 100644
--- a/src/PVE/Systemd.pm
+++ b/src/PVE/Systemd.pm
@@ -105,7 +105,7 @@ sub enter_systemd_scope {
foreach my $key (keys %extra) {
This moves the CGroup module from LXC to pve-common,
so it can be used in qemu-server too
(and adds cgroup v2 support, for qemu only).
I have also included a bugfix for cpushares on cgroup v1:
when the value is not defined, it is currently set to 100 instead of 1024.
(can be triggered by pct --delete cpuu
---
src/Makefile | 1 +
src/PVE/CGroup.pm | 582 ++
2 files changed, 583 insertions(+)
create mode 100644 src/PVE/CGroup.pm
diff --git a/src/Makefile b/src/Makefile
index 1987d0e..b2a4ac6 100644
--- a/src/Makefile
+++ b/src/Makefile
@@ -13,6 +13
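For the qemu-only cgroup v2 support the module needs to know which hierarchy the host runs. A hedged sketch of one common probe; the real module may well detect this differently (e.g. via the cgroup2 filesystem magic):

    # On a pure cgroup v2 host, /sys/fs/cgroup itself is a cgroup2 mount
    # and exposes cgroup.controllers at its root; on a legacy v1 host it
    # is a tmpfs holding the per-controller hierarchies.
    sub cgroup_mode {
        return 2 if -e '/sys/fs/cgroup/cgroup.controllers';
        return 1;    # legacy (or hybrid) layout
    }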
On 06.11.20 09:24, Alexandre Derumier wrote:
> This moves the CGroup module from LXC to pve-common,
> so it can be used in qemu-server too
> (and adds cgroup v2 support, for qemu only).
>
> I have also included a bugfix for cpushares on cgroup v1:
> when the value is not defined, it is currently set to
AFAICT the previous behavior is maxfiles = 1 when it's not set anywhere.
That's the default value in the VZDump schema.
And that should happen a bit below in the code:
if (!defined($opts->{'prune-backups'})) {
    my $maxfiles = delete $opts->{maxfiles} // $defaults->{maxfiles};
On 05.11.20 16:21, Stefan Reiter wrote:
> If neither the 'remove' option of vzdump nor the 'maxfiles' option in
> the storage config is set, assume a value of 0, i.e. do not delete
> anything and allow unlimited backups.
>
> Restores previous behaviour that was broken in 7ab7d6f15f.
>
> Also fix
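A sketch of the fallback described in the commit message, assuming option hashes shaped like the snippet quoted above; everything beyond those two quoted lines is an assumption:

    if (!defined($opts->{'prune-backups'})) {
        # neither vzdump's 'remove' nor the storage's 'maxfiles' set a
        # limit: fall back to 0, i.e. keep an unlimited number of backups
        my $maxfiles = delete $opts->{maxfiles} // $defaults->{maxfiles} // 0;
        $opts->{'prune-backups'} = { 'keep-last' => $maxfiles } if $maxfiles;
    }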
Split into less and more disputable parts. Feel free to squash.
Dominic Jäger (4):
de: Add missing and outdated translation
de: Add translation for "Task"
de: Add translation for "Service"
de: Add translation for "transfer rate"
de.po | 125 ---
Signed-off-by: Dominic Jäger
---
de.po | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/de.po b/de.po
index fffa3cc..7d14761 100644
--- a/de.po
+++ b/de.po
@@ -5195,7 +5195,7 @@ msgstr ""
#: proxmox-backup/www/ServerStatus.js:158
msgid "Root Disk Transfer Rate (bytes/se
Signed-off-by: Dominic Jäger
---
de.po | 22 ++
1 file changed, 10 insertions(+), 12 deletions(-)
diff --git a/de.po b/de.po
index 9df3cd4..e3f7e6e 100644
--- a/de.po
+++ b/de.po
@@ -3256,9 +3256,8 @@ msgstr "Logs"
#: proxmox-backup/www/Dashboard.js:316
#: proxmox-backup/
Signed-off-by: Dominic Jäger
---
de.po | 91 +--
1 file changed, 45 insertions(+), 46 deletions(-)
diff --git a/de.po b/de.po
index a9a65df..9df3cd4 100644
--- a/de.po
+++ b/de.po
@@ -267,7 +267,7 @@ msgstr "Alle Funktionen"
#: proxmox-b
Signed-off-by: Dominic Jäger
---
de.po | 8
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/de.po b/de.po
index e3f7e6e..fffa3cc 100644
--- a/de.po
+++ b/de.po
@@ -3990,7 +3990,7 @@ msgstr "Keine laufenden Aufgaben"
#: pve-manager/www/manager6/ceph/ServiceList.js:55
msg
fixes #2991, #2528.
with RBD, creating a snapshot after syncfs has finished successfully does not
guarantee that the snapshot has the state of the filesystem after syncfs.
suggestion taken from #2528 (running fsfreeze -f/-u before snapshotting on
the mountpoints)
added helper PVE::Storage::volum
fsfreeze_mountpoint issues the same ioctls as fsfreeze(8) on the provided
directory (the $thaw parameter deciding between '--freeze' and '--unfreeze')
This is used for container backups on RBD, where snapshots on containers,
which are heavy on IO, are not mountable readonly, because the ext4 is n
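A self-contained sketch of such a helper; the FIFREEZE/FITHAW request numbers are the standard values from linux/fs.h, the rest (name, error handling) is illustrative:

    use strict;
    use warnings;
    use Fcntl qw(O_RDONLY O_DIRECTORY);

    use constant {
        FIFREEZE => 0xc0045877,    # _IOWR('X', 119, int)
        FITHAW   => 0xc0045878,    # _IOWR('X', 120, int)
    };

    sub fsfreeze_mountpoint {
        my ($mountpoint, $thaw) = @_;
        my $request = $thaw ? FITHAW : FIFREEZE;
        sysopen(my $fh, $mountpoint, O_RDONLY | O_DIRECTORY)
            or die "failed to open $mountpoint: $!\n";
        my $ok = ioctl($fh, $request, 0);
        close($fh);
        die "fsfreeze ioctl on $mountpoint failed: $!\n" if !$ok;
    }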
this patchset addresses #2991 and #2528.
v1->v2:
mostly incorporated Thomas' feedback (huge thanks!!):
* moved fsfreeze from pve-common to pve-container (it's only used here, and
it avoids one versioned dependency bump).
* for this, I needed to drop the O_CLOEXEC flag (only defined in PVE::Tools) from
In order to take a snapshot of a container volume, which can be mounted
read-only with RBD, the volume needs to be frozen (fsfreeze(8)) before taking
the snapshot.
This commit adds helpers to determine if the FIFREEZE ioctl needs to be called
for the volume.
Signed-off-by: Stoiko Ivanov
---
PV
Signed-off-by: Stoiko Ivanov
---
PVE/Storage.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index cd7b5ff..ad10827 100755
--- a/PVE/Storage.pm
+++ b/PVE/Storage.pm
@@ -40,7 +40,7 @@ use PVE::Storage::ZFSPlugin;
use PVE::Storage::DRBDPlugin;
this patchset addresses #2991 and #2528.
v2->v3:
* incorporated Wolfgang's feedback (huge thanks!!):
** /proc/$pid/root contains a magic-link [0] to the container's rootfs -
so use these for the FIFREEZE/FITHAW ioctls instead of fork+nsenter
** thus moved fsfreeze_mountpoint to PVE::LXC::C
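Illustrating the /proc/$pid/root approach with the helper sketched further up; the wrapper name is hypothetical:

    # Paths under /proc/$pid/root resolve inside the container's mount
    # namespace, so the host can freeze a container filesystem without
    # fork+nsenter.
    sub fsfreeze_container_mountpoint {
        my ($pid, $mountpoint, $thaw) = @_;
        fsfreeze_mountpoint("/proc/$pid/root/$mountpoint", $thaw);
    }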
Signed-off-by: Fabian Ebner
---
This is probably not worth it, for two reasons:
1. only local unused volumes are not already deactivated by the existing code
2. if nothing else goes wrong, the volumes migrated with storage_migrate
will be deleted anyway
src/PVE/LXC/Migrate.pm | 5 +
1 file
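A one-line sketch of the cleanup in question, using PVE::Storage's existing deactivation helper; the surrounding variables are assumptions:

    # deactivate the migrated local volumes on the source node once the
    # migration has finished successfully
    PVE::Storage::deactivate_volumes($self->{storecfg}, $volids);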
AFAICT the snapshot activation is not necessary for our plugins at the moment,
but it doesn't really hurt and might be relevant in the future or for external
plugins.
Deactivating volumes is up to the caller because, for example, for replication
on a running guest, we obviously don't want to deact
Offline migrated volumes are now activated within storage_migrate.
Online migrated volumes can be assumed to be already active.
Signed-off-by: Fabian Ebner
---
dependency bump needed
Sent as RFC, because I'm not completely sure if this is fine here.
Is the assumption about online volumes correct?
Signed-off-by: Fabian Ebner
---
same comment as for the corresponding LXC patch
PVE/QemuMigrate.pm | 5 +
1 file changed, 5 insertions(+)
diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index f2c2b07..10cf31a 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -569,6 +569,11 @
Every local volume is migrated via storage_migrate and activated there,
so there is no need to do it in prepare() anymore.
Signed-off-by: Fabian Ebner
---
dependency bump needed
I only found run_replication as a potential place that might need
active local volumes, but that also uses storage_mi
On 06.11.20 11:47, Dominic Jäger wrote:
> Split into less and more disputable parts. Feel free to squash.
>
> Dominic Jäger (4):
> de: Add missing and outdated translation
> de: Add translation for "Task"
> de: Add translation for "Service"
> de: Add translation for "transfer rate"
>
> d