On 10/19/20 2:38 PM, Thomas Lamprecht wrote:
(FYI: forgot to hit reply-all, so resending this for the list)
On 19.10.20 12:53, Fabian Ebner wrote:
If a guest is removed without purge, the ID will remain
in the backup configuration. Avoid using the variable $node
when it is potentially undefined
Hi,
On a fully patched CentOS 8 Stream box I get only the major release number. The
whole purpose of the Stream system is that there are no minor versions; all
packages go directly into the major version for continuous delivery.
root@ansible:~# rpm -ql centos-stream-release
/etc/cent
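
For reference, a minimal standalone sketch of a release-string match that
tolerates the missing minor version on Stream (illustration only, not the
actual CentOS.pm change):

#!/usr/bin/perl
use strict;
use warnings;

# Example release strings: classic CentOS has major.minor, Stream only major.
for my $line (
    "CentOS Linux release 8.2.2004 (Core)",
    "CentOS Stream release 8",
) {
    if ($line =~ m/release\s+(\d+)(?:\.(\d+))?/) {
        my ($major, $minor) = ($1, $2 // 0);
        print "major=$major minor=$minor\n";
    } else {
        warn "unable to parse release string: $line\n";
    }
}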
this fixes an issue where a rogue running backup would upload the vm
config of a later backup in a backup job
instead, that directory now gets deleted and the config is no longer
available
we cannot really keep those directories around until the end of the
backup job, since we temporarily save c
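
Purely as an illustration of the cleanup described above (path and helper
name are made up, not the actual vzdump code):

use strict;
use warnings;
use File::Path qw(remove_tree);

# Remove the per-backup temporary config directory as soon as the task is
# done, so a stale, still-running backup cannot pick up a newer job's config.
sub cleanup_tmp_config_dir {
    my ($vmid) = @_;
    my $dir = "/var/tmp/vzdumptmp-example/$vmid";    # hypothetical location
    remove_tree($dir, { error => \my $errors });
    warn "could not fully remove $dir\n" if $errors && @$errors;
}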
if the 'backup' qmp call itself times out or fails, we still want to
try to cancel the backup, else it can happen that there is still
a backup running even when vzdump thinks it was canceled
the qapi docs say that backup-cancel always returns success, even
if no backup is running
since we hold a glo
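
A sketch of that error path, assuming the usual mon_cmd() helper from
PVE::QemuServer::Monitor (the real call sites in PVE/VZDump/QemuServer.pm
differ in detail):

use strict;
use warnings;
use PVE::QemuServer::Monitor qw(mon_cmd);    # available on a PVE node

# $vmid is the guest ID, %args are the QMP 'backup' arguments vzdump assembles
sub start_backup_with_cancel_on_error {
    my ($vmid, %args) = @_;
    eval { mon_cmd($vmid, 'backup', %args) };
    if (my $err = $@) {
        # per the QAPI docs, backup-cancel always returns success,
        # even if no backup is currently running
        eval { mon_cmd($vmid, 'backup-cancel') };
        warn "backup-cancel failed: $@" if $@;
        die "backup failed: $err";
    }
}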
(FYI: forgot to hit reply-all, so resending this for the list)
On 19.10.20 12:53, Fabian Ebner wrote:
> If a guest is removed without purge, the ID will remain
> in the backup configuration. Avoid using the variable $node
> when it is potentially undefined. Instead, skip non-existing
> guests and
A user with Datastore.AllocateSpace, VM.Audit, VM.Backup
privileges can already remove backups from the GUI manually,
so it shouldn't be a problem if they can set the remove flag
when starting a manual vzdump job in the GUI.
Signed-off-by: Fabian Ebner
---
www/manager6/window/Backup.js | 17
The initial default from the $confdesc is 1 anyway, and like
this, changing the default in /etc/vzdump.conf to 0 actually works.
Signed-off-by: Fabian Ebner
---
PVE/VZDump.pm | 2 --
1 file changed, 2 deletions(-)
diff --git a/PVE/VZDump.pm b/PVE/VZDump.pm
index 542228d6..3ccb8269 100644
--- a
Signed-off-by: Fabian Ebner
---
PVE/VZDump/Common.pm | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/PVE/VZDump/Common.pm b/PVE/VZDump/Common.pm
index 63a4689..6ae35e6 100644
--- a/PVE/VZDump/Common.pm
+++ b/PVE/VZDump/Common.pm
@@ -221,7 +221,8 @@ my $confdesc = {
}),
On 17.10.20 15:45, Achim Dreyer wrote:
> Signed-off-by: Achim Dreyer
> ---
> src/PVE/LXC/Setup/CentOS.pm | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/src/PVE/LXC/Setup/CentOS.pm b/src/PVE/LXC/Setup/CentOS.pm
> index 0825273..77eb6f7 100644
> --- a/src/PVE/LXC/Setup/Ce
Ignore shutdowns triggered from within the guest in favor of detecting
them via qmeventd and stopping the QEMU process that way.
Signed-off-by: Stefan Reiter
---
v2:
* as part of rebase: include newer tests (bootorder) in update
PVE/QemuServer.pm | 2 ++
tes
'alarm' is used to schedule an additional cleanup round 5 seconds after
sending SIGTERM via terminate_client. This then sends SIGKILL via a
pidfd (if supported by the kernel) or directly via kill, making sure
that the QEMU process is *really* dead and won't be left behind in an
undetermined state.
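
qmeventd itself is written in C and uses a pidfd (pidfd_send_signal) where
the kernel supports it; just to illustrate the TERM-then-KILL escalation, a
rough Perl sketch:

use strict;
use warnings;

# The real daemon drives this via alarm() in its event loop; this only
# shows the escalation logic itself.
sub terminate_with_fallback {
    my ($pid, $grace) = @_;
    $grace //= 5;    # seconds between SIGTERM and SIGKILL

    kill 'TERM', $pid or return;    # process already gone (or not ours)

    for (1 .. $grace) {
        return unless kill(0, $pid);    # exited cleanly in the meantime
        sleep 1;
    }
    kill 'KILL', $pid;    # still alive after the grace period: force it
}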
We take care of killing QEMU processes when a guest shuts down manually.
QEMU will not exit by itself if started with -no-shutdown, but it will
still emit a "SHUTDOWN" event, which we await and then send SIGTERM.
This additionally allows us to handle backups in such situations. A
vzdump instance wil
If a guest's QEMU process is 'running', but QMP says 'shutdown' or
'prelaunch', the VM is ready to be booted anew, so we can show the
button.
The 'shutdown' button is intentionally not touched, as we always want to
give the user the ability to 'stop' a VM (and thus kill any potentially
leftover pr
Use QEMU's -no-shutdown argument so the QEMU instance stays alive even if the
guest shuts down. This allows running backups to continue.
To handle cleanup of QEMU processes, this series extends the qmeventd to handle
SHUTDOWN events not just for detecting guest triggered shutdowns, but also to
cle
When the VM is in status 'shutdown', i.e. after the guest issues a
powerdown while a backup is running, QEMU requires a 'system_reset' to
be issued before 'cont' can boot the guest again.
Additionally, when the VM has been powered down during a backup, the
logically correct call would be a 'vm_sta
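
A sketch of that resume path using mon_cmd (the actual qemu-server logic
checks more states than shown here):

use strict;
use warnings;
use PVE::QemuServer::Monitor qw(mon_cmd);

sub resume_after_backup_powerdown {
    my ($vmid) = @_;
    my $status = mon_cmd($vmid, 'query-status');
    # guest powered down during the backup: QEMU sits in 'shutdown' and
    # needs a reset before 'cont' will boot it again
    mon_cmd($vmid, 'system_reset') if $status->{status} eq 'shutdown';
    mon_cmd($vmid, 'cont');
}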
Now that VMs can be started during a backup, it makes sense to create a
dirty bitmap in these cases too, since the VM might be resumed and thus
continue running normally even after the backup is done.
Signed-off-by: Stefan Reiter
---
PVE/VZDump/QemuServer.pm | 5 ++---
1 file changed, 2 insertio
Connect and send the vmid of the VM being backed up. This prevents
qmeventd from SIGTERMing the underlying QEMU instance, even if the guest
shuts itself down, until we close the socket connection (in cleanup,
which happens on success and abort; if we crash, the file handle will be
closed as well)
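
Roughly, the guard could look like the following; socket path and wire
format here are assumptions, not the exact implementation:

use strict;
use warnings;
use IO::Socket::UNIX;
use Socket qw(SOCK_STREAM);

sub register_backup_with_qmeventd {
    my ($vmid) = @_;
    my $sock = IO::Socket::UNIX->new(
        Type => SOCK_STREAM,
        Peer => '/var/run/qmeventd.sock',    # assumed socket path
    ) or die "unable to connect to qmeventd socket: $!\n";
    $sock->autoflush(1);
    print {$sock} "$vmid\n";
    # keep $sock open for the whole backup; closing it (or crashing) lifts
    # the guard and qmeventd may SIGTERM the VM's QEMU process again
    return $sock;
}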
>>There are no filters implemented yet, and there does not seem to be a
>>way to filter by interface. So if we want to limit the conntracks to
>>certain VMs, we could use zones and add a filter for them.
>>We would have to map them somehow though as the zone parameter is only
>>16 bits and VMIDs mi
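
For illustration only, a hypothetical VMID-to-zone allocation (the real
mapping would have to be persisted and cluster-aware):

use strict;
use warnings;

my %zone_of_vmid;    # would need to be stored cluster-wide in practice
my $next_zone = 1;

# conntrack zones are 16 bit, VMIDs can be much larger, so hand out small
# zone IDs on demand and remember the assignment
sub zone_for_vmid {
    my ($vmid) = @_;
    return $zone_of_vmid{$vmid} //= do {
        die "out of conntrack zones\n" if $next_zone > 0xFFFF;
        $next_zone++;
    };
}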
If a guest is removed without purge, the ID will remain
in the backup configuration. Avoid using the variable $node
when it is potentially undefined. Instead, skip non-existing
guests and warn the user.
Reported here:
https://forum.proxmox.com/threads/purge-backup-does-not-remove-vm-from-datacente
after creation, so that users don't need to go the ceph tooling route.
Separate common pool options to reuse them in other places.
Signed-off-by: Alwin Antreich
---
PVE/API2/Ceph.pm | 98 ++
PVE/CLI/pveceph.pm | 1 +
2 files changed, 99 insertions(+
to reduce code duplication and make it easier to add more options for
pool commands.
Use a new rados object for each 'osd pool set', as each command can set
an option independently of the previous command's success/failure. On
failure a new rados object would need to be created and that will
confuse
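
The per-option connection idea, sketched with PVE::RADOS (parameter names
assumed from the usual 'osd pool set' mon command; not the exact patch):

use strict;
use warnings;
use PVE::RADOS;    # librados binding used by pve-manager

sub set_pool_options {
    my ($pool, $opts) = @_;
    for my $name (sort keys %$opts) {
        eval {
            # fresh connection per option, so one failure does not affect
            # the remaining, independent 'osd pool set' calls
            my $rados = PVE::RADOS->new();
            $rados->mon_command({
                prefix => 'osd pool set',
                pool   => $pool,
                var    => $name,
                val    => "$opts->{$name}",
                format => 'plain',
            });
        };
        warn "could not set pool option '$name': $@" if $@;
    }
}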
Signed-off-by: Alwin Antreich
---
PVE/API2/Ceph.pm | 8
1 file changed, 8 insertions(+)
diff --git a/PVE/API2/Ceph.pm b/PVE/API2/Ceph.pm
index 0aeb5075..69fe3d6d 100644
--- a/PVE/API2/Ceph.pm
+++ b/PVE/API2/Ceph.pm
@@ -718,6 +718,13 @@ my $ceph_pool_common_options = sub {
en
to keep the pool create & set in sync.
Signed-off-by: Alwin Antreich
---
PVE/API2/Ceph.pm | 40 +---
1 file changed, 1 insertion(+), 39 deletions(-)
diff --git a/PVE/API2/Ceph.pm b/PVE/API2/Ceph.pm
index 7cdbdccd..0aeb5075 100644
--- a/PVE/API2/Ceph.pm
+++ b/
Signed-off-by: Alwin Antreich
---
www/manager6/ceph/Pool.js | 13 +
1 file changed, 13 insertions(+)
diff --git a/www/manager6/ceph/Pool.js b/www/manager6/ceph/Pool.js
index 19eb01e9..11bcf9d5 100644
--- a/www/manager6/ceph/Pool.js
+++ b/www/manager6/ceph/Pool.js
@@ -39,6 +39,19 @@ E
I haven't done any performance tests yet. But currently we query all
conntracks (same as conntrack -L) and print them one by one as JSON to STDOUT.
When importing we do it line-by-line, which means one conntrack at a
time. But if necessary we could batch them, as mentioned in the
bugtracker, by usi
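
A rough sketch of such a batched, line-by-line import (insert_conntracks()
is a placeholder, not an existing helper):

use strict;
use warnings;
use JSON qw(decode_json);

my $BATCH_SIZE = 64;
my @batch;

# stand-in for whatever actually restores the entries (e.g. via
# libnetfilter_conntrack); not an existing function
sub insert_conntracks {
    my (@entries) = @_;
    printf "restoring %d conntrack entries\n", scalar @entries;
}

while (my $line = <STDIN>) {
    chomp $line;
    next if $line eq '';
    push @batch, decode_json($line);    # one JSON-encoded conntrack per line
    if (@batch >= $BATCH_SIZE) {
        insert_conntracks(@batch);
        @batch = ();
    }
}
insert_conntracks(@batch) if @batch;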