On 7/30/19 2:42 PM, Stoiko Ivanov wrote:
> For a while now, partial command completion ('qm re' yielding
> 'rescan reset resize resume' and completing to 'qm res')
> has not been working (it broke with the release of libpve-common-perl 5.0-33).
>
> The issue was introduced by setting the returned co
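The partial-completion behaviour described above (listing the subcommands matching 'qm re' and expanding the input to their longest common prefix 'qm res') can be sketched in Python. This is only an illustration of the general mechanism, not the actual Perl implementation in libpve-common-perl:

```python
import os.path

def complete(prefix, commands):
    """Return (candidates, expansion): the commands matching the typed
    prefix, and the longest common prefix shared by all of them."""
    candidates = sorted(c for c in commands if c.startswith(prefix))
    # os.path.commonprefix computes the longest common string prefix
    expansion = os.path.commonprefix(candidates) if candidates else prefix
    return candidates, expansion

qm_subcommands = ["rescan", "reset", "resize", "resume", "start", "stop"]
candidates, expansion = complete("re", qm_subcommands)
print(candidates)  # ['rescan', 'reset', 'resize', 'resume']
print(expansion)   # res
```

A completer would print the candidate list and rewrite the command line to the expansion; the bug report above is about the second half of that behaviour no longer happening.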
On 8/9/19 1:13 PM, Dominic Jäger wrote:
> As explained in detail in bug 2293 [0], glusterfsscan is not working at the
> moment. Our function tries to contact an NFS server running inside the
> Gluster servers, and that server does not respond.
>
> There is a command line utility simply called 'gluster
On 8/6/19 2:41 PM, Stefan Reiter wrote:
> I don't see a reason to blanket-forbid excluding VMs in pool backups, so I
> felt leaving the API unchanged was the better option in this case. The GUI is
> the broken part, the API is working fine, albeit for a use-case it wasn't
> intentionally des
On 8/12/19 2:50 PM, Stefan Reiter wrote:
> This was previously gated to CLI only, but it causes a vzdump job
> started with the newly introduced "Run Now" button to fail if it
> includes VMIDs on other nodes.
>
> Signed-off-by: Stefan Reiter
> ---
> PVE/API2/VZDump.pm | 2 +-
> 1 file change
On 8/12/19 2:50 PM, Stefan Reiter wrote:
> Iterate all (online) nodes client-side and call vzdump with the correct
> parameters (according to the job selected) for each one.
>
> Then, show a progress bar in a non-closeable modal-dialog, to ensure the
> user stays on the Backup page during the
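The client-side iteration described above can be sketched as follows. This is a Python illustration of the control flow, not the actual ExtJS code; `call_vzdump` is a hypothetical stand-in for a POST to the per-node vzdump API endpoint:

```python
def run_backup_job(nodes, job, call_vzdump):
    """Trigger vzdump on each online node with that node's share of the
    job's VMIDs, yielding a progress fraction after each node.
    `call_vzdump(node, vmids, options)` is a hypothetical stand-in for
    a POST to /nodes/{node}/vzdump."""
    online = [n for n in nodes if n["status"] == "online"]
    for i, node in enumerate(online, start=1):
        # select only the VMIDs that live on this node
        vmids = [vmid for vmid, where in job["vmids"].items()
                 if where == node["name"]]
        if vmids:
            call_vzdump(node["name"], vmids, job["options"])
        yield i / len(online)  # drives a non-closeable progress dialog

calls = []
progress = list(run_backup_job(
    [{"name": "pve1", "status": "online"},
     {"name": "pve2", "status": "offline"}],
    {"vmids": {100: "pve1", 101: "pve1"}, "options": {}},
    lambda node, vmids, opts: calls.append((node, vmids))))
print(calls)     # [('pve1', [100, 101])]
print(progress)  # [1.0]
```

Iterating sequentially and yielding after each node is what lets the GUI update a determinate progress bar instead of firing all requests at once.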
On 8/12/19 2:50 PM, Stefan Reiter wrote:
> Whitespace removal and consolidating VZDump's job id format into a
> local variable.
>
> Signed-off-by: Stefan Reiter
> ---
> PVE/API2/Backup.pm | 25 +
> www/manager6/dc/Backup.js | 10 +-
> 2 files changed, 1
On 7/18/19 4:44 PM, Oguz Bektas wrote:
> this patch makes it possible to pass TOTP codes during cluster join if TFA has
> been enabled for root@pam (or any other user, actually, but root seems to
> cause the most problems).
>
> u2f support is still not implemented.
>
> Co-developed-by: Thomas Lamprecht
On 8/14/19 12:03 PM, Thomas Lamprecht wrote:
On 8/6/19 2:41 PM, Stefan Reiter wrote:
I don't see a reason to blanket-forbid excluding VMs in pool backups, so I felt
leaving the API unchanged was the better option in this case. The GUI is the
broken part, the API is working fine, albeit for
On 8/14/19 3:23 PM, Stefan Reiter wrote:
>>
>> But then the WebGUI would need to be adapted to cope with such a case,
>> as currently adding excludes to a pool based backup job results in a
>> rather strange and wrong visualization, e.g., "All except 1074" here,
>> but all is not selected and e
On 7/24/19 1:37 PM, Fabian Grünbichler wrote:
> and constant AT_EMPTY_PATH for chowning a directory/file opened via
> openat(2), for example when walking/creating a directory tree without
> following symlinks.
>
> Signed-off-by: Fabian Grünbichler
> ---
> src/PVE/Syscall.pm | 1 +
> src/PVE/
If you updated a job in "exclude" mode with some VMIDs specified to "pool" mode,
the backup job would retain the "exclude" section and thus not back up all VMs.
The GUI misrepresents this, showing that all VMs will be backed up, or it
outright breaks and shows "exclude" mode again, with the backend s
Signed-off-by: Dominik Csapak
---
PVE/API2/Qemu.pm | 4
1 file changed, 4 insertions(+)
diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index d59e23c..d7accbe 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -1916,6 +1916,10 @@ __PACKAGE__->register_method({
{ subdir =>
Signed-off-by: Dominik Csapak
---
PVE/CLI/qm.pm | 2 ++
1 file changed, 2 insertions(+)
diff --git a/PVE/CLI/qm.pm b/PVE/CLI/qm.pm
index 031aa49..c759198 100755
--- a/PVE/CLI/qm.pm
+++ b/PVE/CLI/qm.pm
@@ -999,6 +999,8 @@ our $cmddef = {
shutdown => [ "PVE::API2::Qemu", 'vm_shutdown', ['vm
this creates a reboot trigger file (inspired by pve-container)
and relies on the 'qm cleanup' call by the qmeventd to detect
it and restart the VM afterwards
Signed-off-by: Dominik Csapak
---
PVE/API2/Qemu.pm | 45 +
1 file changed, 45 insertions(+)
diff
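The trigger-file mechanism described in the message above can be sketched in Python. The real implementation lives in qemu-server's Perl code and is invoked by qmeventd; the directory layout, file name, and helper names here are purely illustrative:

```python
import os
import tempfile

def request_reboot(run_dir, vmid):
    """Shutdown path: drop a per-VM trigger file before the VM stops,
    so the cleanup handler knows a restart was requested."""
    open(os.path.join(run_dir, f"{vmid}.reboot"), "w").close()

def cleanup(run_dir, vmid, start_vm):
    """Cleanup path (run after the VM has exited, the way 'qm cleanup'
    is run by qmeventd): if the trigger file exists, consume it and
    restart the VM. Returns True if a restart was performed."""
    trigger = os.path.join(run_dir, f"{vmid}.reboot")
    if os.path.exists(trigger):
        os.unlink(trigger)  # consume the trigger so it fires only once
        start_vm(vmid)
        return True
    return False

started = []
with tempfile.TemporaryDirectory() as run_dir:
    request_reboot(run_dir, 100)   # 'reboot' was requested for VM 100
    cleanup(run_dir, 100, started.append)
print(started)  # [100]
```

Consuming the trigger file before restarting is what makes the reboot a one-shot operation: a later plain shutdown of the same VM finds no trigger and the VM stays stopped.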
this adds a reboot API call for VMs, which uses a reboot trigger file
that gets detected by the 'qm cleanup' call to start the VM again.
this API call is useful when users want to apply pending hardware changes
without waiting for the VM to shut down.
I send this as an RFC because I am not sure with the
if the reboot trigger file was set, start the vm again
Signed-off-by: Dominik Csapak
---
PVE/CLI/qm.pm | 13 +
1 file changed, 13 insertions(+)
diff --git a/PVE/CLI/qm.pm b/PVE/CLI/qm.pm
index 3aae23c..031aa49 100755
--- a/PVE/CLI/qm.pm
+++ b/PVE/CLI/qm.pm
@@ -763,6 +763,7 @@ __PACK