entry point for the remote migration on the source side, mainly
preparing the API client that gets passed to the actual migration code
and doing some parameter parsing.
querying of the remote side's resources (like available storages, free
VMIDs, lookup of endpoint details for specific nodes, ...)
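a rough sketch of both parts with PVE::APIClient::LWP (all values, and
the exact parameters used, are placeholders rather than the actual code
from this series):

    use strict;
    use warnings;

    use PVE::APIClient::LWP;

    # prepare the API client for the remote cluster, authenticated via an
    # API token and pinned to a cached TLS fingerprint
    my $api = PVE::APIClient::LWP->new(
        host => 'target.example.com',
        apitoken => 'PVEAPIToken=root@pam!migrate=SECRET',
        cached_fingerprints => { 'AA:BB:CC:...' => 1 },
    );

    # query remote resources as described above
    my $free_vmid = $api->get('/cluster/nextid');
    my $storages  = $api->get('/nodes/targetnode/storage', { enabled => 1 });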
this format comes from the remote cluster, so it might not be supported
on the source side - checking whether it's known (as an additional
safeguard) and untainting (to avoid an open3 failure) are required.
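a minimal sketch of such an untaint (the accepted format list here is
illustrative only):

    # assume $format was received from the remote cluster and is tainted
    sub untaint_format {
        my ($format) = @_;
        # accept only known formats; the successful capture also untaints
        # the value, so it can later be passed to open3 safely
        my ($safe) = $format =~ /^(raw|qcow2|vmdk)$/
            or die "unsupported format '$format' from remote cluster\n";
        return $safe;
    }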
Signed-off-by: Fabian Grünbichler
---
Notes:
v6: new
PVE/CLI/pvesm.pm | 6 ++
PVE/Stor
Signed-off-by: Fabian Grünbichler
---
pveum.adoc | 1 +
1 file changed, 1 insertion(+)
diff --git a/pveum.adoc b/pveum.adoc
index 64d8931..cbd553a 100644
--- a/pveum.adoc
+++ b/pveum.adoc
@@ -753,6 +753,7 @@ Node / System related privileges::
* `Sys.Syslog`: view syslog
* `Sys.Audit`: view nod
remote migration uses a websocket connection to a task worker running on
the target node instead of commands via SSH to control the migration.
this websocket tunnel is started earlier than the SSH tunnel, and allows
adding UNIX-socket forwarding over additional websocket connections on
demand.
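conceptually, requesting another forwarded socket over the control
connection might look like this (message layout is illustrative, not the
exact wire format):

    -> { "cmd": "ticket", "path": "/run/qemu-server/100.migrate" }
    <- { "success": 1, "ticket": "..." }

the client then opens an additional websocket connection authenticated
with that ticket, which the target side bridges to the requested UNIX
socket.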
the new remote-migrate command wraps the remote_migrate_vm API endpoint,
but itself performs the precondition checks that can be done up front.
this now just leaves the fingerprint retrieval and target node name
lookup to the synchronous part of the API endpoint, which should be
doable in <30s.
an example invocation:
$ qm remote-migrate
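a hypothetical full set of arguments, with placeholder values and option
names as described in this series, might look like:

    $ qm remote-migrate 100 100 \
        'host=target.example.com,apitoken=PVEAPIToken=root@pam!migrate=SECRET,fingerprint=AA:BB:...' \
        --target-bridge vmbr0 --target-storage local-zfs --online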
the following two endpoints are used for migration on the remote side
POST /nodes/NODE/qemu/VMID/mtunnel
which creates and locks an empty VM config, and spawns the main qmtunnel
worker which binds to a VM-specific UNIX socket.
this worker handles JSON-encoded migration commands coming in via this
socket.
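a sketch of such an exchange (the command set is defined by the mtunnel
worker; the field names here are illustrative):

    -> { "cmd": "version" }
    <- { "success": 1, "api": 2 }
    -> { "cmd": "config", "conf": "..." }
    <- { "success": 1 }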
no semantic changes intended, except for:
- no longer passing the main migration UNIX socket to SSH twice for
forwarding
- dropping the 'unix:' prefix in start_remote_tunnel's timeout error message
Signed-off-by: Fabian Grünbichler
---
Notes:
v6:
- rport/port
- properly conditionaliz
modelled after the VM migration, but folded into a single commit since
the actual migration changes are a lot smaller here.
Signed-off-by: Fabian Grünbichler
---
Notes:
v6:
- check for Sys.Incoming in mtunnel API endpoint
- mark as experimental
- test_mp fix for non-snapshot call
for proper re-use in pve-container.
Signed-off-by: Fabian Grünbichler
Reviewed-by: Fiona Ebner
---
Notes:
requires versioned dependency on pve-common that has taken over the option
new in v6 / follow-up to v5
PVE/QemuServer.pm | 7 ---
1 file changed, 7 deletions(-)
diff --g
works the same as `qm remote-migrate`, with the addition of `--restart`
and `--timeout` parameters.
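analogous to qm remote-migrate, a hypothetical invocation with
placeholder values might be:

    $ pct remote-migrate 100 100 \
        'host=target.example.com,apitoken=PVEAPIToken=root@pam!migrate=SECRET,fingerprint=AA:BB:...' \
        --target-bridge vmbr0 --target-storage local-zfs --restart --timeout 120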
Signed-off-by: Fabian Grünbichler
---
Notes:
v6: new
src/PVE/CLI/pct.pm | 124 +
1 file changed, 124 insertions(+)
diff --git a/src/PVE/CLI/pct.p
since that is the ID on the target node.
Signed-off-by: Fabian Grünbichler
---
src/PVE/LXC/Migrate.pm | 3 +++
1 file changed, 3 insertions(+)
diff --git a/src/PVE/LXC/Migrate.pm b/src/PVE/LXC/Migrate.pm
index a0ab65e..ca1dd08 100644
--- a/src/PVE/LXC/Migrate.pm
+++ b/src/PVE/LXC/Migrate.pm
@@
this series adds remote migration for VMs and CTs.
both live and offline migration of VMs (including NBD and
storage-migrated disks) should work. containers don't have any live
migration, so offline and restart mode work identically except for the
restart part.
groundwork for extending to pvesr a
from qemu-server, for re-use in pve-container.
Signed-off-by: Fabian Grünbichler
Reviewed-by: Fiona Ebner
---
Notes:
requires versioned breaks on old qemu-server containing the option, to avoid
registering twice
new in v6/follow-up to v5
src/PVE/JSONSchema.pm | 7 +++
1 f
adds the new Sys.Incoming privilege for guarding cross-cluster data
streams like guest migrations and storage migrations.
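in an API endpoint definition, guarding on this privilege could look
roughly like the following (placement and path are assumptions, not the
exact code from this series):

    permissions => {
        # hypothetical sketch: require the new privilege for incoming
        # cross-cluster data streams
        check => ['perm', '/', ['Sys.Incoming']],
    },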
Signed-off-by: Fabian Grünbichler
---
src/PVE/AccessControl.pm | 1 +
1 file changed, 1 insertion(+)
diff --git a/src/PVE/AccessControl.pm b/src/PVE/AccessControl.pm
index c32dcc3..2dcb897 100644
--- a/src/PVE
On 28.09.2022 11:37, Stefan Hrdlicka wrote:
if (freeId !== undefined) {
- busField.setValue(freeId.controller);
+ if (busField !== undefined) {
+ busField.setValue(freeId.controller);
+ }
nit: IMO, optional chaining (?.) would make this more readable:
busField?.setValue(freeId.controller);
Signed-off-by: Stefan Hrdlicka
---
FYI: When IDE already has 4 devices and the user tries to add another one,
the device number isn't changed since there isn't any space left.
www/manager6/form/ControllerSelector.js | 5 -
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git
Signed-off-by: Stefan Hrdlicka
---
www/manager6/form/ControllerSelector.js | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/www/manager6/form/ControllerSelector.js
b/www/manager6/form/ControllerSelector.js
index 6daede95..e6baa893 100644
--- a/www/manager6/form/ControllerSelec
When adding a disk to an existing VM and switching between SCSI and IDE
(or any other bus), the GUI will now select the next free device ID
automatically.
Stefan Hrdlicka (2):
fix #1981: get next free disk id on change of bus/device
cleanup: style fix
www/manager6/form/ControllerSelector.js
When renaming a group, the usages didn't get updated automatically. To
get around problems with atomicity, the old rule is first cloned under
the new name, the usages are updated, and only when updating has finished
is the old rule deleted.
The subroutines that lock/update host configs had to be ch
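A minimal sketch of that sequence (subroutine names are hypothetical, not
the actual pve-firewall API):

    # rename a group without ever leaving the config in a state where the
    # group is referenced but missing
    sub rename_group {
        my ($old, $new) = @_;

        clone_group($old, $new);          # old group stays valid throughout
        update_group_usages($old, $new);  # repoint all references
        delete_group($old);               # drop the original only at the end
    }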
The kvm man page mentions that using 'c' will try booting from the
first hard disk, so the current '(no bootdisk)' text in the UI is not
accurate, and boot can still succeed.
Reported in the community forum:
https://forum.proxmox.com/threads/115800/
Signed-off-by: Fiona Ebner
---
An alternative