On 01.04.25 at 12:19, Dominik Csapak wrote:
> while I also agree to all said here, I have one counter point to offer:
>
> In the case that such an operation is necessary (e.g. HA is not
> wanted/needed/possible for whatever reason), the user will fall back to
> doing it manually (iow. 'mv source target'), which is at least as
> dangerous as exposing it over the API, since
>
> * now the admins sharing the system must share root@pam credentials
>   (ssh/console access)
>   (alternatively set up sudo, which has its own problems)
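
(For context: the manual fallback mentioned above boils down to moving the
guest's config file between the per-node directories on pmxcfs. For a QEMU
guest that's roughly the following, with made-up VMID and node names, and
assuming the admin already verified that the source node is really dead:

    mv /etc/pve/nodes/dead-node/qemu-server/100.conf \
       /etc/pve/nodes/new-node/qemu-server/

For containers it would be the 'lxc' directory instead of 'qemu-server'.)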

Setups with many admins already need to handle how they can log in as root,
be it through a jump user (`doas` is a thing if sudo is deemed too complex)
or some identity provider (LDAP, OIDC, ... with PAM configuration), as root
operations are required for other things too.

> * it promotes manually modifying /etc/pve/ content

Yeah, as that's what's actually required after manual assessment;
abstracting that away won't really bring a big benefit IMO.

> * any error could be even more fatal than if done via the API
>   (e.g. mv of the wrong file, from the wrong node, etc.)

This cannot be said for sure; these are unknown unknowns. FWIW, the API
could also make it worse compared to an admin carefully fixing this
according to the needs of the specific situation at hand.

> IMHO ways forward for this scenario could be:
>
> * use cluster level locking only for config move? (not sure if performance
>   is still a concern for this action, since parallel moves don't happen
>   too much?)

What does this solve? The old node is still in an unknown state and does not
see any pmxcfs changes at all. The VM can still be running and cause issues
through duplicate, unsynchronized resource access and all the other woes
that can happen if the same guest runs twice.

> * provide a special CLI tool/cmd to deal with that -> would minimize
>   potential errors but is still contained to root equivalent users

Your own arguments w.r.t. root login would still speak against that. And it
would not make that big of a difference: if local resources are involved,
the tool cannot work when the source node cannot be talked with, and if all
resources are shared, the simple config move is about as safe as such a tool
could get in the context of a dead source node, as in either case the admin
must ensure it's actually dead.

> * link to the doc section for it from the UI with a big caveat
>   https://pve.proxmox.com/pve-docs/pve-admin-guide.html#_recovery

As Fabian wrote, such disclaimers might be nice for shifting the blame but
are not enough in practice for such an operation. And Fabian's point wasn't
that doing it on the CLI is less dangerous, it's about the same either way,
but that exposing this as a well-integrated feature makes it seem much less
dangerous to the user, especially to those that are less experienced and
should rather be stumped and ask some support channel for help.

That said, the actual first step to move this forward would IMO be to create
extensive documentation/a how-to for how such things can be resolved and
what one needs to watch out for; a check-list style might be a good format.
That alone should help users a lot already, and it would also make it much
clearer what a more integrated (semi-automated) way could look like. Which
could be a check tool that helps with assessing the recovery depending on
config, storage (types), network, mappings, ...; that would ensure that
common issues/blockers are not missed and would even help experienced
admins.

If that cannot first be documented and then optionally transformed into a
hands-off evaluation/checker tool, or if that's deemed to not help users, I
really do not see how an API-integrated solution can do so without just
hand-waving away all the actual and real issues for why this does not
already exist.
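
Just to make the check-list/check-tool idea a bit more tangible, here is a
purely hypothetical and heavily simplified sketch of the kind of things it
would have to walk through; the node name, VMID and the selection of checks
are made up and by no means complete:

    vmid=100
    src=dead-node

    # the cluster must be quorate, otherwise /etc/pve is read-only anyway
    pvecm status

    # the source node must really be dead/fenced; this is the part no tool
    # can fully verify on behalf of the admin
    ping -c 3 "$src"

    # look for a config lock and for passed-through or mapped hardware that
    # won't exist on the target node; disks on local-only storages would
    # need actual parsing of the config and storage definitions
    grep -E '^(lock|hostpci|usb)' "/etc/pve/nodes/$src/qemu-server/$vmid.conf"

    # check that the storages referenced by the config are shared and active
    pvesm status

Writing all of that (and everything missing here) down properly first is
exactly the documentation step I mean above.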