On 04.10.21 17:29, Stefan Reiter wrote:
> Starts an instance of swtpm per VM in its own systemd scope; it will
> terminate by itself when the VM exits, or be terminated manually if
> startup fails.
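For readers following along, a rough sketch of the per-VM startup described above, in Python. The unit name, paths, and exact flag set are illustrative assumptions, not PVE's actual code; the swtpm options themselves (`socket`, `--tpmstate`, `--ctrl`, `--terminate`) are documented swtpm features.

```python
# Hypothetical sketch: launch swtpm inside a per-VM systemd scope.
# Unit name and paths are illustrative, not what qemu-server really uses.
def swtpm_scope_cmd(vmid, state_dir, sock_path):
    """Build the systemd-run/swtpm command line for one VM."""
    return [
        "systemd-run", "--scope",
        "--unit", f"swtpm-{vmid}",            # scope goes away with the process
        "swtpm", "socket",
        "--tpm2",
        "--tpmstate", f"dir={state_dir}",     # backing state directory
        # QEMU connects here via its 'emulator' tpmdev backend
        "--ctrl", f"type=unixio,path={sock_path}",
        "--terminate",                        # exit when QEMU closes the ctrl socket
    ]

print(swtpm_scope_cmd(100, "/run/qemu-server/100.tpm",
                      "/run/qemu-server/100.swtpm"))
```

The `--terminate` flag is what gives the "terminates by itself if the VM exits" behavior: swtpm exits once the control connection from QEMU goes away.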
> 
> Before first use, a TPM state is created via swtpm_setup. State is
> stored in a 'tpmstate0' volume, treated much the same way as an efidisk.
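The one-time state creation can be sketched like this; the helper name and the particular flag combination are my assumptions (they mirror swtpm_setup's documented options), not necessarily what the patch passes.

```python
# Hypothetical helper building the swtpm_setup invocation used before first
# start; the real flag set in qemu-server may differ.
def swtpm_setup_cmd(state_dir, tpm2=True):
    """One-time creation of TPM state backing the 'tpmstate0' volume."""
    cmd = [
        "swtpm_setup",
        "--tpmstate", state_dir,    # directory holding the state files
        "--createek",               # generate an endorsement key
        "--create-ek-cert",
        "--create-platform-cert",
        "--lock-nvram",
    ]
    if tpm2:
        cmd.append("--tpm2")        # TPM 2.0 instead of the 1.2 default
    return cmd
```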
> 
> It is migrated 'offline', the important part here is the creation of the
> target volume, the actual data transfer happens via the QEMU device
> state migration process.
> 
> Move-disk can only work offline, as the disk is not registered with
> QEMU, so 'drive-mirror' wouldn't work. swtpm itself has no method of
> moving a backing storage at runtime.
> 
> For backups, a bit of a workaround is necessary (this may later be
> replaced by NBD support in swtpm): During the backup, we attach the
> backing file of the TPM as a read-only drive to QEMU, so our backup
> code can detect it as a block device and back it up as such, while
> ensuring consistency with the rest of disk state ("snapshot" semantic).
> 
> The name for the ephemeral drive is specifically chosen as
> 'drive-tpmstate0-backup', diverging from our usual naming scheme with
> the '-backup' suffix, to avoid it ever being treated as a regular drive
> from the rest of the stack in case it gets left over after a backup for
> some reason (shouldn't happen).
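The ephemeral read-only attach could look roughly like the following QMP `blockdev-add` argument set. This is a sketch under my own assumptions about how the drive is attached; only the off-scheme `drive-tpmstate0-backup` name is taken from the description above.

```python
# Hypothetical QMP 'blockdev-add' arguments for the ephemeral backup drive;
# the actual attach path in PVE's backup code may differ.
def tpmstate_backup_drive(state_file):
    return {
        "driver": "raw",
        "node-name": "drive-tpmstate0-backup",  # deliberately off-scheme name
        "read-only": True,                      # backup only ever reads it
        "file": {"driver": "file", "filename": state_file},
    }

print(tpmstate_backup_drive("/var/lib/vz/images/100/vm-100-disk-1.raw"))
```

Marking the node read-only matters here: QEMU never writes through this path, swtpm remains the sole writer, and the backup code still sees an ordinary block node it can include in the consistent snapshot.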
> 
> Signed-off-by: Stefan Reiter <s.rei...@proxmox.com>
> ---
>  PVE/API2/Qemu.pm         |   5 ++
>  PVE/QemuMigrate.pm       |  14 +++-
>  PVE/QemuServer.pm        | 137 +++++++++++++++++++++++++++++++++++++--
>  PVE/QemuServer/Drive.pm  |  63 ++++++++++++++----
>  PVE/VZDump/QemuServer.pm |  43 ++++++++++--
>  5 files changed, 238 insertions(+), 24 deletions(-)
> 
>

applied, with a few trivial whitespace related cleanups, thanks!


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
