Did a high-level test between a PVE+Ceph cluster and a single PVE node,
using remote-migration of a Windows Server 2022 VM with EFI & TPM disks.
Ceph RBD -> remote LVM thin
LVM thin -> remote Ceph RBD
Worked in both directions, and the VM booted up as expected after each
migration.
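For reference, the test roughly followed this shape (a sketch only; the
API token, host, fingerprint and storage/bridge names below are
placeholders, and the exact option set of 'qm remote-migrate' may differ
depending on the PVE version):

```shell
# Migrate VM 100 from the local node to a remote PVE node/cluster.
# The target endpoint is authenticated with an API token and pinned
# to the remote node's certificate fingerprint (values are examples).
qm remote-migrate 100 100 \
  'apitoken=PVEAPIToken=root@pam!migrate=<secret>,host=192.0.2.10,fingerprint=<fp>' \
  --target-storage local-lvm \
  --target-bridge vmbr0 \
  --online
```

For the reverse direction (LVM thin -> Ceph RBD) the same command was
run on the other side with --target-storage pointing at the RBD storage.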
One thing I ran into, only tangentially related to this series, is that
we don't support the 'raw+size' export format for ZFS. Maybe we can get
it working on ZFS, at least for VM disk images (zvols)?
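To illustrate what fails: storage migration between different storage
types goes through the import/export formats, and for ZFS only the
'zfs' stream format is offered. A sketch of the check (volume IDs and
storage names are examples):

```shell
# Works: zvol-backed VM disk exported as a ZFS send stream.
pvesm export local-zfs:vm-100-disk-0 zfs - >/dev/null

# Fails today: 'raw+size' is not implemented by the ZFS plugin,
# even though a zvol is a plain block device underneath.
pvesm export local-zfs:vm-100-disk-0 raw+size - >/dev/null
```

Since a zvol is exposed as a block device under /dev/zvol/, reading it
out as raw data for 'raw+size' seems feasible in principle.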
It might also be time to consider whether we want to handle CT volumes
differently on ZFS in the long term (currently a file-based dataset).
With all other storage options we have a block device or raw file that
we loop-mount into the CT. Aligning ZFS with this would probably
simplify things quite a bit.
With the above-mentioned tests, partially:
Tested-By: Aaron Lauterer <a.laute...@proxmox.com>
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel