On 05.06.25 at 13:02, Christoph Heiss wrote:
> Tested the series by setting up an iSCSI target using targetcli(d) (on a
> separate PVE 8.4 system as a base, due to ZFS goodies) and then adding a
> ZFS-over-iSCSI storage using the LIO provider to a test cluster.
> 
> Confirmed that
> 
> - `zfs-base-path` is correctly detected when adding the storage
> 
> - the iSCSI storage shows up correctly after setup and that VM disks
>   can be (live-)migrated to the ZFS-over-iSCSI storage w/o problems.
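For reference, the target-side setup can be sketched roughly as below. Pool name, portal address, IQNs and the storage ID are placeholders I made up for illustration, not taken from the test setup; the `lio_tpg` option is specific to the LIO provider.

```shell
# On the target host (a PVE 8.4 system in the test above): create a pool
# and an iSCSI target via targetcli, then allow the cluster's initiators.
zpool create tank /dev/sdb
targetcli /iscsi create iqn.2003-01.com.example:pve-zfs
targetcli /iscsi/iqn.2003-01.com.example:pve-zfs/tpg1/acls \
    create iqn.1993-08.org.debian:01:0123456789ab
targetcli saveconfig

# On the PVE side: add the ZFS-over-iSCSI storage with the LIO provider.
pvesm add zfs zfs-over-iscsi --portal 192.0.2.10 \
    --target iqn.2003-01.com.example:pve-zfs \
    --pool tank --iscsiprovider LIO --lio_tpg tpg1 --content images
```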
> 
> One small comment inline, just a typo in the API description.
> 
> Please consider the series in any case
> 
> Tested-by: Christoph Heiss <c.he...@proxmox.com>

Thank you for testing! Superseded by v3 with the typo fixed:
https://lore.proxmox.com/pve-devel/20250605111109.52712-1-f.eb...@proxmox.com/

> One unrelated thing I noticed during testing, but wanted to note for
> reference:
> 
> When one hits the error due to a bad `zfs-base-path` (e.g. as currently
> happens):
> 
>   `TASK ERROR: storage migration failed: Could not open /dev/<poolname>/vm-100-disk-0`
> 
> the target zvol isn't cleaned up, e.g. the above would result in
> `<poolname>/vm-100-disk-0` still being present on the remote zpool.
> 
> Fortunately this doesn't really break anything, as the next available
> disk number (in this case, `vm-100-disk-1`), is chosen automatically
> anyway when creating a new disk.

There actually already is error handling for freeing up allocated disks
in this context. But the storage plugin itself fails during allocation,
so the new volume ID is never returned as a result, and qemu-server
never learns about the volume it would need to clean up. I'll send a
patch to improve cleanup handling inside the plugin itself.
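The intended cleanup pattern looks roughly like the following. The actual plugin is Perl; this is just a hypothetical Python sketch, and the helper names (`create_zvol`, `open_device`, `destroy_zvol`) are made up for illustration:

```python
def alloc_image(create_zvol, open_device, destroy_zvol, name):
    """Allocate a volume; if a later step fails, free the zvol again.

    The three callables are stand-ins for the plugin's real operations
    (names invented for this sketch).
    """
    create_zvol(name)
    try:
        # e.g. fails with "Could not open /dev/<poolname>/..." on a bad
        # zfs-base-path
        open_device(name)
    except Exception:
        # Without this cleanup, the zvol would linger on the remote pool,
        # as observed above.
        destroy_zvol(name)
        raise
    return name
```

The point is simply that the allocation step itself has to undo its own work on failure, since the caller (qemu-server) never receives a volume ID it could free.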


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel