On 18.12.24 at 16:33, DERUMIER, Alexandre via pve-devel wrote:
>>> On 18.12.24 at 15:20, Daniel Kral wrote:
>>>> - When exporting with "pvesm export ...", the volume has the same
>>>>   checksum as with "rbd export ..." with the size header prepended
>>>
>>> Well, I totally missed the existence of "rbd export" in my hurry to
>>> get this working. Seems to be about 1.5 times faster than mapping+dd
>>> from some initial testing. Will use that in v3.
>
> Hi Fiona, rbd export|import is the way to go, like zfs send|receive
> (with snapshot support too, via export-diff|import-diff).
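The checksum observation quoted above can be sketched as a toy check: if "pvesm export" emits the raw image prefixed by a size header, stripping that header must yield a stream identical to "rbd export". The 8-byte little-endian header here is purely an illustrative assumption; the actual framing used by pvesm export may differ.

```python
# Toy sketch of the relation: exported stream = size header + raw image.
# The 8-byte little-endian header is an ASSUMPTION for illustration only;
# the real 'raw+size' encoding may differ.
import hashlib
import struct

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

raw_image = b"\x00" * 4096                   # stand-in for the "rbd export" stream
header = struct.pack("<Q", len(raw_image))   # assumed 8-byte size header

exported = header + raw_image                # stand-in for the "pvesm export" stream

# The streams differ as a whole ...
assert sha256(exported) != sha256(raw_image)
# ... but agree once the size header is stripped.
assert sha256(exported[len(header):]) == sha256(raw_image)
print("checksums match after stripping the size header")
```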
Saw that in the man page, and yes, that would be the way to go for an
'rbd' transport format with incremental support similar to 'zfs' :)

> It's really fast because, if I remember correctly, it uses a big block
> size and is able to do parallelism (can be tuned with
> --rbd-concurrent-management-ops <number>).

We'll need to evaluate the trade-off between speed and putting more load
on the system/Ceph. After all, the disk move might not be the most
important thing happening at that moment.

> Not related, but would it be possible to implement this for simple
> VM/template full cloning when source and target are both RBD? It's
> really faster than with 'qemu-img convert'.

Hmm, we could shift offline copying of images to the storage layer (at
least in some cases). We would just need a version of storage_migrate()
that goes to the local node instead of over SSH to a different one.
Could you please open a feature request for this?

_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
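For reference, the rbd pipeline discussed above would look roughly like this. Pool, image, and snapshot names are hypothetical placeholders, the concurrency value is just an example, and this needs a live Ceph cluster to actually run:

```shell
# Full copy of an image between pools, with tunable parallelism
# ("-" means stream via stdout/stdin; names are placeholders):
rbd export --rbd-concurrent-management-ops 20 source-pool/vm-100-disk-0 - \
    | rbd import - target-pool/vm-100-disk-0

# Incremental follow-up based on snapshots, analogous to
# zfs send -i | zfs receive:
rbd export-diff --from-snap snap1 source-pool/vm-100-disk-0@snap2 - \
    | rbd import-diff - target-pool/vm-100-disk-0
```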