Moving an EFI disk to Ceph is very slow with bs=1. Instead, use the biggest
power of two <= 1024 that still evenly divides the image size. At the moment
our EFI image sizes are multiples of 1024, so simply hard-coding bs=1024
would also work, but this feels more future-proof.
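For illustration, here is a minimal standalone sketch of the intended
computation (the sizes below are made-up examples, not actual EFI image
sizes):

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Pick the largest power of two <= 1024 that still divides $size
    # evenly, so that $size == $bs * $count holds exactly.
    sub pick_blocksize {
        my ($size) = @_;
        my $bs = 1;
        while ($bs < 1024 && $size % ($bs * 2) == 0) {
            $bs *= 2;
        }
        return ($bs, $size / $bs);
    }

    for my $size (540672, 4096, 12345) {
        my ($bs, $count) = pick_blocksize($size);
        print "size=$size -> bs=$bs count=$count\n";
    }

This prints bs=1024/count=528 for 540672, bs=1024/count=4 for 4096, and
falls back to bs=1/count=12345 for the odd size.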
Signed-off-by: Fabian Ebner <f.eb...@proxmox.com>
---

I did not see a way for 'qemu-img dd' to use a larger blocksize while
still specifying the exact total size when the size is not a multiple
of the blocksize.

 PVE/QemuServer.pm | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index f401baf..e579cdf 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -6991,7 +6991,15 @@ sub clone_disk {
             # that is given by the OVMF_VARS.fd
             my $src_path = PVE::Storage::path($storecfg, $drive->{file});
             my $dst_path = PVE::Storage::path($storecfg, $newvolid);
-            run_command(['qemu-img', 'dd', '-n', '-O', $dst_format, "bs=1", "count=$size",
+
+            # Ceph doesn't like too small a blocksize, see bug #3324
+            my $bs = 1;
+            while ($bs < 1024 && $size % ($bs * 2) == 0) {
+                $bs *= 2;
+            }
+            my $count = $size / $bs;
+
+            run_command(['qemu-img', 'dd', '-n', '-O', $dst_format, "bs=$bs", "count=$count",
                 "if=$src_path", "of=$dst_path"]);
         } else {
             qemu_img_convert($drive->{file}, $newvolid, $size, $snapname, $sparseinit);
-- 
2.20.1
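As a concrete (hypothetical) example: for a 528 KiB (540672-byte) EFI
vars image cloned to a raw target, the loop settles on bs=1024, so the
patched code issues the equivalent of the following (the if=/of= paths
are placeholders for whatever PVE::Storage::path() returns):

    run_command(['qemu-img', 'dd', '-n', '-O', 'raw', "bs=1024", "count=528",
        "if=/path/to/src-efidisk.raw", "of=/path/to/dst-efidisk.raw"]);

With bs=1024 instead of bs=1, the copy is done in 528 blocks instead of
540672, which is what makes the difference on Ceph.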