I have done some tests with subcluster allocation and a base image without a
backing file. Indeed, I'm seeing a small performance degradation on a big 1TB
image.


With a 30GB image, I'm around 22000 iops for 4k randwrite/randread (with or
without extended_l2=on).

With a 1TB image, the results are different:


fio --filename=/dev/sdb --direct=1 --rw=randwrite --bs=4k --iodepth=32
--ioengine=libaio --name=test
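
The randread numbers come from the same command with --rw=randread (assuming
otherwise identical parameters):

fio --filename=/dev/sdb --direct=1 --rw=randread --bs=4k --iodepth=32
--ioengine=libaio --name=test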

default l2-cache-size, extended_l2=off, cluster_size=64k: 2700 iops
default l2-cache-size, extended_l2=on, cluster_size=128k: 1500 iops


I have also played with the qemu l2-cache-size drive option (the default
value is 1MB, which is not enough for a 1TB image to keep all the metadata
in memory).
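
For example, it can be passed directly as a drive option (a sketch; the image
path and interface are placeholders):

-drive file=test.qcow2,format=qcow2,if=virtio,l2-cache-size=128M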

l2-cache-size=8MB, extended_l2=off, cluster_size=64k: 2900 iops
l2-cache-size=64MB, extended_l2=off, cluster_size=64k: 5100 iops
l2-cache-size=128MB, extended_l2=off, cluster_size=64k: 22000 iops

l2-cache-size=8MB, extended_l2=on, cluster_size=128k: 2000 iops
l2-cache-size=64MB, extended_l2=on, cluster_size=128k: 4500 iops
l2-cache-size=128MB, extended_l2=on, cluster_size=128k: 22000 iops


So there is no difference in the memory needed, with or without extended_l2.
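
That matches the L2 metadata math (entry sizes are from the qcow2 spec: 8
bytes per standard L2 entry, 16 bytes per extended one):

L2 cache to cover the whole image = virtual size * entry size / cluster size

extended_l2=off: 1TiB * 8 / 64KiB = 128MiB
extended_l2=on: 1TiB * 16 / 128KiB = 128MiB

which is exactly the l2-cache-size where both runs reach the full 22000 iops.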

But the l2-cache-size tuning is really something we should add in another
patch, I think, for general qcow2 performance.
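
In the meantime, the needed value can be computed per image, e.g. (a quick
sketch using jq; double the result for extended_l2, since those entries are
16 bytes):

qemu-img info --output=json test.qcow2 | jq '."virtual-size" / ."cluster-size" * 8'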



