Here are the first results. It does indeed seem that writeback is the default cache mode (no cache option defined on the qemu command line).
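For reference, a minimal sketch of how the cache modes tested below are passed on the qemu command line (the disk image path is a placeholder, not from my setup):

```shell
# no cache= option: qemu defaults to writeback,
# and the guest reports write_back in /sys/block/vda/cache_type
qemu-system-x86_64 -drive file=/path/to/disk.raw,if=virtio

# O_DIRECT on the host; the guest still advertises write_back
qemu-system-x86_64 -drive file=/path/to/disk.raw,if=virtio,cache=none

# host page cache used, flushes honored; guest sees write_back
qemu-system-x86_64 -drive file=/path/to/disk.raw,if=virtio,cache=writeback

# every write flushed; guest sees write_through
qemu-system-x86_64 -drive file=/path/to/disk.raw,if=virtio,cache=writethrough
```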
bench: random write 4K buffered (no directio)
-----------------------------------------------
#mkfs.xfs /dev/vdb
#mount /dev/vdb /mnt/   (default xfs options)
#fio --filename=/mnt/test1 --rw=randwrite --bs=4k --iodepth=40 --size=1000M --group_reporting --name=file1 --ioengine=libaio

virtio (guest kernel 3.6-rc4), disk cache mode in guest
-------------------------------------------------------------------------------------
no cache defined    : cat /sys/block/vda/cache_type : write_back
cache=none          : cat /sys/block/vda/cache_type : write_back
cache=writeback     : cat /sys/block/vda/cache_type : write_back
cache=writethrough  : cat /sys/block/vda/cache_type : write_through
cache=directsync    : cat /sys/block/vda/cache_type : write_through

results (IOPS):

local raid controller (512MB cache with battery):
-------------------------------------------------
default          : 15000
cache=none       : 8370
cache=writeback  : 15000

nexenta - libiscsi:
--------------------
default          : 4000 (some hangs)
cache=none       : 14000
cache=writeback  : 5000 (some hangs)

nexenta - scsi-host:
--------------------
default          : 6000 (some hangs)
cache=none       : 11000
cache=writeback  : 6000 (some hangs)

rbd:
--------------------
default          : 6000
cache=none       : 6000
cache=writeback  : 7000

sheepdog:
--------------------
default          : 300
cache=none       : 300
cache=writeback  : 300

So for local storage, writeback seems really faster.
For iscsi, writeback seems slower (I see some hangs of 5-6 seconds; maybe this is the host flushing?).
For rbd, I don't see a big difference. (I'll try to tune the "rbd cache max dirty age" parameter later.)
For sheepdog it's difficult to compare; it's a small cluster.

I'll do more tests with nfs tomorrow.

Regards,

Alexandre

_______________________________________________
pve-devel mailing list
[email protected]
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
