On 01/09/2013 06:46 PM, Liu Yuan wrote:
>> 1) how much slower is QEMU's emulated-writethrough mode for writes,
>> due to the extra requests?
>
> I'll collect some numbers on it.
Okay, I got some numbers. I ran three sheep daemons on the same host to emulate a 3-node cluster. The Sheepdog image has 3 copies, and I put both the Sheepdog client cache and the Sheepdog backend storage on tmpfs. Guest and host both run Linux 3.7. I started QEMU with the following command:

$ qemu-system-x86_64 --enable-kvm -drive file=~/images/test1,if=virtio,cache=writeback -smp 2 -cpu host -m 1024 -drive file=sheepdog:test,if=virtio,cache=writeback

I ran 'dd if=/dev/urandom of=/dev/vdb bs=1M count=100 oflag=direct' 5 times and took the average:

emulated (write + flush)    old impl (single write)
13.3 MB/s                   13.7 MB/s

boost percentage: (13.7 - 13.3)/13.3 = 3%.

The boost is not big here, but if we run QEMU and the sheep daemons on separate boxes, we can expect a bigger boost, because the extra 'flush' request then costs an additional network round trip on top of every write.

Besides performance, I think backward compatibility is more important:

1. If we run an old kernel (quite possible for a long-running server) which doesn't support WCE toggling, then we never get a chance to choose writethrough cache for the guest OS against a new QEMU (most users tend to update user-space tools to get bug fixes, but not the kernel).

2. Upper-layer software that relies on 'cache=xxx' to choose the cache mode will have its assumptions broken by a new QEMU.

My proposal (adding another field to BlockDriverState so that a driver can interpret the cache flag itself) works well with the current block layer. The Sheepdog driver would behave as follows (see the sketch at the end of this mail):

cache flags      behavior after guest toggles WCE off
writethrough     writethrough
writeback        writethrough (writeback + flush)

We can see that for cache=writeback, the behavior under guest WCE toggling is the same as expected today. The difference is that if we set cache=writethrough, the guest can't change it via WCE toggling.

Thanks,
Yuan
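
P.S. To make the proposal concrete, below is a minimal, compilable C model
of the idea. It is only a sketch: the field name 'drv_writethrough', the
stripped-down BlockDriverState, and the fake request functions are
illustrative, not the actual QEMU code or a final patch.

#include <stdbool.h>
#include <stdio.h>

/* Stripped-down model of BlockDriverState; not the real structure. */
typedef struct BlockDriverState {
    bool enable_write_cache;  /* existing: guest-visible WCE bit,
                               * toggled by the guest at runtime */
    bool drv_writethrough;    /* proposed (name illustrative): set once
                               * from cache=writethrough at open time,
                               * letting the driver interpret the cache
                               * flag itself (e.g. sheepdog) */
} BlockDriverState;

/* Model of the write path. */
static void bdrv_co_write(BlockDriverState *bs)
{
    if (bs->drv_writethrough) {
        /* sheepdog-style native writethrough: a single request,
         * regardless of any later guest WCE toggling */
        puts("single writethrough request");
        return;
    }
    puts("write request");
    if (!bs->enable_write_cache) {
        /* emulated writethrough: the extra flush after every write,
         * i.e. the overhead measured in the dd numbers above */
        puts("flush request");
    }
}

int main(void)
{
    /* cache=writethrough under the proposal: driver handles it natively */
    BlockDriverState native = { .enable_write_cache = false,
                                .drv_writethrough   = true };

    /* cache=writeback with guest WCE toggled off: QEMU emulation */
    BlockDriverState emulated = { .enable_write_cache = false,
                                  .drv_writethrough   = false };

    puts("cache=writethrough (self-interpreted):");
    bdrv_co_write(&native);     /* one request per write */

    puts("cache=writeback + WCE off (emulated):");
    bdrv_co_write(&emulated);   /* two requests per write */
    return 0;
}

This matches the table above: with cache=writeback, the WCE toggle still
gives the emulated write + flush behavior, while cache=writethrough pins
the driver's native single-request writethrough, so the guest cannot
toggle it away.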