On 08.01.2013 12:08, Liu Yuan wrote:
> On 01/08/2013 06:51 PM, Kevin Wolf wrote:
>> On 08.01.2013 11:39, Liu Yuan wrote:
>>> This also explains why I saw a regression in write performance: old
>>> QEMU can issue multiple write requests in one go, but now the
>>> requests are sent one by one (even with cache=writeback set), which
>>> makes Sheepdog write performance drop a lot. Is it possible to issue
>>> multiple requests in one go as old QEMU did?
>>
>> Huh? We didn't change anything in that respect, or at least nothing
>> I'm aware of. qemu has always had only a single-request
>> bdrv_co_writev, so if anything, that batching must have happened
>> inside the Sheepdog code? Do you know what makes it not batch
>> requests any more?
>
> QEMU v1.1.x works well with batched write requests. The Sheepdog block
> driver doesn't do any batching trick as far as I know, it just sends
> requests as they are fed to it. There are no noticeable changes
> between v1.1.x and the current master with regard to sheepdog.c.
>
> To detail the different behaviour, as seen from the Sheepdog daemon
> which receives the requests from QEMU:
>
> old: can receive many requests at virtually the same time and handle
> them concurrently
> now: only receives one request, handles it, replies and gets another
>
> So I think the problem is that current QEMU waits for the write
> response before sending another request.
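If I read you right, the daemon-side difference you describe is roughly
the following (a made-up C sketch with stub helpers, not actual
sheepdog.c or QEMU code):

    #include <stdio.h>

    /* Made-up stubs standing in for the socket I/O, illustration only */
    static void send_request(int id)   { printf("send request %d\n", id); }
    static void wait_for_reply(int id) { printf("got reply %d\n", id); }

    int main(void)
    {
        int i;

        /* old: every request hits the wire before any reply is
         * awaited, so the daemon sees them at virtually the same time
         * and can handle them concurrently */
        for (i = 0; i < 4; i++) {
            send_request(i);
        }
        for (i = 0; i < 4; i++) {
            wait_for_reply(i);
        }

        /* now: strictly one request in flight; the next send only
         * happens after the previous reply has arrived */
        for (i = 0; i < 4; i++) {
            send_request(i);
            wait_for_reply(i);
        }
        return 0;
    }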
I can't see a reason why it would do that. Can you bisect this?

>>> It seems it is hard to restore the old semantics of the cache flags
>>> due to the new design of the QEMU block layer. So would you accept
>>> adding a 'flags' field to BlockDriverState which carries the 'cache
>>> flags' from the user, to keep backward compatibility?
>>
>> No, going back to the old behaviour would break guest-toggled WCE.
>
> Guest-toggled WCE only works with IDE, and it seems that virtio-blk
> doesn't support it, no? And I think there is a huge number of
> virtio-blk users.

It works with virtio-blk and SCSI as well.

> I didn't mean to break WCE. What I meant is to allow backward
> compatibility. For example, the Sheepdog driver could make use of this
> dedicated cache flags field to implement its own cache control without
> affecting other drivers at all.

How would you do it? With a WCE that changes during runtime, the idea of
a flag that is passed to bdrv_open() and stays valid as long as the
BlockDriverState exists doesn't match reality any more.

Kevin
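PS: To make the mismatch concrete, here is a minimal sketch of the
problem with a static open-time flag (made-up struct and helper names,
not the real BlockDriverState or device code):

    #include <stdbool.h>
    #include <stdio.h>

    /* Made-up state, only to illustrate why a flag snapshotted at
     * open time cannot represent a guest-toggled write cache */
    typedef struct {
        bool wce_at_open;   /* snapshot taken once, at bdrv_open() time */
        bool wce_now;       /* live value the guest can toggle at runtime */
    } FakeBDS;

    /* e.g. the guest disabling the write cache through the device */
    static void guest_set_wce(FakeBDS *bs, bool on)
    {
        bs->wce_now = on;
    }

    static void write_request(const FakeBDS *bs)
    {
        /* a driver keyed off the open-time flag would keep caching
         * even after the guest switched to writethrough */
        printf("open-time flag: writeback=%d, guest wants: writeback=%d\n",
               bs->wce_at_open, bs->wce_now);
    }

    int main(void)
    {
        FakeBDS bs = { .wce_at_open = true, .wce_now = true };

        write_request(&bs);        /* both agree: writeback */
        guest_set_wce(&bs, false); /* guest toggles WCE off at runtime */
        write_request(&bs);        /* the open-time flag is now stale */
        return 0;
    }

Any Sheepdog-private cache control would run into the same problem: it
would have to track the live value, not a flag fixed at open time.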