On Mon, Aug 25, 2014 at 11:43:38AM -0600, Chris Friesen wrote:
> On 08/25/2014 09:12 AM, Chris Friesen wrote:
> 
> >I set up another test, checking the inflight value every second.
> >
> >Running just "dd if=/dev/zero of=testfile2 bs=1M count=700
> >oflag=nocache&" gave a bit over 100 inflight requests.
> >
> >If I simultaneously run "dd if=testfile of=/dev/null bs=1M count=700
> >oflag=nocache&" then the number of inflight write requests peaks at 176.
> >
> >I should point out that the above numbers are with qemu 1.7.0, with a
> >ceph storage backend.  qemu is started with
> >
> >-drive file=rbd:cinder-volumes/.........
> 
> From a stacktrace that I added it looks like the writes are coming in via
> virtio_blk_handle_output().
> 
> Looking at virtio_blk_device_init() I see it calling virtio_add_queue(vdev,
> 128, virtio_blk_handle_output);
> 
> I wondered if that 128 had anything to do with the number of inflight
> requests, so I tried recompiling with 16 instead. I still saw the number of
> inflight requests go up to 178 and the guest took a kernel panic in
> virtqueue_add_buf(), so that wasn't very successful. :)
> 
> Following the code path in virtio_blk_handle_write(), it looks like it will
> bundle up to 32 writes into a single large iovec-based "multiwrite"
> operation.  But from there on down I don't see a limit on how many writes
> can be outstanding at any one time.  Still checking the code further up the
> virtio call chain.

Yes, virtio-blk does write merging.  Since QEMU 2.4.0 it also does read
request merging.
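
To illustrate the kind of merging you describe (up to 32 writes folded
into a single vectored submission), here is a rough standalone sketch.
This is not the actual QEMU code; the write_req struct, the MAX_MERGE
constant and the plain lseek()/writev() backend are just stand-ins for
illustration:

/* Sketch: batch sequentially adjacent write requests and submit each
 * run as one vectored write, never merging more than MAX_MERGE. */
#include <stdio.h>
#include <sys/types.h>
#include <sys/uio.h>
#include <unistd.h>
#include <fcntl.h>

#define MAX_MERGE 32            /* cap on requests merged per submission */

struct write_req {
    off_t offset;               /* byte offset in the backing file */
    void *buf;
    size_t len;
};

/* Submit requests [start, end), contiguous on disk, as one writev(). */
static int submit_merged(int fd, struct write_req *reqs, int start, int end)
{
    struct iovec iov[MAX_MERGE];
    int n = end - start;

    for (int i = 0; i < n; i++) {
        iov[i].iov_base = reqs[start + i].buf;
        iov[i].iov_len  = reqs[start + i].len;
    }
    if (lseek(fd, reqs[start].offset, SEEK_SET) < 0) {
        return -1;
    }
    return writev(fd, iov, n) < 0 ? -1 : 0;
}

/* Walk the pending requests and batch sequentially adjacent ones. */
static int flush_requests(int fd, struct write_req *reqs, int nreqs)
{
    int start = 0;

    while (start < nreqs) {
        int end = start + 1;
        while (end < nreqs &&
               end - start < MAX_MERGE &&
               reqs[end].offset ==
               reqs[end - 1].offset + (off_t)reqs[end - 1].len) {
            end++;
        }
        if (submit_merged(fd, reqs, start, end) < 0) {
            return -1;
        }
        start = end;
    }
    return 0;
}

int main(void)
{
    int fd = open("testfile2", O_WRONLY | O_CREAT, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* Three contiguous 4k writes end up in a single writev() call. */
    char buf[3][4096] = { { 0 } };
    struct write_req reqs[3] = {
        { 0,    buf[0], 4096 },
        { 4096, buf[1], 4096 },
        { 8192, buf[2], 4096 },
    };
    if (flush_requests(fd, reqs, 3) < 0) {
        perror("flush_requests");
    }
    close(fd);
    return 0;
}

The point of the cap is only that a single merged submission stays
bounded; it says nothing about how many merged submissions can be in
flight at once, which matches what you observed.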

I suggest using the fio benchmark tool with the following job file to
try submitting 256 I/O requests at the same time:

[randread]
blocksize=4k
filename=/dev/vda
rw=randread
direct=1
ioengine=libaio
iodepth=256
runtime=120
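
If you save that as, say, randread.fio (the name is arbitrary), you can
run it inside the guest with:

  fio randread.fio

and sample the guest-side in-flight counters once a second while it
runs, for example:

  watch -n1 cat /sys/block/vda/inflight

That should make it easy to see whether the number of outstanding
requests actually tracks iodepth.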
