Hi all,
We throttle our rbd devices (via qemu-kvm) to 100 w/s and 100 r/s, and to 80MB/s 
for both writes and reads.
With fio we cannot exceed 51.2MB/s for sequential or random reads, no matter the 
read block size. (With large writes, however, we do reach the full 80MB/s.)

I just realised that the VM's block subsystem is probably splitting large reads 
into smaller requests: 51.2MB/s at 100 r/s works out to exactly 512kB per read. 
Possibly related is what the guest reports for the device:

# cat /sys/block/vdb/queue/hw_sector_size
512
# cat /sys/block/vdb/queue/minimum_io_size
512
# cat /sys/block/vdb/queue/optimal_io_size
0
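
For what it's worth, the one request-size limit I know of in that directory is 
max_sectors_kb, which defaults to 512 (kB) on the kernels I've looked at. I 
haven't confirmed it is what's splitting the reads here, but it can be checked 
and raised (only up to max_hw_sectors_kb) like this:

  # cat /sys/block/vdb/queue/max_sectors_kb
  # cat /sys/block/vdb/queue/max_hw_sectors_kb
  # echo 4096 > /sys/block/vdb/queue/max_sectors_kb

If that is the culprit, raising it should let a 4MB fio read reach the device as 
a single request instead of eight 512kB ones.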

vdb is an RBD device attached via librbd, with rbd cache=true, and it is mounted 
like this:

  /dev/vdb on /vicepa type xfs (rw)
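
For completeness, the drive is defined roughly like this (the pool/image name 
and id are made up here, 83886080 is just 80*1024*1024, and the exact option 
spelling may differ with the qemu version):

  -drive file=rbd:volumes/vm-disk:id=admin,if=virtio,format=raw,cache=writeback,\
         iops_rd=100,iops_wr=100,bps_rd=83886080,bps_wr=83886080

cache=writeback is the qemu-side setting that corresponds to rbd cache=true in 
librbd.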

Has anyone observed this before? 

Is there a kernel setting to stop splitting reads like that, or a way to change 
the io sizes reported by RBD to the kernel?

(I found a similar thread on the lvm mailing list, but lvm shouldn’t be 
involved here.)

All components here are running the latest dumpling release. The client VM is 
running CentOS 6.6.

Cheers, Dan