Oops, I was off by a factor of 1000 in my original subject. We actually have 4M and 8M reads being split into 100 512kB reads per second. So perhaps these are limiting:
# cat /sys/block/vdb/queue/max_sectors_kb
512
# cat /sys/block/vdb/queue/read_ahead_kb
512

That would explain the ceiling: 100 reads/s × 512 kB = 51.2 MB/s, which is exactly what we measure. (A sketch of how these values might be raised follows after the quoted message.)

Questions below remain.

Cheers, Dan

On 27 Nov 2014 18:26, Dan Van Der Ster <daniel.vanders...@cern.ch> wrote:

Hi all,

We throttle (with qemu-kvm) rbd devices to 100 w/s and 100 r/s (and 80 MB/s write and read). With fio we cannot exceed 51.2 MB/s sequential or random reads, no matter the read block size. (But with large writes we can achieve 80 MB/s.)

I just realised that the VM subsystem is probably splitting large reads into 512 byte reads, following at least one of:

# cat /sys/block/vdb/queue/hw_sector_size
512
# cat /sys/block/vdb/queue/minimum_io_size
512
# cat /sys/block/vdb/queue/optimal_io_size
0

vdb is an RBD device attached via librbd, with rbd cache=true, and mounted like this:

/dev/vdb on /vicepa type xfs (rw)

Did anyone observe this before? Is there a kernel setting to stop splitting reads like that, or a way to change the io_sizes reported by RBD to the kernel? (I found a similar thread on the lvm mailing list, but lvm shouldn’t be involved here.)

All components here are running the latest dumpling. The client VM is running CentOS 6.6.

Cheers, Dan
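For reference, a minimal sketch of how those two limits could be raised from inside the guest. This is untested against this setup; 4096 is only an example value, max_sectors_kb cannot be set higher than max_hw_sectors_kb reports, and the settings do not survive a reboot:

# cat /sys/block/vdb/queue/max_hw_sectors_kb
(the hard ceiling for max_sectors_kb; larger values are rejected)
# echo 4096 > /sys/block/vdb/queue/max_sectors_kb
(allow the block layer to issue requests of up to 4 MB)
# echo 4096 > /sys/block/vdb/queue/read_ahead_kb
(read ahead up to 4 MB for sequential workloads)

If the splitting really is down to max_sectors_kb, then with 4 MB requests the 100 r/s throttle should stop being the binding limit and the 80 MB/s bandwidth cap should apply instead.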
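The post does not include the fio job that was used; a sequential-read test of the kind described might look roughly like the following, where the job name, file size, queue depth and runtime are made-up placeholders:

# fio --name=seqread --directory=/vicepa --size=4G \
      --rw=read --bs=4M --direct=1 --ioengine=libaio --iodepth=4 \
      --runtime=60 --time_based

Note that even with --bs=4M, the guest block layer splits each request according to max_sectors_kb, so the observed throughput tracks 100 r/s × max_sectors_kb rather than the block size submitted by fio.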