> 
> This is an expected result, and it is not specific to Ceph. Any
> storage that consists of multiple disks will produce a performance
> gain over a single disk only if the workload allows for concurrent use
> of these disks - which is not the case with your 4K benchmark due to
> the de-facto missing readahead.

I don't think that affects writes, though -- readahead only matters for reads.


> To allow for faster linear reads and writes, please create a file,
> /etc/udev/rules.d/80-rbd.rules, with the following contents (assuming
> that the VM sees the RBD as /dev/sda):

I don't think the OP said that the client is a VM.  It might be, or it might be 
a KRBD mount, or it might go through CSI; we don't know.
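
For reference, the rule file contents got cut from the quote above, but that 
kind of readahead udev rule usually looks something like this (device name and 
value are hypothetical, assuming the RBD shows up as /dev/sda):

```
# /etc/udev/rules.d/80-rbd.rules -- sketch, not the original poster's rule
ACTION=="add|change", KERNEL=="sda", ATTR{queue/read_ahead_kb}="4096"
```

Which, again, would only help the read side.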

I asked the OP privately if it's a VM, because the first thing I check in this 
situation is libvirt / librbd throttling.  It sounds a *lot* like a saturated 
client-side IOPS throttle, something I've seen time and again in virtualization 
scenarios.
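
If it does turn out to be a VM using librbd, that kind of throttle typically 
lives in the domain XML's <iotune> element, something like this (all names and 
values hypothetical):

```xml
<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <iotune>
    <!-- caps total IOPS for this disk; a 4K random workload hits
         this ceiling long before bandwidth limits do -->
    <total_iops_sec>500</total_iops_sec>
  </iotune>
  <target dev='vda' bus='virtio'/>
</disk>
```

`virsh blkdeviotune <domain> <device>` shows the live values, and Ceph-side QoS 
(e.g. rbd_qos_iops_limit on the image) is worth checking too.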

_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
