Some additional information gathered from our monitoring:
It seems fast_read does indeed become active immediately, but I do not 
understand its effect. 
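
(Side note, in case it is useful: the configured value can be queried at 
runtime with the usual pool-get command, e.g.

    ceph osd pool get cephfs_data fast_read

using our data pool name from the quoted mail below.)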

With fast_read = 0, we see:
~ 5.2 GB/s total outgoing traffic from all 6 OSD hosts
~ 2.3 GB/s total incoming traffic to all 6 OSD hosts

With fast_read = 1, we see:
~ 5.1 GB/s total outgoing traffic from all 6 OSD hosts
~ 3.0 GB/s total incoming traffic to all 6 OSD hosts

I would have expected exactly the opposite to happen... 

Cheers,
        Oliver

On 26.02.2018 at 12:51, Oliver Freyermuth wrote:
> Dear Cephalopodians,
> 
> in the few remaining days in which we can still freely experiment with
> parameters, we just tried setting:
> ceph osd pool set cephfs_data fast_read 1
> but did not notice any effect on large sequential read throughput on 
> our k=4 m=2 EC pool. 
> 
> Should this become active immediately? Or do OSDs need a restart first? 
> Is the option already deemed safe? 
> 
> Or is it just that we should not expect any change in throughput, since our 
> system (for large sequential reads)
> is purely limited by IPoIB throughput, and the shards are requested via 
> the primary OSD in any case? 
> So the gain would not be in throughput, but the reply to the client would 
> just arrive slightly earlier (before all shards have been received)? 
> In that case, this option would mainly be of interest if disk I/O were 
> congested (which has not happened for us yet)
> and would not help much if the system is limited by network bandwidth. 
> 
> Cheers,
>       Oliver
> 
> 
> 
> 

