> On 6 Jul 2018, at 17.55, Matthew Stroud <mattstr...@overstock.com> wrote:
> 
> We have changed the IO scheduler to NOOP, which seems to yield the best 
> results. However, I haven't looked into messing around with tuned. Let me 
> play with that and see if I get different results.
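For reference, the noop choice can be made persistent across reboots with a udev rule; a minimal sketch, assuming the SAN LUNs appear as plain sd* devices (the file name and device match are examples, adjust to your environment):

```
# /etc/udev/rules.d/60-io-scheduler.rules  (example path)
# Pin the noop elevator on SAN-backed block devices at boot.
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/scheduler}="noop"
```

On CentOS 7 the same effect can also be had per boot with the elevator= kernel parameter, or via a tuned profile as discussed below.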
> 
> On 5 Jul 2018, at 16.51, Matthew Stroud <mattstr...@overstock.com> wrote:
>  
> Bump. I'm hoping I can get people more knowledgeable than me to take a look.
> We back some of our ceph clusters with SAN SSD disk, particularly VSP G/F and 
> Purestorage. I'm curious what are some settings we should look into modifying 
> to take advantage of our SAN arrays. We had to manually set the class for the 
> luns to SSD class, which was a big improvement. However, we still see 
> situations where we get slow requests while the underlying disks and network 
> are underutilized.

Trust that you have already looked into tuning the SCSI layer through a proper 
tuned profile, perhaps an enterprise one (nobarrier, io scheduler none/deadline, 
etc.), to push your array the most.
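A minimal custom tuned profile along those lines might look like the sketch below. The profile name, the throughput-performance base, and the sd* device match are assumptions; check the tuned disk-plugin options on your version before relying on them:

```
# /etc/tuned/san-ssd/tuned.conf  (hypothetical profile name)
[main]
include=throughput-performance

[disk]
# Switch the elevator on the SAN LUNs; adjust the match to your devices.
elevator=noop
devices=sd*
```

Activate it with `tuned-adm profile san-ssd` and confirm with `tuned-adm active`.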
Also beware that you are not saturating your CPU or SAN network in such 
periods, as CPU is needed to push IOPS to your SAN array.
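On the device-class point: since Luminous the class can be pinned by hand with the ceph CLI; a sketch (osd.0 is an example id, repeat per OSD backed by a SAN LUN):

```
# Drop any auto-detected class first, then pin the ssd class.
ceph osd crush rm-device-class osd.0
ceph osd crush set-device-class ssd osd.0
# Verify the class column:
ceph osd tree
```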

>  
> More info about our setup: we are running CentOS 7 with Luminous as our ceph 
> release. We have 4 OSD nodes with 5x2TB disks each, set up as bluestore. Our 
> ceph.conf is attached with some information removed for security reasons.
>  
> Thanks ahead of time.
>  
> Thanks,
> Matthew Stroud
>  
>  
> 
> CONFIDENTIALITY NOTICE: This message is intended only for the use and review 
> of the individual or entity to which it is addressed and may contain 
> information that is privileged and confidential. If the reader of this 
> message is not the intended recipient, or the employee or agent responsible 
> for delivering the message solely to the intended recipient, you are hereby 
> notified that any dissemination, distribution or copying of this 
> communication is strictly prohibited. If you have received this communication 
> in error, please notify sender immediately by telephone or return email. 
> Thank you.
> _______________________________________________
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

