Hi All,

I’m doing some testing on the new high/low speed cache tiering flushing and I’m 
trying to get my head round the effect that changing these two settings has on 
the flushing speed. When setting osd_agent_max_ops to 1, I can get up to a 20% 
improvement before the osd_agent_max_high_ops value kicks in for high speed 
flushing, which is great for bursty workloads.

As I understand it, these settings loosely affect the number of concurrent 
flush operations each cache pool OSD will issue down to the base pool.

I may have got completely the wrong idea in my head, but I can’t understand how 
a static default setting will work with different cache/base ratios. For 
example, if I had a relatively small number of very fast cache tier OSDs (PCIe 
SSD perhaps) and a much larger number of base tier OSDs, would the value need 
to be increased to ensure sufficient utilisation of the base tier and to make 
sure that the cache tier doesn’t fill up too fast?
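To put numbers on what I mean, here's my back-of-envelope reasoning (this is my 
own assumption about how the per-OSD setting aggregates, not anything from the 
docs):

```python
# Rough sketch: total concurrent flushes leaving the cache tier is
# (assumed to be) num_cache_osds * osd_agent_max_ops, spread across
# the base tier OSDs.
def flushes_per_base_osd(num_cache_osds, num_base_osds, agent_max_ops):
    total_flushes = num_cache_osds * agent_max_ops
    return total_flushes / num_base_osds

# Small, fast cache tier (4 PCIe SSD OSDs) over 40 spinning base OSDs,
# with 2 flush ops per cache OSD: each base disk sees well under one
# concurrent flush on average, so the base tier may sit underutilised.
print(flushes_per_base_osd(4, 40, 2))   # 0.2
```

So with a small cache tier the base tier barely sees any flush traffic, which 
is why I'd expect the value to need raising in that scenario.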

Alternatively, where the cache tier is based on spinning disks, or where the 
base tier is not comparatively as large, this value may need to be reduced to 
stop it from saturating the base tier disks.

Any Thoughts?

Nick

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com