Hi,

Does anyone in the community have experience using "bcache" as a backend for
Ceph?
Nowadays most Ceph deployments seem to use either all-SSD or all-HDD data
disks as the backend. To balance cost against performance and capacity, we are
trying a hybrid solution with "bcache", which uses an SSD as a cache in front
of an HDD to improve performance (especially for random IO). It is also easy
to deploy, since it is already included in the Linux kernel (a minimal setup
sketch follows the links below).

https://en.wikipedia.org/wiki/Bcache
https://github.com/torvalds/linux/tree/master/drivers/md/bcache
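
For context, here is a rough sketch of how one OSD's bcache device can be
assembled. The device names are only examples, and on many systems udev
registers the devices automatically, so treat this as a sketch rather than a
recipe:

    import subprocess

    BACKING_DEV = "/dev/sdb"      # example: HDD that will hold the OSD data
    CACHE_DEV = "/dev/nvme0n1p1"  # example: SSD partition used as the cache

    # Format the backing HDD and the caching SSD for bcache.
    subprocess.run(["make-bcache", "-B", BACKING_DEV], check=True)
    subprocess.run(["make-bcache", "-C", CACHE_DEV], check=True)

    # Read the cache set UUID from the SSD's superblock and attach the
    # backing device (now visible as /dev/bcache0) to that cache set.
    out = subprocess.run(["bcache-super-show", CACHE_DEV],
                         capture_output=True, text=True, check=True).stdout
    cset_uuid = next(line.split()[-1] for line in out.splitlines()
                     if line.startswith("cset.uuid"))
    with open("/sys/block/bcache0/bcache/attach", "w") as f:
        f.write(cset_uuid)

The OSD is then created on /dev/bcache0 instead of directly on the HDD.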

I have two questions below and hope the storage experts in this community can
kindly share some suggestions:
1. Are there any best practices for tuning bcache for use with Ceph?
- Currently we enable the "writeback" mode of bcache and leave the other
settings at their defaults (a sketch of the knobs we touch is shown after
this item)
- bcache has a writeback throttling algorithm that controls the rate of
writeback from the SSD cache to the HDD
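
The tuning we apply today is minimal. A sketch of it, assuming the OSD sits on
/dev/bcache0 (the commented-out values are examples under consideration, not
recommendations):

    def set_knob(path, value):
        # bcache is tuned through plain sysfs files.
        with open(path, "w") as f:
            f.write(str(value))

    B = "/sys/block/bcache0/bcache/"

    set_knob(B + "cache_mode", "writeback")  # cache writes on the SSD too

    # Candidate knobs we have not changed from their defaults yet:
    # set_knob(B + "sequential_cutoff", 4 * 1024 * 1024)  # bypass the cache
    #     for sequential IO larger than this (4 MiB is the default)
    # set_knob(B + "writeback_percent", 10)  # dirty-data target that the
    #     writeback throttling algorithm steers toward (10 is the default)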


2. Since our Ceph cluster runs on a storage backend with a "cache" layer, how
should we design a benchmark scenario that yields stable/predictable IOPS
results?
- We use Ceph as the block-storage backend for the OpenStack Cinder service.
Our benchmark runs in OpenStack guest VMs against volumes (block storage)
attached from Ceph
- In our tests, we noticed that the read/write IOPS depends heavily on the
bcache cache hit rate and on the amount of data written relative to the
dirty-data threshold specified in the bcache configuration:
    1) If we run the read test repeatedly for several rounds, the IOPS result
gradually increases, because more and more data gets cached on the SSD layer
and the cache hit rate rises
    2) If we run a write test with a large amount of data, then once the dirty
data in the SSD cache layer reaches the threshold, bcache begins to write
back/flush to the backend HDD, and the incoming IOPS drops or fluctuates

- I suppose that besides bcache, Ceph has other "cache tier/layer" solutions;
how does one benchmark under this kind of design? If we cannot find a scenario
that gives stable benchmark IOPS results, we cannot evaluate the impact of a
Ceph configuration or code change on Ceph performance (sometimes an IOPS
increase/drop may be due only to bcache behavior). A sketch of the protocol we
have in mind is shown below this list.
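
What we are experimenting with is forcing the cache into a known state before
every measurement: flush all dirty data on each OSD host, then run the same
fio job in the guest until back-to-back rounds converge. A rough sketch
(device paths, the writeback_percent default of 10, and the fio parameters are
all examples; the flush step runs on the OSD hosts while fio runs inside the
VM):

    import subprocess
    import time

    BCACHE = "/sys/block/bcache0/bcache/"

    def flush_dirty_data():
        # Drop the dirty-data target to 0 so the writeback throttle flushes
        # everything, then wait until bcache reports no dirty data left.
        # dirty_data is reported human-readable, e.g. "0.0k" when empty.
        with open(BCACHE + "writeback_percent", "w") as f:
            f.write("0")
        while open(BCACHE + "dirty_data").read().strip() != "0.0k":
            time.sleep(5)
        with open(BCACHE + "writeback_percent", "w") as f:
            f.write("10")  # restore the (default) target

    def run_round(i):
        # One fio round against the attached Cinder volume (/dev/vdb here).
        subprocess.run(["fio", "--name=randwrite", "--filename=/dev/vdb",
                        "--rw=randwrite", "--bs=4k", "--iodepth=32",
                        "--direct=1", "--time_based", "--runtime=300",
                        "--output-format=json",
                        "--output=round%d.json" % i], check=True)

    for i in range(5):
        flush_dirty_data()   # on the OSD hosts
        run_round(i)         # in the guest VM

Even with this, the read-side hit rate still drifts between rounds, which is
exactly the instability we would like advice on.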

Thank you very much, and looking forward to your expert suggestions!