Hi,
We use a write-around cache tier with libradosstriper-based clients. We hit a 
bug that causes performance degradation: 
http://tracker.ceph.com/issues/22528 . It is especially noticeable with a lot 
of small objects (the size of one striper chunk): such objects get promoted 
on every read/write lock. :)
That also makes the cache tier very hard to benchmark.

We also have a small testing pool with RBD disks for VMs. It works better 
with a cache tier on SSDs, but it doesn't see heavy I/O load.

It's better to benchmark the cache tier for your specific use case and choose 
the cache mode based on the benchmark results.
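For reference, a minimal sketch of such a benchmark: attach an SSD pool as a tier in front of a base pool, then drive load through the base pool with `rados bench`. The pool names ("base", "cache") and all thresholds below are placeholders, not values from our cluster — adjust them for your setup.

```shell
# Attach the cache pool and pick a cache mode (writeback shown here;
# repeat the benchmark with other modes, e.g. readproxy, to compare).
ceph osd tier add base cache
ceph osd tier cache-mode cache writeback
ceph osd tier set-overlay base cache

# A hit-set configuration is required before the tier will serve I/O.
# These numbers are examples only.
ceph osd pool set cache hit_set_type bloom
ceph osd pool set cache hit_set_count 12
ceph osd pool set cache hit_set_period 14400
ceph osd pool set cache target_max_bytes 100000000000   # ~100 GB, example

# Benchmark writes and reads through the base pool, then repeat with the
# tier removed (or a different cache mode) and compare the results.
rados bench -p base 60 write --no-cleanup
rados bench -p base 60 seq
rados bench -p base 60 rand
rados -p base cleanup
```

Ideally, run the same workload your clients generate (in our case, libradosstriper reads/writes of small objects) rather than only `rados bench`, since promotion behavior depends heavily on the object-size and access pattern.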

06.03.2018, 02:28, "Budai Laszlo" <laszlo.bu...@gmail.com>:
> Dear all,
>
> I have some questions about cache tier in ceph:
>
> 1. Can someone share experiences with cache tiering? What are the sensitive 
> things to pay attention to regarding the cache tier? Can one use the same ssd 
> for both cache and
> 2. Is cache tiering supported with bluestore? Any advice for using cache 
> tier with Bluestore?
>
> Kind regards,
> Laszlo
> _______________________________________________
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

-- 
Regards,
Aleksei Zakharov

