> Also carefully read the word of caution section on David's link (which is absent in the Jewel version of the docs): a cache tier in front of an erasure coded data pool for RBD is almost always a bad idea.
I would say that statement is incorrect if using Bluestore. With Bluestore, small writes are supported on erasure coded pools, so that "always a bad idea" should be read as "can be a bad idea".

Nick

Kind regards,

Caspar Smit
Systemengineer
SuperNAS
Dorsvlegelstraat 13
1445 PA Purmerend
t: (+31) 299 410 414
e: caspars...@supernas.eu
w: www.supernas.eu

2017-12-26 23:12 GMT+01:00 David Turner <drakonst...@gmail.com>:

Please use the version of the docs for your installed version of Ceph. Note the Jewel in your URL and the Luminous in mine. In Luminous you no longer need a cache tier to use EC with RBDs.

http://docs.ceph.com/docs/luminous/rados/operations/cache-tiering/

On Tue, Dec 26, 2017, 4:21 PM Karun Josy <karunjo...@gmail.com> wrote:

Hi,

We are using erasure coded pools in a Ceph cluster for RBD images. The Ceph version is 12.2.2 Luminous.

http://docs.ceph.com/docs/jewel/rados/operations/cache-tiering/

Here it says we can use cache tiering in front of EC pools. To use erasure code with RBD we have a replicated pool to store the metadata and an EC pool as the data pool. Is it possible to set up cache tiering since there is already a replicated pool that is being used?

Karun Josy
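For reference, the cache tier layering described in the cache-tiering docs uses a separate, fast replicated pool placed in front of the EC data pool; the existing replicated metadata pool is not what gets reused as the cache. A rough sketch only, with "cachepool" and "ecpool" as example names and example PG counts:

    ceph osd pool create cachepool 64 64 replicated      # small replicated pool on fast media (example sizing)
    ceph osd tier add ecpool cachepool                   # attach cachepool as a tier of the EC data pool
    ceph osd tier cache-mode cachepool writeback         # writeback mode so RBD writes land in the cache first
    ceph osd tier set-overlay ecpool cachepool           # route client I/O for ecpool through cachepool
    ceph osd pool set cachepool hit_set_type bloom       # hit set tracking needed by the tiering agent
    ceph osd pool set cachepool target_max_bytes 1099511627776   # example limit so the agent flushes/evicts

The word-of-caution point quoted above still applies to this layout, which is why the thread leans toward the Luminous approach instead.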
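The alternative David and Nick refer to (Luminous with Bluestore, no cache tier) is to enable overwrites on the EC pool and point the RBD data at it directly, keeping the image metadata in the replicated pool. Again only a sketch, with "ecpool", "rbdpool" and "image1" as example names, and note that allow_ec_overwrites requires all OSDs serving the pool to be Bluestore:

    ceph osd pool create ecpool 128 128 erasure               # EC data pool (PG counts are examples)
    ceph osd pool set ecpool allow_ec_overwrites true         # partial overwrites; Bluestore OSDs only
    ceph osd pool application enable ecpool rbd
    rbd create --size 100G --data-pool ecpool rbdpool/image1  # metadata in replicated 'rbdpool', data in 'ecpool'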
_______________________________________________ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com