I have been reading a lot about cache tiering, and I wanted to know how best
to go about adding a cache tier to a production environment.


Our current setup is Infernalis (9.2.1) on 4 nodes, each with 8 x 4TB SATA
drives and 2 x 400GB NVMe drives acting as journals (a 1:4 journal-to-OSD
ratio). There is a fair amount of spare space on the NVMes, so we would like
to partition that and make them OSDs for a cache tier. Each NVMe should have
about 200GB available, giving us plenty of cache space (8 x 200GB across the
cluster), while the journals stay on the NVMes, which have more than enough
bandwidth for both roles.
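
For the partitioning step, this is roughly what we have in mind; a minimal
sketch, assuming a device named /dev/nvme0n1 with the spare space at the end,
a new OSD id of 32, and a to-be-created "nvme" root in the crushmap (all of
those names are placeholders for our environment):

    # carve the spare ~200GB into a new partition (offsets are examples)
    parted /dev/nvme0n1 mkpart osd-cache 200GB 400GB
    # register a new OSD id (prints e.g. 32) and build its data dir
    ceph osd create
    mkfs.xfs /dev/nvme0n1p5
    mkdir -p /var/lib/ceph/osd/ceph-32
    mount /dev/nvme0n1p5 /var/lib/ceph/osd/ceph-32
    ceph-osd -i 32 --mkfs --mkkey
    ceph auth add osd.32 osd 'allow *' mon 'allow profile osd' \
        -i /var/lib/ceph/osd/ceph-32/keyring
    # add it to the crushmap at weight 0, under our nvme root
    ceph osd crush add osd.32 0.0 host=node1-nvme root=nvme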


Our primary use for Ceph at this time is powering RBD block storage for an
OpenStack cluster. The vast majority of our users treat the system as
long-term storage (store and hold data), but we do get some "hotspots" from
time to time, and we want the cache tier to help smooth those out a bit.


I have read this page:
http://docs.ceph.com/docs/master/rados/operations/cache-tiering/ and believe
that I have a handle on most of it.


I also recall some additional guidance regarding permissions for block
device access (making sure that your cephx capabilities allow clients to
access the cache-tier pool as well as the base pool).
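
In our case I assume that means something like the following, where "volumes"
is our base pool and "volumes-cache" is the planned cache pool (note that
"ceph auth caps" replaces the existing caps wholesale, so the full list has
to be restated):

    ceph auth caps client.volumes mon 'allow r' \
        osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=volumes-cache'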


Our plan is:


- partition the NVMes and create the OSDs manually with a 0 weight (roughly
as sketched above)

- create our new cache pool, and adjust the crushmap to place the cache pool
on these OSDs (see the commands after this list)

- make sure permissions and settings are taken care of (making sure our
cephx volumes user has rwx on the cache-tier pool, per the caps example
above)

- add the cache-tier to our volumes pool (also sketched after this list)

- ???

- Profit!
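
For steps two and four, the rough command sequence we have sketched out is
below; pool names, bucket names, and pg counts are placeholders, and it
assumes a dedicated "nvme" crush root so the cache pool never lands on the
SATA OSDs:

    # separate crush root and per-node host buckets for the NVMe OSDs
    ceph osd crush add-bucket nvme root
    ceph osd crush add-bucket node1-nvme host
    ceph osd crush move node1-nvme root=nvme
    # ...repeat for the other three nodes, then add the new OSDs under them

    # rule that constrains a pool to the nvme root
    ceph osd crush rule create-simple nvme_ruleset nvme host

    # the cache pool itself (pg count is a guess for 8 small OSDs)
    ceph osd pool create volumes-cache 256 256
    ceph osd pool set volumes-cache crush_ruleset 1   # id from 'ceph osd crush rule dump'

    # attach it to the base pool in writeback mode
    ceph osd tier add volumes volumes-cache
    ceph osd tier cache-mode volumes-cache writeback
    ceph osd tier set-overlay volumes volumes-cache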


Is there anything we might be missing here? Are there any other issues we
should be aware of? I seem to recall some discussion on the list about
settings that were required to make caching work correctly, but my memory
suggests those changes have since been folded into the page linked above. Is
that correct?
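
For reference, the settings I had in mind are the hit set and flush/evict
targets; my understanding is that without target_max_bytes (or
target_max_objects) the tiering agent never flushes or evicts at all. The
values below are guesses for our sizing (8 x 200GB raw at size 3), not
recommendations:

    ceph osd pool set volumes-cache hit_set_type bloom
    ceph osd pool set volumes-cache hit_set_count 1
    ceph osd pool set volumes-cache hit_set_period 3600
    # absolute cap on cached data; has to account for replication
    ceph osd pool set volumes-cache target_max_bytes 400000000000
    # start flushing dirty objects at 40% of target, evict at 80%
    ceph osd pool set volumes-cache cache_target_dirty_ratio 0.4
    ceph osd pool set volumes-cache cache_target_full_ratio 0.8
    # require a hit-set hit before promoting on read
    ceph osd pool set volumes-cache min_read_recency_for_promote 1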


Tom Walsh

https://expresshosting.net/
