Hello,

Based on other discussions on this list, I have concluded that I need to add
NVMe to my OSD nodes and expand the NVMe (DB/WAL) volume for each OSD.  Is
there a way to do this without destroying and rebuilding each OSD (after
safely removing it from the cluster, of course)?  Is there a way to use
ceph-bluestore-tool for this?  Is it as simple as lvextend?
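For what it's worth, here is a sketch of the workflow I have in mind, assuming the DB/WAL sits on an LVM logical volume and the OSD id / VG / LV names below are placeholders, not my actual layout:

```shell
# Hypothetical example for OSD 12; substitute your own OSD id and LV path.
# Stop the OSD so BlueFS is quiesced before resizing.
systemctl stop ceph-osd@12

# Grow the DB logical volume into free space on the NVMe volume group
# (VG/LV names here are made up for illustration).
lvextend -L +356G /dev/ceph-db-vg/osd-12-db

# Ask BlueFS to expand into the newly available space on the device.
ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-12

systemctl start ceph-osd@12
```

If bluefs-bdev-expand does the trick, this would avoid the full destroy/rebuild cycle, but I'd appreciate confirmation that this is supported before trying it on a live OSD.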

Why more NVMe?  Frequent DB spillovers, plus the recommendation that the
NVMe (DB/WAL) should be 40GB for every TB of HDD.  When I did my initial
setup I thought that 124GB of NVMe per 12TB HDD would be sufficient, but by
the above metric it should be more like 480GB of NVMe (12TB x 40GB/TB).

Thanks.

-Dave

--
Dave Hall
Binghamton University
kdh...@binghamton.edu
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io