[ceph-users] Re: SAS vs SATA for OSD - WAL+DB sizing.

2021-06-03 Thread Dave Hall
Mark, We are running a mix of RGW, RBD, and CephFS. Our CephFS is pretty big, but we're moving a lot of it to RGW. What prompted me to go looking for a guideline was a high frequency of spillover warnings as our cluster filled up past the 50% mark. That was with 14.2.9, I think. I understand t…

[ceph-users] Re: SAS vs SATA for OSD - WAL+DB sizing.

2021-06-03 Thread Dave Hall
Anthony, I had recently found a reference in the Ceph docs that indicated something like 40 GB per TB for WAL+DB space. For a 12 TB HDD that comes out to 480 GB. If this is no longer the guideline I'd be glad to save a couple of dollars. -Dave -- Dave Hall Binghamton University kdh...@binghamton.edu
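
For reference, the arithmetic behind that figure works out as below; note that 40 GB per TB is the same thing as the ~4% guideline mentioned later in the thread. A minimal sketch (function name and the fixed 4% ratio are illustrative, not anything from the Ceph docs):

    # Rough WAL+DB sizing per the ~4% (40 GB per TB) rule of thumb
    # cited in this thread. Actual needs depend on workload.
    def db_size_gb(osd_capacity_tb: float, ratio: float = 0.04) -> float:
        """Return a suggested WAL+DB size in GB for an OSD of the given capacity."""
        return osd_capacity_tb * 1000 * ratio

    print(db_size_gb(12))  # 12 TB HDD -> 480.0 GB, matching the figure above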

[ceph-users] Re: SAS vs SATA for OSD - WAL+DB sizing.

2021-06-03 Thread Anthony D'Atri
In releases before … Pacific I think, there are certain discrete capacities that the DB will actually utilize: the sums of the RocksDB levels. Lots of discussion in the archives. AIUI in those releases, with a 500 GB BlueStore WAL+DB device, with default settings you'll only actually use ~300 GB most…
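
A quick sketch of why those discrete sizes arise, assuming RocksDB's default base level size of 256 MB and a 10x multiplier per level (assumed defaults for illustration; the exact BlueStore tuning may differ, see the archive discussions Anthony points to):

    # Cumulative RocksDB level sums under assumed defaults:
    # base level ~256 MB, each subsequent level 10x larger.
    base_mb = 256      # assumed max_bytes_for_level_base
    multiplier = 10    # assumed level size multiplier

    total_mb = 0
    for level in range(4):
        total_mb += base_mb * multiplier ** level
        print(f"levels 0..{level}: ~{total_mb / 1000:.1f} GB usable")
    # -> ~0.3 GB, ~2.8 GB, ~28.4 GB, ~284.4 GB
    # A 500 GB partition holds the ~284 GB sum but not the next one
    # (~2.8 TB), so only ~300 GB of it is ever used, as described above.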

[ceph-users] Re: SAS vs SATA for OSD - WAL+DB sizing.

2021-06-03 Thread Mark Nelson
FWIW, those guidelines try to be sort of a one-size-fits-all recommendation that may not apply to your situation.  Typically RBD has pretty low metadata overhead so you can get away with smaller DB partitions.  4% should easily be enough.  If you are running heavy RGW write workloads with small…
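
Putting the two replies together: on the releases Anthony describes, a partition sized by the 4% rule may still only be used up to the nearest level sum below it. A hypothetical helper illustrating that (not an official Ceph tool; the level sums come from the assumed defaults sketched earlier):

    # Hypothetical helper: given the raw space you could allocate, report how
    # much the DB would actually use on pre-Pacific releases (per this thread).
    LEVEL_SUMS_GB = [0.3, 2.8, 28.4, 284.4]  # cumulative sums from the sketch above

    def usable_db_gb(partition_gb: float) -> float:
        """Largest cumulative level sum that fits in the given partition."""
        usable = 0.0
        for s in LEVEL_SUMS_GB:
            if s <= partition_gb:
                usable = s
        return usable

    print(usable_db_gb(500))  # -> 284.4 (much of a 500 GB device sits idle)
    print(usable_db_gb(480))  # -> 284.4 (same for the 480 GB figure above)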