Hi everyone,

I recalled the figure of 120 GB and went looking for its source:

- It appears to originate from the RHCS 7 documentation [1]: "Create a
dedicated Storage Partition Size for /var/lib/ceph with a minimum size of
120 GB; 240 GB is recommended. If a dedicated partition is not feasible,
ensure that /var is on a dedicated partition with at least the
above-mentioned free space."
- The RHCS 8 / IBM Storage Ceph documentation states "10 GB per
mon-container, 50 GB Recommended" [2], which seems quite low and risky to
me.
- The upstream documentation says "100 GB per daemon, SSD is recommended" [3].

This highlights an inconsistency.

I've seen mon DBs grow up to 60 GB during long recoveries on clusters with
1000+ OSDs (see [4] for details). So I think 120 GB+ is still a good number.
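For anyone who wants to keep an eye on this on their own mons, here is a
minimal Python sketch that compares the size of the mon's store.db plus the
remaining space on its filesystem against that 120 GB figure. The path and
the threshold are assumptions for illustration: it uses the package-based
layout /var/lib/ceph/mon/<cluster>-<id>/store.db, so adjust STORE_PATH for
cephadm/containerized or Rook deployments, which place the store elsewhere.

import os
import shutil

# Assumption for illustration: package-based mon store path; cephadm and
# Rook deployments keep the store under different paths, so adjust this.
STORE_PATH = "/var/lib/ceph/mon/ceph-a/store.db"
TARGET_HEADROOM_GB = 120  # the RHCS 7 minimum discussed above


def dir_size_gib(path):
    """Sum the size of all files under path (the mon's RocksDB files)."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            try:
                total += os.path.getsize(os.path.join(root, name))
            except OSError:
                pass  # files can disappear mid-walk during compaction
    return total / 1024**3


store_gib = dir_size_gib(STORE_PATH)
free_gib = shutil.disk_usage(STORE_PATH).free / 1024**3

print(f"mon store.db size : {store_gib:6.1f} GiB")
print(f"free on filesystem: {free_gib:6.1f} GiB")
if store_gib + free_gib < TARGET_HEADROOM_GB:
    print(f"WARNING: less than {TARGET_HEADROOM_GB} GiB of headroom for the mon store")

Note this only covers the free-space angle; MON_DISK_BIG [4] itself fires on
the store size alone once it crosses mon_data_size_warn (15 GiB by default).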

Best regards,
Frédéric.

[1]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html-single/hardware_guide/index
[2]
https://www.ibm.com/docs/en/storage-ceph/8.1.0?topic=hardware-minimum-recommendations-containerized-ceph
[3] https://docs.ceph.com/en/latest/start/hardware-recommendations/
[4]
https://docs.ceph.com/en/latest/rados/operations/health-checks/#mon-disk-big

--
Frédéric Nass
Ceph Ambassador France | Senior Ceph Engineer @ CLYSO
Check our online Config Diff Tool, it's great!
https://analyzer.clyso.com/#/analyzer/config-diff
https://clyso.com | [email protected]


On Sat, Nov 15, 2025 at 22:54, Anthony D'Atri <[email protected]> wrote:

> That's a function of how big the cluster is, and how much churn there is.
> For production-size clusters I would allot at least 50 GB, but it sounds
> like you don't have one.  I suspect that OSD and mon RocksDBs will burn an
> SD card in short order. YMMV.  1.7 GB seems iffy, with all the other stuff
> that a system might write, including temporary usage during DB compactions.
>
> > On Nov 15, 2025, at 4:10 PM, filip Mutterer <[email protected]> wrote:
> >
> > Hi!
> > It's about my lab setup. As I hadn't given any thought to the space
> > requirements for the root file system, I want to know if there is a rule
> > of thumb for how much space a ceph mon might need. Right now I only have
> > the "whoami" container running in rook-ceph, so it consumes nearly no
> > space in the OSD. But how far can I go with only 1.7 GB left on my
> > Raspberry Pi's SD card?
> >
> > Greetings
> > filip
> >
