On Fri, Oct 7, 2022 at 19:50, Frank Schilder <fr...@dtu.dk> wrote:
> For the interested future reader, we have subdivided 400G high-performance 
> SSDs into 4x100G OSDs for our FS meta data pool. The increased concurrency 
> improves performance a lot. But yes, we are on the edge. OMAP+META is almost 
> 50%.

Please be careful with that. In the past, I had to help a customer who
ran out of disk space on small SSD partitions. This happened because
the MONs keep a history of all OSD and PG maps back to at least the
last clean state. So, during a prolonged semi-outage (while the
cluster is not healthy), these maps slowly accumulate and eat disk
space, and the problematic part is that this growth is replicated to
the OSDs.
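One way to keep an eye on this: `ceph report` exposes the first and
last committed osdmap epochs, and the gap between them is roughly how
many historical maps the MONs (and OSDs) are still holding. A minimal
sketch below; the epoch numbers in the example are made up, and in
practice you would feed it the parsed JSON output of `ceph report`.

```python
import json

def retained_osdmaps(report: dict) -> int:
    # `ceph report` (JSON) includes the first and last committed
    # osdmap epochs; while the cluster is unhealthy, the MONs cannot
    # trim old maps, so this gap keeps growing.
    return report["osdmap_last_committed"] - report["osdmap_first_committed"] + 1

# Made-up epochs for illustration; on a real cluster use something like
# json.loads(subprocess.check_output(["ceph", "report"])):
sample = {"osdmap_first_committed": 3100, "osdmap_last_committed": 3850}
print(retained_osdmaps(sample))  # 751
```

If that number keeps climbing into the tens of thousands during an
outage, small OSD partitions are at real risk of filling up.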


-- 
Alexander E. Patrakov
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io