I have certainly seen cases where the OMAPs have not stayed within the
RocksDB/WAL NVMe space and have been going down to disk.
This was on a large cluster with a lot of objects, but the disks that were
being used for the non-EC pool were seeing a lot more actual disk activity
than the other disks.
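If anyone wants to confirm whether DB/OMAP data really has spilled onto the slow device, a couple of starting points (the OSD id below is only a placeholder, and depending on the release spillover may also surface as a BLUEFS_SPILLOVER health warning):

    # Cluster-wide: spillover-affected OSDs can show up in the health detail
    ceph health detail

    # Per OSD, on the host running it: in the bluefs counters, a non-zero
    # slow_used_bytes means BlueFS has placed DB data on the slow/main device
    ceph daemon osd.12 perf dump bluefs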
We are running Ceph Octopus with a total disk size of 136 TB, configured
with two replicas. Currently our usage is 57 TB and the available size is 5.3
TB. An incident occurred yesterday in which around 3 TB of data was deleted
automatically. Upon analysis, we couldn't find the reason for this.
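In case it helps the investigation, some generic places to look when usage drops unexpectedly (the log path below is the default and may differ in containerized deployments):

    # Per-pool usage and object counts, to narrow down which pool shrank
    ceph df detail
    rados df

    # On a monitor host, the audit log records administrative commands
    # (pool deletions, snapshot removals, ...) with timestamps
    grep -i 'pool' /var/log/ceph/ceph.audit.log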
I resolved this problem. The issue stemmed from scrubbing taking too long to
complete across 270 OSDs, so the scrub backlog kept growing over time.
Changing osd_scrub_min_interval and osd_scrub_max_interval to 7 days and 14
days, respectively, resolved my problem.
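For reference, those intervals are given in seconds when set through the config database; a minimal sketch, assuming min = 7 days and max = 14 days:

    # 7 days and 14 days expressed in seconds
    ceph config set osd osd_scrub_min_interval 604800
    ceph config set osd osd_scrub_max_interval 1209600

    # Read the values back to confirm they took effect
    ceph config get osd osd_scrub_min_interval
    ceph config get osd osd_scrub_max_interval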
Hello,
First of all, thanks for reading my message. I set up a Ceph version 18.2.2
cluster with 4 nodes; everything went fine for a while, but after copying some
files, the storage showed a warning status and the following message:
"HEALTH_WARN: 1 MDSs are read only mds.PVE-CZ235007SH(mds.0):
The OP's numbers suggest, IIRC, something like 120 GB for WAL+DB, though
depending on the workload, spillover could of course still be a thing.
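If useful, the DB volume actually provisioned for an OSD can be read back from its metadata and compared against that estimate (the OSD id is a placeholder; exact field names vary a little between releases):

    # bluefs_db_size and related fields appear when a dedicated DB device is used
    ceph osd metadata 12 | grep -i bluefs_db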
>
> I have certainly seen cases where the OMAPs have not stayed within the
> RocksDB/WAL NVMe space and have been going down to disk.
>
> This was on a large cluster with a lot of objects [...]
Hi Nicolas,
This is a known issue and Venky is working on it; please see
https://tracker.ceph.com/issues/63259.
Thanks
- Xiubo
On 6/3/24 20:04, nbarb...@deltaonline.net wrote:
Hello,
First of all, thanks for reading my message. I set up a Ceph version 18.2.2 cluster with
4 nodes, everything went fine for a while [...]