[ceph-users] Re: tuning for backup target cluster

2024-06-03 Thread Darren Soothill
I have certainly seen cases where the OMAPs have not stayed within the RocksDB/WAL NVMe space and have been going down to disk. This was on a large cluster with a lot of objects, but the disks that were being used for the non-EC pool were seeing a lot more actual disk activity than the other d…
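
For readers hitting the same symptom: one way to check whether DB/OMAP data has spilled off the NVMe onto the slow device is via the health warning and the bluefs perf counters. A minimal sketch only; osd.0 is a placeholder ID and the counter names assume a reasonably recent BlueStore release.

    # Spillover is surfaced cluster-wide as a BLUEFS_SPILLOVER health warning
    ceph health detail | grep -i spillover

    # Per OSD: a non-zero slow_used_bytes in the bluefs section means RocksDB
    # data has overflowed from the DB/WAL device onto the slow (HDD) device
    ceph daemon osd.0 perf dump bluefs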

[ceph-users] Ceph data got missed

2024-06-03 Thread prabu . jawahar
We are using Ceph Octopus with a total disk size of 136 TB, configured with two replicas. Currently our usage is 57 TB and the available size is 5.3 TB. An incident occurred yesterday where around 3 TB of data was deleted automatically. Upon analysis, we couldn't find the reason for th…

[ceph-users] Re: About placement group scrubbing state

2024-06-03 Thread tranphong079
I resolved this problem; the issue stemmed from scrubbing taking too long to complete across 270 OSDs, so the scrub backlog kept growing over time. Changing osd_scrub_min_interval and osd_scrub_max_interval to 7 days and 14 days, respectively, resolved my problem.
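
For anyone wanting the concrete commands, those intervals are set in seconds through the config database; a minimal sketch assuming the 7-day/14-day values above, with osd.0 as a placeholder for verification:

    # 7 days = 604800 s, 14 days = 1209600 s
    ceph config set osd osd_scrub_min_interval 604800
    ceph config set osd osd_scrub_max_interval 1209600

    # Verify the value picked up by one OSD
    ceph config get osd.0 osd_scrub_min_interval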

[ceph-users] Help needed please ! Filesystem became read-only !

2024-06-03 Thread nbarbier
Hello, First of all, thanks for reading my message. I set up a Ceph 18.2.2 cluster with 4 nodes; everything went fine for a while, but after copying some files the storage showed a warning status and the following message: "HEALTH_WARN: 1 MDSs are read only mds.PVE-CZ235007SH(mds.0):…
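
A few standard commands help narrow down why an MDS has gone read-only; this is a sketch only, with the MDS name taken from the health message above:

    ceph health detail                        # full text of the read-only warning
    ceph fs status                            # state of each MDS rank
    ceph tell mds.PVE-CZ235007SH damage ls    # any recorded metadata damage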

[ceph-users] Re: tuning for backup target cluster

2024-06-03 Thread Anthony D'Atri
The OP's numbers suggest, IIRC, something like 120 GB for WAL+DB, though depending on workload spillover could of course still be a thing. > > I have certainly seen cases where the OMAPs have not stayed within the > RocksDB/WAL NVMe space and have been going down to disk. > > This was on a large clust…
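
A quick way to see whether per-OSD metadata is staying within a ~120 GB DB partition is the META and OMAP columns of ceph osd df; a sketch, assuming a Nautilus-or-later release where those columns are present:

    # META = BlueFS/RocksDB space used per OSD, OMAP = omap data within it;
    # compare these against the size of the NVMe DB/WAL partition
    ceph osd df
    ceph osd df tree    # the same figures grouped by host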

[ceph-users] Re: Help needed please ! Filesystem became read-only !

2024-06-03 Thread Xiubo Li
Hi Nicolas, This is a known issue and Venky is working on it; please see https://tracker.ceph.com/issues/63259. Thanks - Xiubo. On 6/3/24 20:04, nbarb...@deltaonline.net wrote: Hello, First of all, thanks for reading my message. I set up a Ceph 18.2.2 cluster with 4 nodes, everythin…