Hi,
just for the archives:
On Tue, 5 Mar 2024, Anthony D'Atri wrote:
* Try applying the settings to global so that mons/mgrs get them.
Setting osd_deep_scrub_interval at global instead of at osd immediately turns
health back to OK and removes the false warning about PGs not scrubbed in time.
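In command form that amounts to something like the following; the interval
value is only an example (two weeks, in seconds), keep whatever you already use:

  # move the setting from the osd section to global
  ceph config rm osd osd_deep_scrub_interval
  ceph config set global osd_deep_scrub_interval 1209600
  # health should go back to HEALTH_OK almost immediately
  ceph health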
HTH,
Hi,
On Wed, 8 Nov 2023, Sascha Lucas wrote:
On Tue, 7 Nov 2023, Harry G Coin wrote:
"/usr/lib/python3.6/site-packages/ceph_volume/util/device.py", line 482, in
is_partition
/usr/bin/docker: stderr return self.blkid_api['TYPE'] == 'part'
/usr/bin/
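For the archives, my guess (an assumption, not verified) is that the lookup
blows up for devices where blkid finds no metadata at all, so ceph-volume's
cached blkid output is empty. A quick way to check a suspect device
(/dev/sdX is a placeholder):

  # low-level probe, roughly the same data ceph-volume consults
  blkid -p /dev/sdX
  # empty output and a non-zero exit status mean blkid sees no TYPE here
  echo $?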
Hi,
On Tue, 7 Nov 2023, Harry G Coin wrote:
These errors repeat for every host, but only after upgrading from the previous
Quincy release to 17.2.7. As a result, the cluster always shows a warning and
never reports healthy.
I'm hitting this error, too.
"/usr/lib/python3.6/site-packages/ceph_volume/util/device.py",
Hi Venky,
On Wed, 14 Dec 2022, Venky Shankar wrote:
On Tue, Dec 13, 2022 at 6:43 PM Sascha Lucas wrote:
Just an update: "scrub / recursive,repair" does not uncover additional
errors. But it also does not fix the single dirfrag error.
File system scrub does not clear entries from the damage list.
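Spelled out as commands, that shorthand is roughly the following (disklib is
the filesystem name appearing elsewhere in this thread; substitute your own):

  ceph tell mds.disklib:0 scrub start / recursive,repair
  # poll until the scrub has finished
  ceph tell mds.disklib:0 scrub status
  # damage entries are not cleared by a repair; once the repair is
  # trusted they have to be removed explicitly with 'damage rm <id>'
  ceph tell mds.disklib:0 damage ls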
Hi William,
On Mon, 12 Dec 2022, William Edwards wrote:
On 12 Dec 2022, at 22:47, Sascha Lucas wrote the
following:
Ceph "servers" like MONs, OSDs, MDSs etc. are all
17.2.5/cephadm/podman. The filesystem kernel clients are co-located on
the same hosts running th
Hi,
On Mon, 12 Dec 2022, Sascha Lucas wrote:
On Mon, 12 Dec 2022, Gregory Farnum wrote:
Yes, we’d very much like to understand this. What versions of the server
and kernel client are you using? What platform stack — I see it looks like
you are using CephFS through the volumes interface? The
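For anyone collecting what Greg asks for, something along these lines covers
the basics (run the second command on each client host):

  # versions reported by all running mon/mgr/osd/mds daemons
  ceph versions
  # kernel version of a kernel-mount client
  uname -r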
Hi Greg,
On Mon, 12 Dec 2022, Gregory Farnum wrote:
On Mon, Dec 12, 2022 at 12:10 PM Sascha Lucas wrote:
A follow-up to [2] also mentioned having random meta-data corruption: "We
have 4 clusters (all running same version) and have experienced meta-data
corruption on the majority of them".
Hi Dhairya,
On Mon, 12 Dec 2022, Dhairya Parmar wrote:
You might want to look at [1] for this; I also found a relevant thread [2]
that could be helpful.
Thanks a lot. I already found [1,2], too. But I did not consider it,
because I felt I was not having a "disaster"? Nothing seems broken nor crashed.
Hi,
without any outage/disaster, CephFS (17.2.5/cephadm) reports damaged
metadata:
[root@ceph106 ~]# zcat /var/log/ceph/3cacfa58-55cf-11ed-abaf-5cba2c03dec0/ceph-mds.disklib.ceph106.kbzjbg.log-20221211.gz
2022-12-10T10:12:35.161+ 7fa46779d700 1 mds.disklib.ceph106.kbzjbg Updating MDS map
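For completeness: the damage behind that warning can be listed directly from
the MDS; disklib is the filesystem name visible in the log line above:

  # names the MDS reporting MDS_DAMAGE
  ceph health detail
  # JSON list of damage entries, with fields like damage_type, ino and path
  ceph tell mds.disklib:0 damage ls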