Re: [ceph-users] MDSs report damaged metadata

2019-08-22 Thread Robert LeBlanc
We just had metadata damage show up on our Jewel cluster. I tried a few things like renaming directories and scanning, but the damage would just show up again in less than 24 hours. I finally just copied the directories with the damage to a tmp location on CephFS, then swapped it with the damaged o…
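A minimal sketch of that copy-and-swap workaround, with /cephfs/data/broken and /cephfs/tmp as hypothetical paths (not from the original message):

# cp -a /cephfs/data/broken /cephfs/tmp/broken.copy    # copy the data out through the client mount
# mv /cephfs/data/broken /cephfs/tmp/broken.damaged    # move the damaged directory aside
# mv /cephfs/tmp/broken.copy /cephfs/data/broken       # swap the clean copy into place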

Re: [ceph-users] MDSs report damaged metadata

2019-08-19 Thread Lars Täuber
Hi there! Does anyone else have an idea what I could do to get rid of this error? BTW: it is the third time that pg 20.0 has gone inconsistent. This is a PG from the metadata pool (cephfs). Could this be related somehow?
# ceph health detail
HEALTH_ERR 1 MDSs report damaged metadata; 1 scrub err…
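For the inconsistent PG itself (separate from the MDS damage), the standard commands to inspect and then repair it would be something like:

# rados list-inconsistent-obj 20.0 --format=json-pretty    # show which object copies disagree
# ceph pg repair 20.0                                      # ask the primary OSD to repair the PG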

Re: [ceph-users] MDSs report damaged metadata - "return_code": -116

2019-08-19 Thread Lars Täuber
Hi Paul, thanks for the hint. I did a recursive scrub from "/". The log says some inodes with bad backtraces were repaired, but the error remains. Could this have something to do with a deleted file? Or a file within a snapshot? The path reported by
# ceph tell mds.mds3 damage ls
2019-08-19 1…
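If the backtraces really were repaired, one possibility (an assumption, not confirmed in the thread) is that the damage table still holds stale entries; they can be listed and, once verified as fixed, removed by id (the id below is made up):

# ceph tell mds.mds3 damage ls          # list damage entries with their ids, types, and paths
# ceph tell mds.mds3 damage rm 123456   # drop a single entry after confirming it is stale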

Re: [ceph-users] MDSs report damaged metadata - "return_code": -116

2019-08-19 Thread Paul Emmerich
Hi, that error just says that the path is wrong. I unfortunately don't know the correct way to instruct it to scrub a stray path off the top of my head; you can always run a recursive scrub on / to go over everything, though. Paul
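On Nautilus, the recursive scrub Paul suggests is spelled roughly like this (mds.mds3 taken from the thread; scrub status may not exist on older releases):

# ceph tell mds.mds3 scrub start / recursive repair    # walk the whole tree, repairing bad backtraces
# ceph tell mds.mds3 scrub status                      # check whether the scrub is still running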

Re: [ceph-users] MDSs report damaged metadata - "return_code": -116

2019-08-19 Thread Lars Täuber
Hi all! Where can I look up what the error number means? Or did I do something wrong on my command line? Thanks in advance, Lars
Fri, 16 Aug 2019 13:31:38 +0200 Lars Täuber ==> Paul Emmerich:
> Hi Paul,
> thank you for your help. But I get the following error:
> # ceph tell mds.mds3 scrub…
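The return code is a negated Linux errno, so one way to decode it (a shell one-liner, assuming python3 is installed) is:

# python3 -c 'import errno, os; print(errno.errorcode[116], os.strerror(116))'
ESTALE Stale file handle

ESTALE ("Stale file handle") fits Paul's reading that the path simply doesn't resolve.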

Re: [ceph-users] MDSs report damaged metadata

2019-08-16 Thread Lars Täuber
Hi Paul, thank you for your help. But I get the following error:
# ceph tell mds.mds3 scrub start "~mds0/stray7/15161f7/dovecot.index.backup" repair
2019-08-16 13:29:40.208 7f7e927fc700  0 client.881878 ms_handle_reset on v2:192.168.16.23:6800/176704036
2019-08-16 13:29:40.240 7f7e937fe700…
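As an alternative to naming a single stray entry, the Ceph docs describe scrubbing the MDS's internal directory via the ~mdsdir target; whether this works on 14.2.2 is an assumption worth checking:

# ceph tell mds.mds3 scrub start ~mdsdir recursive repair    # scrub internal metadata, including the stray directories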

Re: [ceph-users] MDSs report damaged metadata

2019-08-16 Thread Paul Emmerich
Hi, damage_type backtrace is rather harmless and can indeed be repaired with the repair command, but it's called scrub_path. Also, you need to pass the name, not the rank, of the MDS as the id. It should be:
# (on the server where the MDS is actually running)
# ceph daemon mds.mds3 scrub_path .…
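Written out in full (the command above is truncated in the archive), a recursive repair through the admin socket would look something like this, assuming a scrub of the whole tree is wanted:

# ceph daemon mds.mds3 scrub_path / recursive repair    # must run on the host where mds.mds3 is active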

[ceph-users] MDSs report damaged metadata

2019-08-15 Thread Lars Täuber
Hi all! The MDS of our Ceph cluster produces a HEALTH_ERR state. It is Nautilus 14.2.2 on Debian Buster, installed from the repo made by croit.io, with OSDs on BlueStore. The symptom:
# ceph -s
  cluster:
    health: HEALTH_ERR
            1 MDSs report damaged metadata
  services:
    mon: 3 d…
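The usual first steps for narrowing such a HEALTH_ERR down are to expand the health message and check the filesystem state:

# ceph health detail    # expands HEALTH_ERR into the specific damage report
# ceph fs status        # shows ranks and which MDS daemon is active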