Hi,

did you only run the recover_dentries command or did you follow the entire procedure from your first message?

If the cluster reports a healthy status, I assume that all is good.

Zitat von b...@nocloud.ch:

I think I was lucky...

```sh
[root@ceph1 ~]# cephfs-journal-tool --rank=cephfs:0 event recover_dentries summary
Events by type:
  OPEN: 34407
  PURGED: 2
  SEGMENT: 125
  SESSION: 15
  SUBTREEMAP: 9
  UPDATE: 75836
Errors: 0
```

Do I interpret this correctly, that `Errors: 0` means all journal events could be recovered? Based on that output I didn't dig deeper into `cephfs-data-scan` or an MDS map reset.
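In case it helps anyone else: before deciding against `cephfs-data-scan`, the journal can also be sanity-checked with the same tool (using the same rank `cephfs:0` as above):

```sh
# Inspect the journal for rank 0; "Overall journal integrity: OK"
# suggests no further low-level recovery is needed.
cephfs-journal-tool --rank=cephfs:0 journal inspect
```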

After running the above procedure I continued with the following steps:

Mark the rank as repaired:

```sh
ceph mds repaired cephfs:0
```

Allow clients to reconnect:

```sh
ceph config rm mds mds_deny_all_reconnect
ceph fs set cephfs refuse_client_session false
```

Start an MDS scrub:

```sh
ceph tell mds.cephfs:0 scrub start / recursive
```
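The scrub runs asynchronously, so its progress can be checked with (same MDS rank as above):

```sh
# Show the status of the currently running scrub
ceph tell mds.cephfs:0 scrub status
```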

After that I got an error:

```
[ERR] MDS_DAMAGE: 1 MDSs report damaged metadata
    mds.cephfs.ceph1.yzqmuo(mds.0): Metadata damage detected
```
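Before re-running the scrub with repair, the damage table can be listed to see what exactly was flagged (again for rank `cephfs:0`):

```sh
# List the MDS damage table entries (dentry/backtrace/dir_frag damage etc.)
ceph tell mds.cephfs:0 damage ls
```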

Repair metadata:

```sh
ceph tell mds.cephfs:0 scrub start / recursive,repair,force
```

Now everything seems fine.
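For completeness, this is roughly how I verified the final state; the health check and filesystem status should show no remaining MDS_DAMAGE warning:

```sh
# Overall cluster health and CephFS status
ceph -s
ceph fs status cephfs
```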
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io