Hi Cephers,

Recently, a split-brain occurred in my cluster. Although it has recovered and is now healthy, whenever I attempt to delete a certain empty directory in CephFS, the MDS logs:

[ERR] dir 0x100215b86b3.011* object missing on disk; some files may be lost ('[path to dir]')

After this, the cluster enters an error state with the message:

[ERR] overall HEALTH_ERR 1 MDSs report damaged metadata

I have already executed:

$ ceph tell mds.cephfs:0 scrub start [path to dir] recursive,repair,force

In response, the scrub reported that several inodes, including the inode of the directory in question, were repaired, and the cluster returned to a healthy state:

[ERR] scrub: inode wrongly marked free: 0x100161edd3b
[ERR] inode table repaired for inode: 0x100161edd3b
[ERR] scrub: inode wrongly marked free: 0x1001c9b8246
[ERR] inode table repaired for inode: 0x1001c9b8246
.....
[ERR] scrub: inode wrongly marked free: 0x100017dc9a3
[ERR] inode table repaired for inode: 0x100017dc9a3
[ERR] scrub: inode wrongly marked free: 0x100113ef73a
[ERR] inode table repaired for inode: 0x100113ef73a

The part that reports the directory inode being repaired:

[WRN] bad backtrace on inode 0x100215b86b3([path to dir]), rewriting it
[ERR] scrub: inode wrongly marked free: 0x100215b86b3
[ERR] inode table repaired for inode: 0x100215b86b3
[INF] Scrub repaired inode 0x100215b86b3
[INF] Cluster is now healthy
[INF] overall HEALTH_OK

However, if I try to access the same directory again, the MDS logs the same error:

[ERR] dir 0x100215b86b3.011* object missing on disk; Some files may be lost ('[path to dir]')

Since the directory is empty, I don't mind losing it.
How do I remove this damage record from the MDS? Or what would be the recommended course of action?
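For what it's worth, if simply discarding the record of the damaged dirfrag is acceptable, would something along these lines be the right approach? (This is only my guess based on the MDS damage-table commands; the damage ID below is a placeholder that I would take from the "damage ls" output.)

```shell
# List the entries in the MDS damage table for rank 0 of filesystem "cephfs"
ceph tell mds.cephfs:0 damage ls

# Remove a specific damage-table entry by its ID
# (<damage_id> is a placeholder from the "damage ls" output above)
ceph tell mds.cephfs:0 damage rm <damage_id>
```

I'm unsure whether removing the damage entry is safe here, or whether the missing dirfrag object needs to be dealt with first, so any guidance would be appreciated.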

Ceph version: ceph version 17.2.8 (f817ceb7f187defb1d021d6328fa833eb8e943b3) quincy (stable)

Thanks in advance,
Leandro Ferrari
_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]
