Hi,

On 6/30/25 at 15:34, Bailey Allison wrote:
> I'm pretty sure the reason is the damaged MDS daemon. If you are able to clear that up, it should allow the filesystem to come back up. I saw something like this a few months ago. We were able to just mark the MDS as "repaired" and haven't seen any issue since; however, I would discourage doing that without further investigation into the source of the damage.

The command "ceph tell mds.storage_cluster:0 damage ls" does not work, as there is currently no active MDS at rank 0 for this filesystem.
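Since "damage ls" needs an active MDS to answer, the MONs have to be queried instead. A sketch of read-only commands that should still work in this state (the filesystem name "storage_cluster" is taken from this thread):

```shell
# Overall health, including the MDS_DAMAGE check and which rank it names
ceph health detail

# Current state of the filesystem and its ranks/standbys
ceph fs status storage_cluster

# The FSMap also records damaged ranks persistently
ceph fs dump
```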

So I do not know why the MDS is damaged. The cluster also does not report damaged metadata, only the damaged MDS.

Restarting the daemon does not change the situation: after a restart the MDS is told by the MONs to become a standby. AFAIK an MDS does not keep any persistent state in its installation directory; its state lives in the metadata pool.
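Because the MDS state is in RADOS rather than on the daemon's host, one place to look for the cause is the rank's journal in the metadata pool. A hedged sketch using cephfs-journal-tool (read-only inspection only; recovery subcommands should not be run without a backup):

```shell
# Check the journal of rank 0 for corruption or gaps.
# Works against the metadata pool, so no active MDS is required.
cephfs-journal-tool --rank=storage_cluster:0 journal inspect

# Show journal header details (expire/trim/write positions)
cephfs-journal-tool --rank=storage_cluster:0 header get
```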

By marking the MDS as repaired, you mean the command "ceph mds repaired storage_cluster:0", right?
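Assuming that is the command meant, a minimal sketch of the step and how to watch whether a standby then takes over the rank:

```shell
# Clear the "damaged" flag on rank 0 of the filesystem;
# a standby MDS should then attempt to take the rank.
ceph mds repaired storage_cluster:0

# Watch the rank come back (or fail again, pointing at the root cause)
ceph fs status storage_cluster
ceph -w
```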

Regards
--
Robert Sander
Linux Consultant

Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin

https://www.heinlein-support.de

Tel: +49 30 405051 - 0
Fax: +49 30 405051 - 19

Amtsgericht Berlin-Charlottenburg - HRB 220009 B
Geschäftsführer: Peer Heinlein - Sitz: Berlin
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
