Again, thank you very much for your help!

The container is not there anymore, but I discovered that the "old" mon
data still exists. I have the same situation for two mons I removed at
the same time:

$ monmaptool --print monmap1
monmaptool: monmap file monmap1
epoch 29
fsid 6d0d4ed4-0052-4eb9-9d9d-e6872ba7ee96
last_changed 2025-04-10T14:16:21.203171+0200
created 2021-02-26T14:02:29.522695+0100
min_mon_release 19 (squid)
election_strategy: 1
0: [v2:10.127.239.2:3300/0,v1:10.127.239.2:6789/0] mon.ceph2-02
1: [v2:10.127.239.61:3300/0,v1:10.127.239.61:6789/0] mon.rgw2-04
2: [v2:10.127.239.63:3300/0,v1:10.127.239.63:6789/0] mon.rgw2-06
3: [v2:10.127.239.62:3300/0,v1:10.127.239.62:6789/0] mon.rgw2-05

$ monmaptool --print monmap2
monmaptool: monmap file monmap2
epoch 30
fsid 6d0d4ed4-0052-4eb9-9d9d-e6872ba7ee96
last_changed 2025-04-10T14:16:43.216713+0200
created 2021-02-26T14:02:29.522695+0100
min_mon_release 19 (unknown)
election_strategy: 1
0: [v2:10.127.239.61:3300/0,v1:10.127.239.61:6789/0] mon.rgw2-04
1: [v2:10.127.239.63:3300/0,v1:10.127.239.63:6789/0] mon.rgw2-06
2: [v2:10.127.239.62:3300/0,v1:10.127.239.62:6789/0] mon.rgw2-05

Would it be feasible to move the mon data from node1 (whose monmap still
lists node2 as a mon) to node2, or would that just create an even bigger mess?
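In case it helps to discuss: my rough understanding is that such a move would look something like the following. This is an untested sketch — the fsid, mon names, and paths are placeholders, not taken from my cluster:

```shell
# Untested sketch; <fsid>, node1/node2 and paths are placeholders.
# 1. Stop the mon daemons on both nodes first, e.g.:
#    systemctl stop ceph-<fsid>@mon.node1
# 2. Copy the mon store from node1 to node2:
rsync -a /var/lib/ceph/<fsid>/mon.node1/ node2:/var/lib/ceph/<fsid>/mon.node2/
# 3. On node2, inject a monmap that still lists node2 (e.g. monmap1 above):
ceph-mon -i node2 --inject-monmap monmap1 \
    --mon-data /var/lib/ceph/<fsid>/mon.node2
```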


On 2025-04-10 19:57, Eugen Block wrote:
It can work, but it might be necessary to modify the monmap first,
since the mon is complaining that it has been removed from it. Are you
familiar with monmaptool
(https://docs.ceph.com/en/latest/man/8/monmaptool/)?

The procedure is similar to changing a monitor's IP address the "messy
way"
(https://docs.ceph.com/en/latest/rados/operations/add-or-rm-mons/#changing-a-monitor-s-ip-address-advanced-method).
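In rough outline, that procedure boils down to something like this (the mon name "a" and the address are examples from the docs, not from your cluster — check the linked page for the exact steps and flags):

```shell
# Example outline of the "messy way"; name "a" and address are placeholders.
ceph mon getmap -o /tmp/monmap        # extract the current monmap
monmaptool --print /tmp/monmap        # inspect it
monmaptool --rm a /tmp/monmap         # remove the stale entry
monmaptool --add a 10.0.0.1:6789 /tmp/monmap   # re-add with the new address
# Then, with the mon daemon stopped:
ceph-mon -i a --inject-monmap /tmp/monmap
```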


I also wrote a blog post on how to do it with cephadm:
https://heiterbiswolkig.blogs.nde.ag/2020/12/18/cephadm-changing-a-monitors-ip-address/


But before changing anything, I'd first inspect the current status. You
can get the current monmap from within the mon container (is it still
there?):

cephadm shell --name mon.<mon>
ceph-monstore-tool /var/lib/ceph/mon/<your_mon> get monmap -- --out monmap
monmaptool --print monmap

You can paste the output here, if you want.
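If the extracted map turns out to be missing the mon you want to revive, the fix would roughly be to add it back and inject the map (untested sketch — the mon name and address below are placeholders, adjust to your cluster):

```shell
# Sketch only; mon name and address are placeholders.
monmaptool --add ceph2-01 10.0.0.1:6789 monmap   # re-add with correct address
monmaptool --print monmap                        # verify the entry
# Then, with the mon daemon stopped:
ceph-mon -i ceph2-01 --inject-monmap monmap
```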

Zitat von Jonas Schwab <jonas.sch...@uni-wuerzburg.de>:

I realized I have access to the data directory of a monitor I removed
just before the oopsie happened. Can I launch a ceph-mon from that? If I
just try to launch ceph-mon, it commits suicide:

2025-04-10T19:32:32.174+0200 7fec628c5e00 -1 mon.mon.ceph2-01@-1(???)
e29 not in monmap and have been in a quorum before; must have been
removed
2025-04-10T19:32:32.174+0200 7fec628c5e00 -1 mon.mon.ceph2-01@-1(???)
e29 commit suicide!
2025-04-10T19:32:32.174+0200 7fec628c5e00 -1 failed to initialize

On 2025-04-10 16:01, Jonas Schwab wrote:
Hello everyone,

I believe I accidentally nuked all monitors of my cluster (please don't
ask how). Is there a way to recover from this disaster? I have a cephadm
setup.

I am very grateful for all help!

Best regards,
Jonas Schwab
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io

--
Jonas Schwab

Research Data Management, Cluster of Excellence ct.qmat
https://data.ctqmat.de | datamanagement.ct.q...@listserv.dfn.de
Email: jonas.sch...@uni-wuerzburg.de
Tel: +49 931 31-84460


