Hello,
The MDS process crashed suddenly. After I tried to restart it, it failed to replay
its journal and kept restarting continually.
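For a journal that fails to replay, the Ceph disaster-recovery docs describe an inspect/export/recover/reset sequence with cephfs-journal-tool. Below is a minimal dry-run sketch of that sequence: it only prints the commands rather than executing them, because "journal reset" is destructive and you should export a backup first. The filesystem name "cephfs" and rank 0 are assumptions; substitute your own.

```shell
#!/bin/sh
# Dry-run sketch of the CephFS journal-recovery sequence from the Ceph
# disaster-recovery documentation. Nothing is executed; each command is
# printed so it can be reviewed before running against a real cluster.
RANK="cephfs:0"   # assumption: fs name "cephfs", MDS rank 0

for cmd in \
  "cephfs-journal-tool --rank=$RANK journal inspect" \
  "cephfs-journal-tool --rank=$RANK journal export backup.bin" \
  "cephfs-journal-tool --rank=$RANK event recover_dentries summary" \
  "cephfs-journal-tool --rank=$RANK journal reset"
do
  # Print instead of execute: journal reset discards unreplayed events.
  echo "$cmd"
done
```

Note the ordering: always "journal export" a backup before "recover_dentries" or "reset", since reset throws away any events that could not be replayed.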
Just to summarize, here is what happened:
1/ The cluster is up and running with 3 nodes (mon and mds on the same nodes)
and 3 OSDs.
2/ After a few days, 2 (standby
Hello,
Same issue with another cluster.
Here is the coredump tag 41659448-bc1b-4f8a-b563-d1599e84c0ab
Thanks,
Carl
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
After trying to restart the active MDS, it also failed. The cluster state is
now:
# ceph status
cluster:
id: dd024fe1-4996-4fed-ba57-03090e53724d
health: HEALTH_WARN
1 filesystem is degraded
insufficient standby MDS daemons available
29 daemons have recently crashed
services:
m
Hi,
I recently made a fresh install of Ceph Octopus 15.2.3.
After a few days, the 2 standby MDS daemons suddenly crashed with a
segmentation fault.
I tried to restart them, but they do not start.
Here is the error:
-20> 2020-07-17T13:50:27.888+ 7fc8c6c51700 10 monclient: _renew_subs
-19> 2020