Good evening!

We have run into the following problem.
We have a Ceph cluster running 16.2.10.
The cluster was operating normally on Friday. We shut the cluster down as follows:
- Disconnected all clients
- Executed these commands:
ceph osd set noout
ceph osd set nobackfill
ceph osd set norecover
ceph osd set norebalance
ceph osd set nodown
ceph osd set pause
Then we powered off the cluster and performed the planned server maintenance.
Afterwards we powered the cluster back on. It came up and found all the nodes, and this is where the problem began: after all OSDs came up and all PGs became available, CephFS refused to start.
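On power-up we cleared the flags again, roughly in reverse order (listed here from memory, in case the order matters):
ceph osd unset pause
ceph osd unset nodown
ceph osd unset norebalance
ceph osd unset norecover
ceph osd unset nobackfill
ceph osd unset noout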
The MDS daemons are now stuck in the replay state and never transition to active.
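For reference, the replay state above is what we see with commands like:
ceph fs status
ceph health detail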
Initially one of them was in the replay (laggy) state, so we ran the command:
ceph config set mds mds_wipe_sessions true
After that the MDS moved to the plain replay state, a third MDS started in standby, and the MDS no longer crashes with an error.
But CephFS is still unavailable.
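Once an MDS goes active we intend to revert that setting, presumably with:
ceph config rm mds mds_wipe_sessions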
What else can we do?
The cluster is very large, with almost 200 million files.
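Would it be reasonable to inspect the MDS journal next, e.g. with something like the following (where <fs_name> is our filesystem name):
cephfs-journal-tool --rank=<fs_name>:0 journal inspect
or should we first raise MDS debug logging (ceph config set mds debug_mds 10) and watch whether replay is actually making progress?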


Best regards


A.Tsivinsky

e-mail: alexey.tsivin...@baikalelectronics.com
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
