Hi,
We have upgraded one Ceph cluster from 17.2.7 to 18.2.0. Since then we have
been having CephFS issues.
For example, this morning:
"""
[root@naret-monitor01 ~]# ceph -s
  cluster:
    id:     63334166-d991-11eb-99de-40a6b72108d0
    health: HEALTH_WARN
            1 filesystem is degraded
"""
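In case it is useful, the commands below show more detail on which filesystem
and which MDS ranks are involved (these are standard Ceph CLI commands,
nothing specific to our cluster):

"""
# Detailed health message for the degraded filesystem
ceph health detail

# Per-filesystem view: MDS ranks, their states, and available standbys
ceph fs status

# Compact one-line summary of MDS states
ceph mds stat
"""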
We have been using a CephFS pool to store machine data; the data is not
overly critical at this time.
It has grown to around 8 TB, and we started to see kernel panics on the hosts
that had the mounts in place.
Now, when we try to start the MDSs, they cycle through the Active, Replay,
and ClientReplay states.
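A rough sketch of what can be checked while the MDSs cycle (the filesystem
name is a placeholder and the debug levels are only examples, not a
recommendation):

"""
# Watch the rank states change (replay -> clientreplay -> active)
ceph fs status <fsname>

# Raise MDS debug logging to see where replay is getting stuck
ceph config set mds debug_mds 10
ceph config set mds debug_ms 1

# List any metadata damage recorded against rank 0
ceph tell mds.<fsname>:0 damage ls
"""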