I hit a very similar issue with my Ceph cluster that I had assumed was caused
by faulty RAM, but the timing also lines up with my upgrade from 17.2.5 to
17.2.6. I took the same recovery steps, stopping at the online deep scrub
(sketched below), and everything appears to be working fine now.
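For reference, the online scrub step was roughly the following (assuming "deep scrub" here means the CephFS forward scrub; the filesystem name "cephfs" and rank 0 are placeholders, substitute your own):

  # kick off a recursive forward scrub with repair from the filesystem root
  ceph tell mds.cephfs:0 scrub start / recursive,repair

  # poll until the scrub reports it has finished
  ceph tell mds.cephfs:0 scrub status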
Hello,
I have a Ceph deployment using CephFS. Recently the MDS failed and will not
start: attempting to start the MDS for this filesystem results in a nearly
immediate segfault. Logs are below.
cephfs-journal-tool shows "Overall journal integrity: OK":
root@proxmox-2:/var/log/ceph# cephfs-journal-too