Hi
We have a lot of OSDs flapping during recovery, and eventually they don't
come back up until kicked with "ceph orch daemon restart osd.x".
ceph version 16.2.7 (dd0603118f56ab514f133c8d2e3adfc983942503) pacific
(stable)
6 hosts connected by 2 x 10 GbE. Most data is in an EC 2+2 rbd pool.
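Not part of the original message, but a common first-aid step while diagnosing OSDs that flap under recovery load is to stop the monitors from marking them down/out and to throttle recovery. This is a sketch, not a fix: the flags and config options are standard Ceph, but the specific values are assumptions and should be reverted once the cluster is stable.

```shell
# Temporarily prevent mons from marking flapping OSDs down/out
# while you investigate (remember to unset these afterwards).
ceph osd set nodown
ceph osd set noout

# Throttle recovery/backfill so OSDs stop missing heartbeats
# under load (values here are conservative assumptions).
ceph config set osd osd_max_backfills 1
ceph config set osd osd_recovery_max_active 1

# When the cluster is healthy again:
ceph osd unset nodown
ceph osd unset noout
```

Note that nodown masks real failures while set, so it should only be used as a short diagnostic window, not left in place.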
Hi, if someone knows how to help, please: I have an HDD pool in my cluster,
and after rebooting one server my OSDs started to crash.
This pool is a backup pool with OSD as the failure domain and a size of 2.
Since the reboot the OSDs keep crashing, and the situation is only
getting worse.
I have 30 or so OSDs, on a cluster with 240, that just keep crashing. Below is
the last part of one of the log files showing the crash; can anyone please help
me read it to figure out what is going on and how to correct it? When I start
the OSDs they generally seem to work for 5-30 minutes, and
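Not in the original message, but when OSDs crash repeatedly the cluster's crash module usually has the backtraces collected already, which is often easier than digging through log files by hand. A sketch, assuming the crash module is enabled (it is by default on recent releases) and a cephadm deployment; `<crash-id>` and `osd.N` are placeholders:

```shell
# List recent daemon crashes recorded by the cluster
ceph crash ls

# Show the full metadata and backtrace for one crash
# (<crash-id> comes from the ls output above)
ceph crash info <crash-id>

# Tail the log of a specific OSD on the host running it
# (osd.N is a placeholder for the crashing OSD)
cephadm logs --name osd.N
```

The backtrace from `ceph crash info` is usually the part worth posting to the list, since it names the failing assert or function.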