Hi Ceph Users!
I've got a Ceph cluster here: 6 nodes, 12 OSDs on HDD and SSD disks. All
OSD journals are on SSDs; 25 various HDDs in total.
We had several HDD failures in the past, but every time it was the HDD
itself that failed, never anything journal related. After replacing the
HDD and running the recovery procedures, all was fine.
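For reference, the usual replacement flow from that era looks roughly like
the sketch below (FileStore with ceph-disk; the osd ID and device paths are
placeholders, not the exact steps used on this cluster):

    # Let Ceph re-replicate the failed OSD's PGs, then retire the OSD
    ceph osd out <id>
    systemctl stop ceph-osd@<id>
    ceph osd crush remove osd.<id>
    ceph auth del osd.<id>
    ceph osd rm <id>
    # Prepare the replacement HDD, with its journal on an SSD partition
    ceph-disk prepare /dev/<new-hdd> /dev/<ssd-journal-partition>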
Recently I lost 5 out of 12 OSD journals (two SSDs failed at the same time).
The pools run size=2, min_size=1. I know, it should rather be 3/2; I plan
to switch to that ASAP.
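For completeness, the 2/1 -> 3/2 change is just two commands per pool (the
pool name is a placeholder; note that raising size kicks off backfill while
the third copies are created):

    ceph osd pool set <pool> size 3
    ceph osd pool set <pool> min_size 2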
Ceph started to throw many failures, so I removed these two SSDs and
recreated the journal OSDs from scratch. In my case, all data on the main
OSDs…
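For context, recreating a journal on a new SSD usually goes roughly like the
sketch below (FileStore; the osd ID and partition UUID are placeholders).
The big caveat: --mkjournal is only safe after a clean --flush-journal, so
when the SSD simply dies, any writes still sitting in the journal are lost
and the backing filestore can be left inconsistent:

    ceph osd set noout                  # avoid rebalancing while OSDs are down
    systemctl stop ceph-osd@<id>
    # point the OSD at a journal partition on the new SSD
    ln -sf /dev/disk/by-partuuid/<new-uuid> /var/lib/ceph/osd/ceph-<id>/journal
    ceph-osd -i <id> --mkjournal        # safe only if the old journal was flushed
    systemctl start ceph-osd@<id>
    ceph osd unset noout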