ceph version 12.2.13 luminous (stable)

My whole Ceph cluster went into a kind of read-only state. Ceph status showed 
client reads at 0 op/s for the whole cluster, while a normal amount of writes 
was still going on.
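
For reference, the read rate I am talking about is the client io figure from 
"ceph -s"; per-pool client rates can be checked the same way (commands from 
memory, not a capture of my session):

# ceph -s
# ceph osd pool stats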

I checked the cluster health and it said:

# ceph health detail
HEALTH_WARN Reduced data availability: 1 pg inactive, 1 pg peering
PG_AVAILABILITY Reduced data availability: 1 pg inactive, 1 pg peering
pg 26.13b is stuck peering for 25523.506788, current state peering, last acting [2,0,33]
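
For anyone digging into a similar case before restarting anything, the peering 
state can be inspected with standard commands like these (I did not save the 
output from my incident):

# ceph pg dump_stuck inactive
# ceph pg map 26.13b
# ceph pg 26.13b query

The query output has a recovery_state section that usually shows what the PG 
is blocked on; in hindsight it would probably have pointed at osd.0 here.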

All OSDs showed as up and all monitors were healthy. All pools are 3/2 
(size/min_size) and space usage is around 30%.
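
Those statements are based on roughly the following checks (again from 
memory):

# ceph osd stat
# ceph osd tree
# ceph mon stat
# ceph osd pool ls detail
# ceph df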

I fixed this by first restarting osd.2 (nothing happened) and then restarting 
osd.0. After that everything went back to normal.
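
For completeness, on a systemd-based install the restarts boil down to the 
following (assuming the standard ceph-osd@<id> unit template):

# systemctl restart ceph-osd@2
# ceph -s
# systemctl restart ceph-osd@0

The "ceph -s" in between is just to check whether the PG has come out of 
peering before touching the next OSD.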

So what can cause "stuck peering", and how can I prevent this from happening 
again?