Hi list,
A few days ago one of my OSDs failed, so I dropped it out of the cluster, but the
cluster has been stuck in HEALTH_WARN ever since. After I took the failed OSD down,
the self-healing process started rebalancing data across the other OSDs.
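For clarity, is the sequence below the complete removal procedure I should have
followed? (This is only a rough sketch; osd.12 is a placeholder for the failed
OSD's id, not the real one.)

    # mark the OSD out so CRUSH stops mapping data to it
    ceph osd out osd.12
    # stop the daemon on its host (Hammer sysvinit on CentOS 7)
    service ceph stop osd.12
    # remove it from the CRUSH map, delete its key and its OSD entry
    ceph osd crush remove osd.12
    ceph auth del osd.12
    ceph osd rm osd.12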
My question is: the rebalancing never seems to finish, and at the end of the
“ceph -s” output I still see this line:
recovery io 1456 KB/s, 0 object/s
How can I get the cluster back to HEALTH_OK?
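In case it helps, these are the commands I was planning to run to see what is
still pending (just a sketch; I am not sure they are the right approach on Hammer):

    # show exactly which checks keep the cluster in HEALTH_WARN
    ceph health detail
    # list PGs stuck in unclean or inactive states
    ceph pg dump_stuck unclean
    ceph pg dump_stuck inactive
    # watch cluster status updates live
    ceph -w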
My cluster details are:
- 27 OSDs
- 3 MONs
- 2048 PGs
- Each OSD has 4 TB of space
- CentOS 7.2 with the 3.10 Linux kernel
- Ceph Hammer
Thank you,
Roozbeh