On 29.10.2014 18:29, Thomas Alrin wrote:
> Hi all,
> I'm new to Ceph. What is wrong with this cluster? How can I make the
> status change to HEALTH_OK? Please help.
With the current default pool size of 3 and the default CRUSH rule, you
need at least 3 OSDs on separate nodes for a new Ceph cluster to reach HEALTH_OK.
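If this is a small test setup with fewer than 3 nodes, a common workaround
(a sketch, assuming the default "rbd" pool exists and that reduced redundancy
is acceptable for testing) is to lower the replica count and, on a
single-node cluster, allow CRUSH to place replicas on different OSDs of the
same host:

    # show the current replica count of the pool
    $ ceph osd pool get rbd size

    # reduce it to 2 replicas and adjust min_size to match
    $ ceph osd pool set rbd size 2
    $ ceph osd pool set rbd min_size 1

For a new cluster the same defaults can be set in ceph.conf under [global]
before the OSDs are created:

    osd pool default size = 2
    osd crush chooseleaf type = 0    # 0 = osd, lets replicas share one host

Otherwise, adding OSDs on separate nodes until you have at least 3 will let
the PGs become active+clean.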
Hi all,
I'm new to Ceph. What is wrong with this cluster? How can I make the
status change to HEALTH_OK? Please help.
$ ceph status
cluster 62e2f40c-401b-4b3e-804a-cebbec1016c5
health HEALTH_WARN 104 pgs degraded; 88 pgs incomplete; 88 pgs stuck inactive; 192 pgs stuck unclean
monmap e1: