2014-04-28 17:17 GMT+02:00 Kurt Bauer <kurt.ba...@univie.ac.at>:

> What do you mean by "I see all OSDs down"?
I mean that my OSDs are detected as down:

$ sudo ceph osd tree
# id    weight  type name       up/down reweight
-1      12.74   root default
-2      3.64            host osd13
0       1.82                    osd.0   down    0
2       1.82                    osd.2   down    0
-3      5.46            host osd12
1       1.82                    osd.1   up      1
3       1.82                    osd.3   down    0
4       1.82                    osd.4   down    0
-4      3.64            host osd14
5       1.82                    osd.5   down    0
6       1.82                    osd.6   up      1

> What does a 'ceph osd stat' say?

osdmap e1640: 7 osds: 2 up, 2 in

> How can I detect what ceph is doing?
>
> 'ceph -w'

Ok, but there I can't see something like "recovering, 57% complete" or similar.

> What's the output of 'ceph -s'

$ sudo ceph -s
    cluster 6b9916f9-c209-4f53-98c6-581adcdf0955
     health HEALTH_WARN 3383 pgs degraded; 59223 pgs down; 12986 pgs incomplete; 81691 pgs peering; 25071 pgs stale; 95049 pgs stuck inactive; 25071 pgs stuck stale; 98432 pgs stuck unclean; 16 requests are blocked > 32 sec; recovery 1/189 objects degraded (0.529%)
     monmap e3: 3 mons at {osd12=192.168.0.112:6789/0,osd13=192.168.0.113:6789/0,osd14=192.168.0.114:6789/0}, election epoch 326, quorum 0,1,2 osd12,osd13,osd14
     osdmap e1640: 7 osds: 2 up, 2 in
      pgmap v1046855: 98432 pgs, 14 pools, 65979 bytes data, 63 objects
            969 MB used, 3721 GB / 3722 GB avail
            1/189 objects degraded (0.529%)
                  24 stale
               12396 peering
                 348 remapped
               44014 down+peering
                3214 active+degraded
                3949 stale+peering
               11613 stale+down+peering
                  24 stale+active+degraded
                 145 active+replay+degraded
                6123 remapped+peering
                3159 down+remapped+peering
                3962 incomplete
                 437 stale+down+remapped+peering
                9024 stale+incomplete

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
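As an aside: to quickly tally how many OSDs are up versus down without reading the whole tree, the osd lines of the output can be filtered with awk. A rough sketch, assuming the standard column layout above (fourth field is the up/down state on osd lines); the here-doc stands in for the live command and would be replaced by `sudo ceph osd tree`:

```shell
#!/bin/sh
# Count up vs. down OSDs from 'ceph osd tree' output.
# Sample data captured from the cluster above; in practice:
#   tree_output=$(sudo ceph osd tree)
tree_output=$(cat <<'EOF'
0 1.82 osd.0 down 0
2 1.82 osd.2 down 0
1 1.82 osd.1 up 1
3 1.82 osd.3 down 0
4 1.82 osd.4 down 0
5 1.82 osd.5 down 0
6 1.82 osd.6 up 1
EOF
)

# Match only lines naming an osd.N daemon; bucket them by state ($4).
echo "$tree_output" | awk '/osd\./ { count[$4]++ }
    END { print "up:", count["up"]+0, "down:", count["down"]+0 }'
# prints: up: 2 down: 5
```

This matches the `ceph osd stat` summary ("7 osds: 2 up"), which is the quicker check when the per-host breakdown is not needed.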