Just ran into this problem: a week ago I set up a Ceph cluster on 4
systems, with one admin node and 3 mon+osd nodes, then ran a few
casual IO tests. I returned to work after a few days out of town at
a conference, and now my Ceph cluster appears to have no OSDs!
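For context, the setup was nothing exotic: essentially the standard
ceph-deploy quick-start run from the admin node (rts24). The commands
below are approximate and from memory, and /dev/sdb is only a
placeholder for whatever data disk each node actually used:

ceph-deploy new rts21 rts22 rts23              # write ceph.conf and the initial monmap
ceph-deploy install rts21 rts22 rts23 rts24    # install the ceph packages on all four hosts
ceph-deploy mon create-initial                 # create the three monitors and gather keys
ceph-deploy osd create rts21:/dev/sdb rts22:/dev/sdb rts23:/dev/sdb   # prepare + activate one OSD per node
ceph-deploy admin rts21 rts22 rts23 rts24      # push ceph.conf and the admin keyring

This is what the cluster reports now: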

root@rts24:/var/log/ceph# ceph status
    cluster 284dbfe0-e612-4732-9d26-2c5909f0fbd1
     health HEALTH_ERR 119 pgs degraded; 192 pgs stale; 192 pgs stuck stale; 119 pgs stuck unclean; recovery 2/4 objects degraded (50.000%); no osds
     monmap e1: 3 mons at {rts21=172.29.0.21:6789/0,rts22=172.29.0.22:6789/0,rts23=172.29.0.23:6789/0}, election epoch 32, quorum 0,1,2 rts21,rts22,rts23
     osdmap e33: 0 osds: 0 up, 0 in
      pgmap v2774: 192 pgs, 3 pools, 135 bytes data, 2 objects
            0 kB used, 0 kB / 0 kB avail
            2/4 objects degraded (50.000%)
                  73 stale+active+clean
                 119 stale+active+degraded


I would appreciate it if anyone could explain how something like this
can happen, or point me to where I should look for evidence that might
help me understand what went wrong. The log files in /var/log/ceph/
show no activity except for the monitors' Paxos chatter.
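In case it helps to be concrete, these are the only places I know to
check, and I would welcome pointers to anything else worth looking at
(default paths assumed):

ceph health detail                 # per-PG detail behind the HEALTH_ERR above
ceph osd tree                      # CRUSH tree: does it still list any osd.N entries at all?
ceph osd dump                      # current osdmap (epoch e33 above, 0 osds)
ls /var/lib/ceph/osd/              # on rts21-23: are the OSD data directories still present/mounted?
grep -i osd /var/log/ceph/*.log    # on rts21-23: any ceph-osd startup or crash messages?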

Thx,


*Dan Koren* | Director of Software
*DATERA* | 650.210.7910 | @dateranews
d...@datera.io