On CentOS 6.4, Ceph 0.61.7.
I had a Ceph cluster with 9 OSDs. Today I destroyed all of the OSDs and
recreated 6 new ones.
Now all of the old PGs are stale.
[root@ceph0 ceph]# ceph -s
   health HEALTH_WARN 192 pgs stale; 192 pgs stuck inactive; 192 pgs stuck 
stale; 192 pgs stuck unclean
   monmap e1: 3 mons at 
{ceph0=172.18.11.60:6789/0,ceph1=172.18.11.61:6789/0,ceph2=172.18.11.62:6789/0},
 election epoch 24, quorum 0,1,2 ceph0,ceph1,ceph2
   osdmap e166: 6 osds: 6 up, 6 in
    pgmap v837: 192 pgs: 192 stale; 9526 bytes data, 221 MB used, 5586 GB / 
5586 GB avail
   mdsmap e114: 0/0/1 up
[root@ceph0 ~]# ceph health detail
...
pg 2.3 is stuck stale for 10249.230667, current state stale, last acting [5]
...
[root@ceph0 ~]# ceph pg 2.3 query
i don't have pgid 2.3
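
For reference, this is how I am listing everything that is stuck (I am
assuming ceph pg dump_stuck is the right query for this; I have not
pasted its output here):

[root@ceph0 ~]# ceph pg dump_stuck stale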
How can I get all of the PGs back, or have them recreated?
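
In case it is the right direction, this is roughly what I was planning
to try. It is only a sketch that assumes force_create_pg is the correct
way to recreate PGs whose data is already gone; please tell me if it
would make things worse:

  # recreate every PG reported as stuck stale; the old OSDs (and their
  # data) are gone anyway, so these should come back as empty PGs
  for pgid in $(ceph health detail | awk '/is stuck stale/ {print $2}'); do
      ceph pg force_create_pg "$pgid"
  done

Is that safe to run, or is there a better way?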


Thanks!
