Hello,

I'm still evaluating Ceph, currently with a test cluster running 0.67 (Dumpling),
set up with ceph-deploy from Git.
I recreated a bunch of OSDs to give them a different journal;
there was already some test data on those OSDs.
I have since recreated the missing PGs with "ceph pg force_create_pg", but the cluster still reports:
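For reference, this is roughly the loop I used to force-create them (a sketch; it assumes the pgid, e.g. "2.3d", is the first whitespace-separated field on each data line of "ceph pg dump_stuck"):

```shell
# List the PGs reported as stuck inactive and force-create each one.
# Assumption: the pgid is the first field on each data line (lines
# starting with a digit) of "ceph pg dump_stuck inactive" output.
ceph pg dump_stuck inactive | awk '/^[0-9]/ {print $1}' | while read -r pgid; do
    ceph pg force_create_pg "$pgid"
done
```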


HEALTH_WARN 192 pgs stuck inactive; 192 pgs stuck unclean; 5 requests are blocked > 32 sec; mds cluster is degraded; 1 mons down, quorum 0,1,2 vvx-ceph-m-01,vvx-ceph-m-02,vvx-ceph-m-03

Any idea how to fix the cluster, besides completely rebuilding it from scratch? What if something like this happens in a production environment?

The PGs in "ceph pg dump" have all been stuck in "creating" for some time now:

2.3d 0 0 0 0 0 0 0 creating 2013-08-16 13:43:08.186537 0'0 0:0 [] [] 0'0 0.0000000'0 0.000000
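To watch whether anything changes, I've just been counting the "creating" entries instead of reading the whole dump (a quick sketch; it assumes the state string "creating" appears only in the state column):

```shell
# Count how many PGs are still in the "creating" state
ceph pg dump 2>/dev/null | grep -c creating
```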

Is there a way to just dump the data that was on the discarded OSDs?

Kind Regards,
Georg
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
