Hello!

I am new to Ceph, so please take that into account.

I'm experimenting with a 3-mon + 2-OSD setup and got into a situation
where I recreated both of the OSDs.

My pools:
ceph> osd lspools
 0 data,1 metadata,

These are just the defaults. I deleted the rbd pool, but I can't delete
the other two: it says they are used by CephFS (no MDS is running, so
why are they in use?)
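
For reference, this is roughly what I ran (I may not have the error
wording verbatim):

ceph> osd pool delete data data --yes-i-really-really-mean-it
Error EBUSY: pool 'data' is in use by CephFS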

Cluster status:

ceph> status
    cluster 8c3d2e5d-fce9-425b-8028-d2105a9cac3f
     health HEALTH_WARN 128 pgs degraded; 128 pgs stale; 128 pgs stuck stale;
            128 pgs stuck unclean; 2/2 in osds are down
     monmap e2: 3 mons at {mon0=10.1.0.7:6789/0,mon1=10.1.0.8:6789/0,mon2=10.1.0.11:6789/0},
            election epoch 52, quorum 0,1,2 mon0,mon1,mon2
     osdmap e70: 2 osds: 0 up, 2 in
      pgmap v129: 128 pgs, 3 pools, 0 bytes data, 0 objects
            2784 kB used, 36804 MB / 40956 MB avail
            128 stale+active+degraded


Effectively, there is no data left for those PGs; I formatted the disks
myself. How can I tell Ceph that there is no way to get that data back,
and that it should forget about those PGs and move on?
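
From digging through the list archives, it sounds like the approach is
to mark the dead OSDs as lost and then force-recreate the empty PGs.
A sketch of what I think that would look like (untested; the PG id is
just an example taken from 'ceph health detail'):

ceph osd lost 0 --yes-i-really-mean-it
ceph osd lost 1 --yes-i-really-mean-it
# repeat for each stale PG listed by 'ceph health detail', e.g. pg 0.0:
ceph pg force_create_pg 0.0

Is that the right track, or is there a cleaner way?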

Also, how can I delete the 'data' and 'metadata' pools, or are they
needed for some internal machinery? (I won't use the MDS.)
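
(From what I can tell, the MDS map created at cluster setup pins these
two pools, which would explain why the delete is refused even with no
MDS running. I've seen 'ceph mds newfs <metadata-pool-id> <data-pool-id>
--yes-i-really-mean-it' suggested as a way to repoint the filesystem at
throwaway pools so the original ones can be deleted, but I haven't tried
it and would appreciate confirmation.)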

Thank you.

Regards,
Maxym.