Hello all,

I have a problem with my Ceph cluster.

Some info to begin with:

- Ceph Bobtail : ceph version 0.56.4 (63b0f854d1cef490624de5d6cf9039735c7de5ca)
- CentOS 6.4

Here is the output of ceph osd tree:
http://pastebin.com/C5TM7Jww

I already have several pools:

[root@ceph-admin ~]# ceph osd lspools
0 data,1 metadata,2 rbd,4 vm,6 test

Now, I want to create a new pool:

[root@ceph-admin ~]# ceph osd pool create os-images 200 200

The pool is created successfully, but the cluster goes into HEALTH_WARN
with 2 active+degraded PGs.
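
For reference, I believe the replica count and PG count of the new pool can be checked with something like this (the exact syntax on Bobtail is a guess on my part):

ceph osd pool get os-images size
ceph osd pool get os-images pg_num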

When I do a
ceph pg dump | grep degraded

the result is

13.af 0 0 0 0 0 0 0 active+degraded 2013-05-15 14:45:17.500330 0'0 5156'9 [3] [3] 0'0 0.000000 0'0 0.000000
13.1a 0 0 0 0 0 0 0 active+degraded 2013-05-15 14:45:18.013525 0'0 5156'9 [3] [3] 0'0 0.000000 0'0 0.000000
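
I notice that both PGs have only osd.3 in their up and acting sets ([3] [3]), so I guess CRUSH is mapping just a single replica for them. If it helps, I assume I could query one of the PGs and dump the CRUSH map with something like the following (these commands are my best guess, not something I have already run):

ceph pg 13.af query
ceph osd getcrushmap -o /tmp/crushmap
crushtool -d /tmp/crushmap -o /tmp/crushmap.txt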


Do you have any ideas?
Thanks a lot.

Alexis