Hello Ceph Experts :),

I am using Ceph (version 0.56.6) on SUSE Linux.
I created a simple cluster with one monitor server and two OSDs.
The conf file is attached.

When I start my cluster and run "ceph -s", I see the following message:

$ ceph -s
health HEALTH_WARN 202 pgs stuck inactive; 202 pgs stuck unclean
   monmap e1: 1 mons at {slesceph1=160.110.73.200:6789/0}, election epoch 1, quorum 0 slesceph1
   osdmap e56: 2 osds: 2 up, 2 in
    pgmap v100: 202 pgs: 202 creating; 0 bytes data, 10305 MB used, 71574 MB / 81880 MB avail
   mdsmap e1: 0/0/1 up
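
In case more detail helps, I can also run the following to list the individual stuck PGs; I have not pasted that output here to keep the mail short, and as far as I know the command is available in bobtail:

$ ceph health detail      # lists each stuck inactive/unclean PG individually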


Basically there is some problem with my placement groups: they are forever stuck
in the "creating" state, and there is no OSD associated with them (despite having
two OSDs that are up and in). When I run "ceph pg stat" I see the following:

$ ceph pg stat
v100: 202 pgs: 202 creating; 0 bytes data, 10305 MB used, 71574 MB / 81880 MB avail
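
For reference, these are the OSD-level checks I was planning to run next, in case their output helps (commands as I understand the bobtail-era CLI; output omitted here):

$ ceph osd tree      # show the CRUSH hierarchy and each OSD's weight
$ ceph osd dump      # confirm both OSDs are registered, up/in, and have addresses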


If I query any individual PG, I see it isn't mapped to any OSD:
$ ceph pg 0.d query
pgid currently maps to no osd
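
My understanding is that "currently maps to no osd" means CRUSH is not selecting any OSDs for these pools, so the next thing I intend to look at is the decompiled CRUSH map. A rough sketch of the steps (crush.bin and crush.txt are just file names I picked):

$ ceph osd getcrushmap -o crush.bin      # pull the binary CRUSH map from the monitors
$ crushtool -d crush.bin -o crush.txt    # decompile it to readable text
# then check in crush.txt that osd.0 and osd.1 sit under the default root with
# non-zero weight, and that the ruleset used by the pools can actually reach them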

I tried restarting the OSDs and tuning my configuration, to no avail.
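
For completeness, this is roughly how I restarted the daemons, using the stock sysvinit script (I am not sure whether the service name differs on SLES, so treat this as an approximation):

$ /etc/init.d/ceph restart osd.0    # restart the first OSD daemon
$ /etc/init.d/ceph restart osd.1    # restart the second OSD daemon
$ ceph -s                           # re-check cluster health afterwards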

Any suggestions?

Yogesh Devi

Attachment: ceph.conf
Description: ceph.conf

