Re: [ceph-users] Placement groups forever in "creating" state and don't map to OSD

2014-08-04 Thread Yogesh_Devi
Hi Kapil, The crush map is below:

# begin crush map

# devices
device 0 osd.0
device 1 osd.1

# types
type 0 osd
type 1 host
type 2 rack
type 3 row
type 4 room
type 5 datacenter
type 6 root

# buckets
root default {
        id -1           # do not change unnecessarily
        # weight 1.000
        a
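The preview is cut off by the archive at this point. For reference, the buckets and rules sections of a stock bobtail-era two-OSD map typically continue along the lines below; the straw algorithm, weights, and item placement are assumptions, not taken from the original map:

root default {
        id -1           # do not change unnecessarily
        # weight 1.000
        alg straw
        hash 0          # rjenkins1
        item osd.0 weight 0.500
        item osd.1 weight 0.500
}

# rules
rule data {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type host
        step emit
}

One thing worth checking in a map shaped like this: if the OSDs sit directly under the root but the rule says "chooseleaf ... type host", CRUSH finds no host buckets to descend into and the PGs never map to any OSD.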

Re: [ceph-users] Placement groups forever in "creating" state and dont map to OSD

2014-08-04 Thread Yogesh_Devi
Dell - Internal Use - Confidential

Hi Kapil, thanks for responding :) My mon server and two OSDs are running on three separate servers, one per node. All are SLES 11 SP3. Below is the "ceph osd tree" output from my mon server box slesceph1:

# ceph osd tree
# id    weight  type name       up/down
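The tree itself is truncated in the archive. For comparison, on a healthy cluster of this shape the output looks roughly as follows; the host names and weights here are illustrative, not from the original post:

# id    weight  type name       up/down reweight
-1      2       root default
-2      1               host slesceph2
0       1                       osd.0   up      1
-3      1               host slesceph3
1       1                       osd.1   up      1

OSDs that show weight 0, or that appear outside the root's tree entirely, will never be chosen by CRUSH, which leaves their PGs stuck in "creating".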

Re: [ceph-users] Placement groups forever in "creating" state and don't map to OSD

2014-08-04 Thread Yogesh_Devi
Dell - Internal Use - Confidential

Matt, I am using SUSE Linux Enterprise Server 11 SP3 (SLES 11 SP3). I don't think I have enabled SELinux.

Yogesh Devi, Architect, Dell Cloud Clinical Archive, Dell
Land Phone +91 80 28413000 Extension - 2781
Hand Phone +91 99014 71082

From: Matt Harlum [mail
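Whether SELinux is actually active is easy to confirm from a shell. These are standard SELinux userland commands; on SLES the packages are often not installed at all, which itself means SELinux is not in play (AppArmor is the default MAC system there):

$ getenforce              # prints Enforcing, Permissive, or Disabled
$ sestatus                # fuller report, from the policycoreutils package
$ cat /selinux/enforce    # 1 = enforcing; on newer systems selinuxfs lives at /sys/fs/selinux instead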

Re: [ceph-users] Placement groups forever in "creating" state and don't map to OSD

2014-08-04 Thread Yogesh_Devi
Dell - Internal Use - Confidential

Matt, thanks for responding. As suggested, I tried to set replication to 2x using the commands you provided:

$ ceph osd pool set data size 2
$ ceph osd pool set data min_size 2
$ ceph osd pool set rbd size 2
$ ceph osd pool set rbd min_size 2
$ ceph osd pool set metadata si
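The last command is cut off by the archive; assuming it follows the same pattern for the metadata pool, the complete sequence plus a verification step would look like this (a sketch, not verbatim from the original mail):

$ ceph osd pool set metadata size 2
$ ceph osd pool set metadata min_size 2

# confirm the pools picked up the new replication settings
$ ceph osd dump | grep 'rep size'
$ ceph osd pool get data size

Note that min_size 2 on a size 2 pool means I/O blocks whenever either replica is unavailable; on a two-OSD cluster, min_size 1 is the more forgiving choice.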

[ceph-users] Placement groups forever in "creating" state and don't map to OSD

2014-08-01 Thread Yogesh_Devi
Dell - Internal Use - Confidential

Hello Ceph Experts :) , I am using Ceph (version 0.56.6) on SUSE Linux. I created a simple cluster with one monitor server and two OSDs. The conf file is attached. When I start my cluster and run "ceph -s", I see the following message:

$ ceph -s
   health HEALT
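The health line is truncated by the archive. To see which PGs are stuck and whether they map to any OSD at all, bobtail-era commands along these lines apply (a sketch; the PG id 0.1 is hypothetical):

$ ceph -s                        # overall health summary
$ ceph pg dump_stuck inactive    # PGs that never became active
$ ceph pg dump | grep creating   # creating PGs and their acting OSD sets
$ ceph pg map 0.1                # where CRUSH would place PG 0.1; an empty up/acting set means no OSD maps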