[ceph-users] PG distribution scattered

2013-09-19 Thread Niklas Goerke
Hi there, I'm currently evaluating Ceph and started filling my cluster for the first time. After filling it up to about 75%, it reported some OSDs as being "near-full". After some evaluation I found that the PGs are not distributed evenly over all the OSDs. My Setup: * Two Hosts with 45 Disks each …
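A quick way to quantify an imbalance like this is to count how many PGs map to each OSD. Below is a minimal Python sketch; it assumes the JSON layout that `ceph pg dump --format json` produced around this era, so the "pg_stats" / "acting" keys may sit elsewhere in other releases:

import json
import subprocess
from collections import Counter

# Count how many PGs each OSD carries, including replicas. The exact
# JSON layout of `ceph pg dump` varies between Ceph releases, so the
# "pg_stats" / "acting" keys below may need adjusting for your version.
dump = json.loads(
    subprocess.check_output(["ceph", "pg", "dump", "--format", "json"]))

per_osd = Counter()
for pg in dump["pg_stats"]:
    for osd in pg["acting"]:  # every OSD in the acting set, not just the primary
        per_osd[osd] += 1

for osd, count in sorted(per_osd.items(), key=lambda kv: kv[1]):
    print("osd.%d: %d PGs" % (osd, count))

Sorting by count makes the outliers at both ends obvious; the fullest OSDs are the ones that hit the near-full threshold first.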

Re: [ceph-users] PG distribution scattered

2013-09-27 Thread Niklas Goerke
Sorry for replying only now, I did not get to try it earlier… On Thu, 19 Sep 2013 08:43:11 -0500, Mark Nelson wrote: On 09/19/2013 08:36 AM, Niklas Goerke wrote: […] My Setup:
* Two Hosts with 45 Disks each --> 90 OSDs
* Only one newly created pool with 4500 PGs and a Replica Size of …
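For context, some spread is expected even from perfectly uniform pseudo-random placement. The sketch below is not CRUSH, just uniform random assignment as a stand-in, and the replica size of 2 is an assumption (the preview above is cut off); it shows roughly how widely per-OSD PG counts scatter for 4500 PGs on 90 OSDs:

import random
from collections import Counter

NUM_OSDS = 90
NUM_PGS = 4500
REPLICA_SIZE = 2  # assumption: the actual value is cut off above

# Uniform random placement as a rough stand-in for CRUSH: drop each
# PG replica into a random bucket and look at the resulting spread.
counts = Counter(random.randrange(NUM_OSDS)
                 for _ in range(NUM_PGS * REPLICA_SIZE))

mean = NUM_PGS * REPLICA_SIZE / float(NUM_OSDS)
print("expected PGs per OSD: %.0f" % mean)
print("min: %d, max: %d" % (min(counts.values()), max(counts.values())))

A typical run averages 100 PGs per OSD with outliers near 75 and 125, i.e. roughly +/-25%: at about 75% average fill, the fullest OSDs crossing the near-full threshold is exactly what this model predicts.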

[ceph-users] Not recovering completely on OSD failure

2013-11-08 Thread Niklas Goerke
Hi guys, this is probably a configuration error, but I just can't find it. The following reproducibly happens on my cluster [1]:
15:52:15 On Host1 one disk is removed on the RAID Controller (to Ceph it looks as if the disk died)
15:52:52 OSD reported missing (osd.47)
15:52:53 osdmap eXXX …
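One thing worth checking against a timeline like this: the monitors only mark a down OSD out, and thereby kick off full re-replication, after `mon osd down out interval` has elapsed (300 s by default). A minimal sketch of that expectation, assuming the default interval (the osdmap epoch above is elided in the preview, so only the timestamps are used):

from datetime import datetime, timedelta

# Timestamp from the report above; the date is taken from the thread
# since the preview only shows times of day.
osd_reported_down = datetime(2013, 11, 8, 15, 52, 52)  # osd.47 reported missing

# Assumption: default mon_osd_down_out_interval of 300 seconds. Data is
# only re-replicated to other OSDs once osd.47 is marked "out".
DOWN_OUT_INTERVAL = timedelta(seconds=300)
marked_out = osd_reported_down + DOWN_OUT_INTERVAL

print("osd.47 reported down: %s" % osd_reported_down.strftime("%H:%M:%S"))
print("expected marked out:  %s" % marked_out.strftime("%H:%M:%S"))  # 15:57:52

If the cluster still reports degraded PGs long after that point, the interesting question is whether CRUSH can actually find a valid new home for every replica on the remaining OSDs.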