Hi there
I'm currently evaluating Ceph and started filling my cluster for the
first time. After filling it to about 75%, it reported some OSDs as
being "near full".
After some evaluation I found that the PGs are not distributed evenly
over all the OSDs.
My Setup:
* Two Hosts with 45 Disks each --> 90 OSDs
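
A quick way to see how uneven the spread really is, is to count
acting-set membership per OSD from "ceph pg dump". Here is a minimal
sketch in Python; it assumes the JSON output carries a "pg_stats" list
whose entries have an "acting" array of OSD ids (the exact field names
and nesting can differ between releases):

#!/usr/bin/env python
# Count how many PG replicas each OSD holds, from "ceph pg dump".
# Assumes the JSON carries a "pg_stats" list whose entries have an
# "acting" array of OSD ids; newer releases nest this under "pg_map".
import json
import subprocess
from collections import Counter

raw = subprocess.check_output(["ceph", "pg", "dump", "--format", "json"])
data = json.loads(raw)
pg_stats = data.get("pg_map", data)["pg_stats"]

counts = Counter()
for pg in pg_stats:
    for osd in pg["acting"]:
        counts[osd] += 1

for osd in sorted(counts):
    print("osd.%d: %d PGs" % (osd, counts[osd]))
print("min %d / max %d PGs per OSD" % (min(counts.values()), max(counts.values())))

Running it prints one line per OSD plus the min/max, which makes the
imbalance immediately visible.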
Sorry for replying only now, I did not get to try it earlier…
On Thu, 19 Sep 2013 08:43:11 -0500, Mark Nelson wrote:
On 09/19/2013 08:36 AM, Niklas Goerke wrote:
[…]
My Setup:
* Two Hosts with 45 Disks each --> 90 OSDs
* Only one newly created pool with 4500 PGs and a Replica Size o
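
For a rough idea of what spread to expect: if CRUSH placement is
treated as approximately uniform-random (only an approximation, since
replicas of one PG have to land on different hosts), the per-OSD PG
count behaves roughly like a binomial distribution. A small
back-of-the-envelope calculation, with the replica size left as a
parameter because the value is cut off in the quote above:

# Back-of-the-envelope spread of PGs per OSD, treating placement as
# uniform-random: each of the pgs*size replica placements hits a given
# OSD with probability 1/osds, i.e. roughly binomial(pgs*size, 1/osds).
import math

pgs, osds = 4500, 90
for size in (2, 3):  # the actual replica size is truncated in the quote above
    n, p = pgs * size, 1.0 / osds
    mean = n * p
    stddev = math.sqrt(n * p * (1 - p))
    print("size=%d: ~%.0f PGs per OSD on average, stddev ~%.1f" % (size, mean, stddev))

With 4500 PGs over 90 OSDs this gives about 100 PGs per OSD at size 2
(or 150 at size 3) with a standard deviation of roughly 10, so a gap of
a few dozen PGs between the emptiest and fullest OSD is not unusual,
and that translates directly into uneven disk utilization.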
Hi guys
This is probably a configuration error, but I just can't find it.
The following reproducibly happens on my cluster [1].
15:52:15 On Host1, one disk is removed via the RAID controller (to
Ceph it looks as if the disk died)
15:52:52 OSD reported missing (osd.47)
15:52:53 osdmap eXXX
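
To see where the ~37 seconds between pulling the disk and the OSD
being reported missing are spent, a small poller like the one below
can timestamp every up/down transition in the osdmap. This is only a
sketch; it assumes "ceph osd dump --format json" returns an "osds"
list whose entries carry an "osd" id and an "up" flag:

#!/usr/bin/env python
# Poll the osdmap once per second and timestamp every up/down transition,
# to measure how long it takes Ceph to notice a dead disk.
# Assumes "ceph osd dump --format json" returns an "osds" list whose
# entries carry an "osd" id and an "up" flag.
import json
import subprocess
import time

def up_osds():
    raw = subprocess.check_output(["ceph", "osd", "dump", "--format", "json"])
    return {o["osd"] for o in json.loads(raw)["osds"] if o["up"]}

prev = up_osds()
while True:
    time.sleep(1)
    cur = up_osds()
    for osd in sorted(prev - cur):
        print("%s osd.%d marked down" % (time.strftime("%H:%M:%S"), osd))
    for osd in sorted(cur - prev):
        print("%s osd.%d marked up" % (time.strftime("%H:%M:%S"), osd))
    prev = cur

The delay until the "marked down" line is largely governed by the
heartbeat settings (osd_heartbeat_grace and the peer failure reports
the monitors require) before a new osdmap epoch is published.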