On 2/18/2015 3:24 PM, Florian Haas wrote:
On Wed, Feb 18, 2015 at 9:09 PM, Brian Rak <b...@gameservers.com> wrote:
What does your crushmap look like (ceph osd getcrushmap -o
/tmp/crushmap; crushtool -d /tmp/crushmap)? Does your placement logic
prevent Ceph from selecting an OSD for the third replica?
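For reference, that sequence boils down to something like the following (the /tmp paths are just examples):

    # dump the compiled CRUSH map from the cluster
    ceph osd getcrushmap -o /tmp/crushmap
    # decompile it into readable text (omit -o to print to stdout)
    crushtool -d /tmp/crushmap -o /tmp/crushmap.txt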

Cheers,
Florian

I have 5 hosts, and it's configured like this:
That's not the full crushmap, so I'm a bit reduced to guessing...
I wasn't sure the rest of it was useful. The full one can be found here: https://gist.githubusercontent.com/devicenull/db9a3fbaa0df2138071b/raw/4158a6205692eb5a2ba73831e7f51ececd8eb1a5/gistfile1.txt



root default {
         id -1           # do not change unnecessarily
         # weight 204.979
         alg straw
         hash 0  # rjenkins1
         item osd01 weight 12.670
         item osd02 weight 14.480
         item osd03 weight 14.480
         item osd04 weight 79.860
         item osd05 weight 83.490
}
Whence the large weight difference? Are osd04 and osd05 really that
much bigger in disk space?
Yes, osd04 and osd05 have 3-4x as many disks as osd01-osd03.
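Side note: with straw buckets, data lands roughly in proportion to those weights, so the two big hosts will end up carrying most of it. The hierarchy and effective weights are also easy to eyeball with:

    ceph osd tree    # shows the CRUSH hierarchy with per-host and per-OSD weights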

rule replicated_ruleset {
         ruleset 0
         type replicated
         min_size 1
         max_size 10
         step take default
         step chooseleaf firstn 0 type host
         step emit
}

This should not be preventing the assignment (AFAIK).  Currently the PG is
on osd01 and osd05.
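One way to double-check that the rule itself can satisfy three replicas is to run the compiled map through crushtool's test mode, roughly like this (rule 0 and the /tmp path assumed from the snippets above):

    # simulate placements for rule 0 with 3 replicas; --show-bad-mappings
    # prints only the inputs for which CRUSH could not find 3 distinct OSDs
    crushtool -i /tmp/crushmap --test --rule 0 --num-rep 3 --show-bad-mappings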
Just checking: are you sure you're not running short on space (close to 90%
utilization) on one of your OSD filesystems?

No, they're all under 10% used. The cluster as a whole only has about 6 TB used (out of 196 TB).
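For completeness, that is quick to confirm with something like the following (the data-dir path assumes the default layout):

    ceph df                      # cluster-wide and per-pool usage
    ceph osd df                  # per-OSD utilization, on releases that have it
    df -h /var/lib/ceph/osd/*    # run on each OSD host against the default data dirs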