I'm trying to set up an erasure coded pool with k=9 m=6 on 13 osd hosts, and 
to write a crush rule for it that balances the shards across hosts as much as 
possible. I understand that since 9+6=15 > 13, the rule will have to walk the 
tree twice in order to find enough osds. So what I'm trying to do is select 
~1 osd from each host on the first pass, and then select the remaining osds 
on the second pass to fill out the set, without reusing any osds from the 
first pass, and preferably balancing them between racks. 
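
Just to spell out the arithmetic I'm aiming for: with 15 shards spread over 
13 hosts, the most balanced placement possible for a single pg looks like 

        11 hosts x 1 osd  = 11 shards
         2 hosts x 2 osds =  4 shards
        -----------------------------
        13 hosts            15 shards

i.e. no host should ever hold more than 2 shards of any given pg. 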

For starters, I don't know if this is even possible or if it's the right 
approach to what I'm trying to do, but here's my attempt:

rule .us-phx.rgw.buckets.ec {
        ruleset 1
        type erasure
        min_size 3
        max_size 20
        step set_chooseleaf_tries 5
        step take default
        step chooseleaf indep 0 type host   # first pass: one osd per host
        step emit
        step take default
        step chooseleaf indep 0 type rack   # second pass: fill out the set, spread across racks
        step emit
}
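
For reference, this is roughly how I've been testing the rule (the file 
names are just whatever I have locally; rule 1 and the 1000 sample inputs 
match the numbers below):

crushtool -c crushmap.txt -o crushmap.bin
crushtool -i crushmap.bin --test --rule 1 --num-rep 15 \
        --min-x 0 --max-x 999 --show-mappings

and then scanning the output for any pg whose mapping lists the same osd 
twice. 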

This gets me pretty close: the first pass works great and the second pass 
balances nicely between racks, but in my testing roughly 6 out of 1000 pgs 
end up with a duplicate osd in their set, i.e. the second pass reuses an osd 
that the first pass already picked. I'm guessing I need to get this down to 
a single pass to make sure that can't happen, but I'm having a hard time 
sorting out how to meet the requirement of balancing among hosts *and* 
allowing more than one osd per host. 
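
The only single-pass pattern I'm aware of is the usual trick of choosing 
hosts first and then chooseleaf'ing osds within them, roughly like the 
sketch below (rule name and ruleset id are just placeholders). But as far 
as I understand it, that always picks exactly 8 hosts with 2 osds each 
rather than spreading over all 13 hosts, it ignores racks entirely, and I'm 
assuming the extra 16th slot just gets dropped, so it doesn't really give 
me the even spread I described above:

rule .us-phx.rgw.buckets.ec-onepass {
        ruleset 2
        type erasure
        min_size 3
        max_size 20
        step set_chooseleaf_tries 5
        step take default
        step choose indep 8 type host
        step chooseleaf indep 2 type osd
        step emit
}

Is there a better way to express "at most two osds per host, spread over as 
many hosts as possible" in a single pass? 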

Thanks, Aaron 
