Hi Matthew,

I would expect the osd_crush_location parameter to take effect at OSD 
activation time. Maybe ceph-ansible has info on that?
A workaround might be to "set noin", restart all the OSDs once ceph.conf 
includes the crush location, and enjoy the automatic CRUSH map update 
(provided you have osd crush update on start = true).
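
Roughly, as a sketch (please double-check the flag and unit names against 
your jewel packages before relying on it):

  ceph osd set noin
  # on each OSD host, once ceph.conf carries the crush location entries:
  systemctl restart ceph-osd.target
  # when the OSDs are back up and the CRUSH map looks right:
  ceph osd unset noin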

Cheers,
Maxime

On 12/04/17 18:46, "ceph-users on behalf of Matthew Vernon" 
<ceph-users-boun...@lists.ceph.com on behalf of m...@sanger.ac.uk> wrote:

    Hi,
    
    Our current (jewel) CRUSH map has rack / host / osd (and the default
    replication rule does step chooseleaf firstn 0 type rack). We're shortly
    going to be adding some new hosts in new racks, and I'm wondering what
    the least-painful way of getting the new osds associated with the
    correct (new) rack will be.
    
    We deploy with ceph-ansible, which can add bits of the form
    [osd.104]
    osd crush location = root=default rack=1 host=sto-1-1
    
    to ceph.conf, but I think this doesn't help for new osds, since
    ceph-disk will activate them before ceph.conf is fully assembled (and
    trying to arrange it otherwise would be a serious hassle).
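    
    (For completeness: I'm assuming we'd also want the update-on-start
    option set, roughly like this in ceph.conf -- the exact section is my
    guess rather than anything ceph-ansible writes for us:
    
    [osd]
    osd crush update on start = true
    
    so that each osd (re)applies its location when it boots.)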
    
    Would making a custom crush location hook be the way to go? Then it'd
    say rack=4 host=sto-4-x and new osds would end up allocated to rack 4?
    And presumably I'd need to have done ceph osd crush add-bucket rack4
    rack first?
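    
    Something like the below is what I had in mind -- untested, with the
    path and the hostname-parsing made up, and going from the docs'
    description of the hook interface (it's passed --cluster/--id/--type
    and should print a single line giving the location):
    
    #!/bin/sh
    # /usr/local/bin/crush-location-hook  (hypothetical path)
    # Invoked by the OSD as:  hook --cluster <name> --id <osd-id> --type osd
    # Must print one line, e.g. "root=default rack=4 host=sto-4-1".
    # Here the rack number is derived from our sto-<rack>-<n> hostnames.
    host="$(hostname -s)"
    rack="$(echo "$host" | cut -d- -f2)"
    echo "root=default rack=${rack} host=${host}"
    
    plus, in ceph.conf:
    
    [osd]
    osd crush location hook = /usr/local/bin/crush-location-hook
    
    and, if the bucket does need to exist first, presumably:
    
    ceph osd crush add-bucket rack4 rack
    ceph osd crush move rack4 root=default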
    
    I am planning on adding osds to the cluster one box at a time, rather
    than going with the add-everything-at-crush-weight-0 route; if nothing
    else it seems easier to automate. And I'd rather avoid having to edit
    the crush map directly...
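    
    (If we did go the weight-0 route, I assume it would look something like
    setting osd crush initial weight = 0 in ceph.conf, letting the osds come
    up, and then bringing each one up to its proper size-based weight by
    hand, e.g.
    
    ceph osd crush reweight osd.104 1.0
    
    -- but, as above, one box at a time looks simpler to automate.)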
    
    Any pointers welcomed :)
    
    Regards,
    
    Matthew
    
    
    -- 
     The Wellcome Trust Sanger Institute is operated by Genome Research 
     Limited, a charity registered in England with number 1021457 and a 
     company registered in England with number 2742969, whose registered 
     office is 215 Euston Road, London, NW1 2BE. 

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
