You can also just use a single command:

ceph osd crush add-bucket <hostname> host room=<the room>
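For example (the host and room names here are just placeholders):

  ceph osd crush add-bucket ceph-node04 host room=room1

This creates the empty host bucket directly under the right room; ceph osd tree should then show it in place before any OSDs exist.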

> On Dec 4, 2020, at 00:00, Francois Legrand <f...@lpnhe.in2p3.fr> wrote:
> 
> Thanks for your advice.
> 
> It was exactly what I needed.
> 
> Indeed, I did:
> 
> ceph osd crush add-bucket <hostname> host
> ceph osd crush move <hostname> room=<the room>
> 
> 
> But I also set the norecover, nobackfill, and norebalance flags :-)
> 
> It worked perfectly as expected.
> 
> F.
> 
>> On 03/12/2020 at 01:50, Reed Dier wrote:
>> Just to piggyback on this, the below are the correct answers.
>> 
>> However, here is how I do it, which is admittedly not the best way, but it is
>> the easy way:
>> I set the norecover and nobackfill flags.
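>> For reference, those flags are set with:
>> 
>>   ceph osd set norecover
>>   ceph osd set nobackfill
>> 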
>> I run my OSD creation script against the first disk on the new host to make
>> sure that everything is working correctly, and also so that I can then
>> manually move my new host bucket where I need it in the crush map with:
>>> ceph osd crush move {bucket-name} {bucket-type}={bucket-name}
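>> For example (using the same hypothetical names, host ceph-node04 and room room1):
>> 
>>   ceph osd crush move ceph-node04 room=room1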
>> 
>> Then I proceed with my script for the rest of the OSDs on that host and know 
>> that they will fall into the correct crush location.
>> And then of course I unset the norecover and nobackfill flags so that data
>> starts moving.
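>> For reference, that is:
>> 
>>   ceph osd unset norecover
>>   ceph osd unset nobackfill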
>> 
>> I only mention this because it ensures that you don't fat-finger the
>> hostname on manual bucket creation, or end up with hostname syntax that
>> doesn't match as expected, and it allows you to course-correct after a single
>> OSD is added, rather than after all N OSDs.
>> 
>> Hope that's also helpful.
>> 
>> Reed
>> 
>>>> On Dec 2, 2020, at 4:38 PM, Dan van der Ster <d...@vanderster.com> wrote:
>>> 
>>> Hi Francois!
>>> 
>>> If I've understood your question, I think you have two options.
>>> 
>>> 1. You should be able to create an empty host bucket, then move it into a
>>> room before creating any OSDs:
>>> 
>>>   ceph osd crush add-bucket <hostname> host
>>>   ceph osd crush mv <hostname> room=<the room>
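>>> For example, with hypothetical names (host ceph-node04, room room1):
>>> 
>>>   ceph osd crush add-bucket ceph-node04 host
>>>   ceph osd crush move ceph-node04 room=room1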
>>> 
>>> 2. Add a custom crush location to ceph.conf on the new server so that
>>> its OSDs are placed in the correct room/rack/host when they are first
>>> created, e.g.:
>>> 
>>> [osd]
>>> crush location = room=0513-S-0034 rack=SJ04 host=cephdata20b-b7e4a773b6
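>>> 
>>> Either way, you can verify the resulting placement with:
>>> 
>>>   ceph osd tree
>>> 
>>> which prints the crush hierarchy, so you can confirm the host appears under
>>> the intended room.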
>>> 
>>> Does that help?
>>> 
>>> Cheers, Dan
>>> 
>>> 
>>> 
>>> On Wed, Dec 2, 2020 at 11:29 PM Francois Legrand <f...@lpnhe.in2p3.fr> wrote:
>>>> 
>>>> Hello,
>>>> 
>>>> I have a Ceph Nautilus cluster. The crushmap is organized with 2 rooms,
>>>> servers in these rooms, and OSDs in these servers, and I have a crush rule
>>>> to replicate data over the servers in different rooms.
>>>> 
>>>> Now I want to add a new server in one of the rooms. I would like to specify
>>>> the room of this new server BEFORE creating OSDs in it, so that the data
>>>> added to those OSDs is directly in the right location. My problem is that
>>>> servers seem to appear in the crushmap only when they have OSDs... and when
>>>> you create the first OSD, the server is inserted in the crushmap under the
>>>> default bucket (so not in a room, and the first data stored on this OSD
>>>> will not be at the correct location). I could move it afterwards (if I do
>>>> it quickly, there will not be that much data to move), but I was wondering
>>>> if there is a way either to define the position of a server in the crushmap
>>>> hierarchy before creating OSDs, or to specify the room when creating the
>>>> first OSD?
>>>> 
>>>> F.
>> 
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
