That is the correct modification to change the failure domain from osd to host. 
You can make the change from osd to host in your crush map any time after you 
add the 2 new storage nodes (it is important to have at least as many hosts as 
your cluster's replica size before changing the crush map to host).

Since both actions will cause backfilling and data movement, doing them close 
together means you only really move the data once.  I would probably set 
nobackfill and norecover until all of the hosts are added and the crush map is 
updated.
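Roughly, the whole sequence would look something like this (just a sketch; the crushmap file names are placeholders):

    ceph osd set nobackfill
    ceph osd set norecover

    # add the new OSD nodes/disks here

    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt
    # edit crushmap.txt: step chooseleaf firstn 0 type host
    crushtool -c crushmap.txt -o crushmap.new
    ceph osd setcrushmap -i crushmap.new

    ceph osd unset nobackfill
    ceph osd unset norecover

That way backfill/recovery only runs once, after both changes are in place.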

________________________________

David Turner | Cloud Operations Engineer | StorageCraft Technology 
Corporation <https://storagecraft.com>
380 Data Drive Suite 300 | Draper | Utah | 84020
Office: 801.871.2760 | Mobile: 385.224.2943

________________________________
From: ceph-users [ceph-users-boun...@lists.ceph.com] on behalf of Mike 
Jacobacci [mi...@flowjo.com]
Sent: Wednesday, October 05, 2016 11:37 AM
To: ceph-us...@ceph.com
Subject: [ceph-users] Adding OSD Nodes and Changing Crushmap

Hi,

I just wanted to get a sanity check if possible. I apologize if my questions 
are stupid; I am still new to Ceph and feeling uneasy about adding new nodes.

 Right now we have one OSD node with 10 OSD disks (plus 2 disks for caching) 
and this week we are going to add two more nodes with the same hardware.

I want to change the replication failure domain from OSD to host. Do I just 
need to change the crushmap to the following?

OLD:
# rules
rule replicated_ruleset {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step choose firstn 0 type osd
        step emit
}

NEW:
# rules
rule replicated_ruleset {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type host
        step emit
}

My last question:  After adding the new nodes/disks to the cluster, I assume 
re-balancing will start as soon as they are added... Do I need to wait for the 
data to rebalance before changing the crushmap to replicate across hosts?
