Thanks, 

crushtool didn't help me much further unless I did something crazy, as you 
said. 
So I started by just creating a new, correct rule and changing the pools 
one by one to use the new rule. 
This seems to work fine and, as far as I can see, it hasn't impacted any user 
(much). 
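
For the archives, the sequence was roughly the following (the rule name is just 
an example, and <pool-name> stands for each of our pools): 

# new rule with host as the failure domain, hdd device class, default root 
ceph osd crush rule create-replicated replicated_rule_host default host hdd 

# switch one pool at a time to the new rule and let it settle 
ceph osd pool set <pool-name> crush_rule replicated_rule_host 

# keep an eye on the remapping 
ceph -s 
ceph osd pool get <pool-name> crush_rule 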



Maarten van Ingen 
| Systems Expert | Distributed Data Processing | SURFsara | Science Park 140 | 
1098 XG Amsterdam | 
| T +31 (0) 20 800 1300 | maarten.vanin...@surfsara.nl | https://surfsara.nl | 



We are ISO 27001 certified and meet the high requirements for information 
security. 


From: "Paul Emmerich" <paul.emmer...@croit.io> 
To: "Maarten van Ingen" <maarten.vanin...@surfsara.nl> 
Cc: "ceph-users" <ceph-users@ceph.io> 
Sent: Tuesday, 19 November, 2019 13:36:04 
Subject: Re: [ceph-users] How proceed to change a crush rule and remap pg's? 

I don't think that there's a feasible way to do this in a controlled manner. I 
would just change it and trust Ceph's remapping mechanism to work properly. 

You could use crushtool to calculate what the new mapping is and then do 
something crazy with upmaps (move them manually to the new locations one by one 
and then remove all upmaps and change the rule)... but that's quite annoying to 
do and probably doesn't really help. 
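
If you do want to preview the new mapping first, a rough sketch with crushtool 
(rule id taken from your dump, 3 replicas as you described, filenames are 
arbitrary): 

# grab and decompile the current crush map 
ceph osd getcrushmap -o crush.bin 
crushtool -d crush.bin -o crush.txt 

# edit crush.txt: change "step chooseleaf firstn 0 type osd" 
# to "step chooseleaf firstn 0 type host", then recompile 
crushtool -c crush.txt -o crush-new.bin 

# compare PG -> OSD mappings for rule 0 with 3 replicas 
crushtool -i crush.bin --test --rule 0 --num-rep 3 --show-mappings > mappings-old.txt 
crushtool -i crush-new.bin --test --rule 0 --num-rep 3 --show-mappings > mappings-new.txt 
diff mappings-old.txt mappings-new.txt 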

Paul 

-- 
Paul Emmerich 

Looking for help with your Ceph cluster? Contact us at https://croit.io 

croit GmbH 
Freseniusstr. 31h 
81247 München 
www.croit.io 
Tel: +49 89 1896585 90 


On Tue, Nov 19, 2019 at 11:11 AM Maarten van Ingen <maarten.vanin...@surfsara.nl> wrote: 



Hi, 

I have a small but impactful error in my crush rules. 
For unknown reasons the rule uses osd instead of host as the failure domain, so 
some nodes hold all three copies instead of the copies being spread over three 
different nodes. 
We noticed this when rebooting a node and a pg became stale. 

My crush rule: 
{ 
    "rule_id": 0, 
    "rule_name": "replicated_rule", 
    "ruleset": 0, 
    "type": 1, 
    "min_size": 1, 
    "max_size": 10, 
    "steps": [ 
        { 
            "op": "take", 
            "item": -2, 
            "item_name": "default~hdd" 
        }, 
        { 
            "op": "chooseleaf_firstn", 
            "num": 0, 
            "type": "osd" 
        }, 
        { 
            "op": "emit" 
        } 
    ] 
}, 


The type should of course be host. I want to correct this and move the pg's so 
that everything is placed as it should be. 
How can I best proceed in correcting this issue? I would like to throttle the 
remapping of the data so Ceph itself won't become unavailable while the data is 
redistributed. 
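
The kind of throttling I have in mind is along these lines (values are examples 
only, not recommendations): 

# hold off rebalancing while the rule is being changed 
ceph osd set norebalance 

# limit concurrent backfill/recovery per OSD 
ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1' 

# ... change the crush rule here ... 

# then let the remapping proceed at the throttled rate 
ceph osd unset norebalance 
ceph -s    # watch the misplaced objects drain 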

We are running Mimic (13.2.6); this environment was installed fresh as Mimic 
using ceph-ansible. 

Current ceph -s output: 



  cluster: 
    id:     <fsid> 
    health: HEALTH_OK 

  services: 
    mon: 3 daemons, quorum mon01,mon02,mon03 
    mgr: mon01(active), standbys: mon02, mon03 
    mds: cephfs-2/2/2 up {0=mon03=up:active,1=mon01=up:active}, 1 up:standby 
    osd: 502 osds: 502 up, 502 in 

  data: 
    pools:   18 pools, 8192 pgs 
    objects: 28.74 M objects, 100 TiB 
    usage:   331 TiB used, 2.3 PiB / 2.6 PiB avail 
    pgs:     8192 active+clean 




Cheers, 

Maarten van Ingen 
| Systems Expert | Distributed Data Processing | SURFsara | Science Park 140 | 
1098 XG Amsterdam | 
| T +31 (0) 20 800 1300 | maarten.vanin...@surfsara.nl | https://surfsara.nl | 



We are ISO 27001 certified and meet the high requirements for information 
security. 
_______________________________________________ 
ceph-users mailing list -- ceph-users@ceph.io 
To unsubscribe send an email to ceph-users-le...@ceph.io 



