Thanks Poul!

For reference, for anyone finding this thread: this procedure does indeed work
as intended:

ceph osd getcrushmap -o crush.map
crushtool -d crush.map -o crush.txt
# edit crush rule (see example below): "step take ServerRoom class hdd" --> "step take ServerRoom class ssd"
crushtool -o crush-new.map -c crush.txt
ceph osd set norebalance
ceph osd set nobackfill
ceph osd setcrushmap -i crush-new.map
# wait for peering to finish, you will see 100% objects misplaced but all PGs active+...
ceph osd unset norebalance
ceph osd unset nobackfill
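
For illustration, the edit in crush.txt is a one-word change in the rule used by
the affected pool(s). The sketch below is only an example: the rule name, id and
the remaining steps are placeholders and will differ in your map; the only thing
that changes is the device class on the "step take" line:

rule rbd_serverroom {
        id 1
        type replicated
        step take ServerRoom class hdd    # change "hdd" to "ssd" here
        step chooseleaf firstn 0 type host
        step emit
}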

Ceph will now happily move objects while storage is fully redundant and r/w 
accessible.
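
If you want to follow the data movement, the usual status commands are enough
(exact output varies by release), for example:

ceph -s
ceph pg stat
# or continuously:
watch -n 10 'ceph -s'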

Best regards,

=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14