Hi Joachim,

I'm mainly looking for the general methodology, and whether it's possible
without rebalancing everything.

But of course I'd also appreciate tips specific to my deployment; here is the
relevant info:

Ceph 18 (Reef), simple 3-way replication (osd_pool_default_size = 3, with the
default CRUSH rule Ceph creates for that).
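
For completeness, this is how I'm checking which rule and failure domain the
pool actually uses (<pool-name> and <rule-name> are placeholders):

    # Which rule the pool maps to, and what that rule does (the chooseleaf
    # step shows the failure-domain type the rule actually selects across)
    ceph osd pool get <pool-name> crush_rule
    ceph osd crush rule dump <rule-name>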

Failure domains from `ceph osd tree`:

root default
    region FSN
        zone FSN1
            datacenter FSN1-DC1
                host machine-1
                    osd.0
                    ... 10 OSDs per datacenter
                ... currently 1 machine per datacenter
            datacenter FSN1-DC2
                host machine-2
                    ...
            ... currently 8 datacenters

I already tried simply

    ceph osd crush move machine-1 datacenter=FSN1-DC2

to "simulate" that DC1 and DC2 are temporarily the same failure domain 
(machine-1 is the only machine in DC1 currently), but that immediately causes 
33% of objects to be misplaced -- much more movement than I'd hope for and more 
than would be needed (I'd expect 12.5% would need to be moved given that 1 out 
of 8 DCs needs to be moved).
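
In case it helps, here is a sketch of how I'm planning to dry-run the move
against offline copies of the maps before touching the cluster again
(untested; <pool-id> stands in for the real pool id):

    # Grab the current osdmap and dump the PG mappings before the change
    ceph osd getmap -o om
    osdmaptool om --test-map-pgs-dump --pool <pool-id> > before.txt

    # Extract and decompile the CRUSH map, edit it by hand to move the
    # machine-1 host bucket under the FSN1-DC2 datacenter, then recompile
    osdmaptool om --export-crush cm
    crushtool -d cm -o cm.txt
    # (edit cm.txt here)
    crushtool -c cm.txt -o cm-new

    # Feed the modified CRUSH map back into the offline osdmap and re-dump
    osdmaptool om --import-crush cm-new
    osdmaptool om --test-map-pgs-dump --pool <pool-id> > after.txt

    # Rough count of PGs whose mapping changed
    diff before.txt after.txt | grep -c '^>'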

Thanks!