Hi.

Over the past few days, we've been working on migrating data from machine A to machine B using the pgremapper tool, but we haven’t been able to achieve the expected results.

As part of our testing, we set up a small Ceph cluster with several monitors, managers, and servers with OSDs. We applied the flags noout, nobackfill, norecovery, and norebalance, and then added additional servers with OSDs. While Ceph did remap PG replicas to the newly added OSDs, no actual data moved because of the active flags. We then attempted to use pgremapper to migrate all PGs from one server to the new one, removing or negating the flags in the process. However, we repeatedly failed to migrate all of the data/PGs. Are we overlooking something? Does anyone have a reliable, step-by-step procedure we can follow to perform this correctly?
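For reference, the sequence we tried looks roughly like this. This is a sketch, not a known-good procedure; the `pgremapper cancel-backfill` subcommand is what I believe is intended for pinning remapped PGs in place, and the exact flags/order may be where we went wrong:

```shell
# Prevent any data movement while the new OSD hosts are added
ceph osd set noout
ceph osd set nobackfill
ceph osd set norecovery
ceph osd set norebalance

# ... add the new servers/OSDs here; PGs get remapped, but no data
# moves because of the flags above ...

# Use pgremapper to upmap the remapped PGs back onto their current
# (source) OSDs so the cluster returns to active+clean
pgremapper cancel-backfill --yes

# Lift the flags; data movement is then driven by removing the
# upmap entries gradually rather than by a single large backfill
ceph osd unset nobackfill
ceph osd unset norecovery
ceph osd unset norebalance
ceph osd unset noout
```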

Any help would be greatly appreciated!

Michal


On 3/19/25 08:13, Janne Johansson wrote:
The safest approach would be to use the upmap-remapped.py tool developed by Dan 
at CERN. See [1] for details.

The idea is to leverage the upmap load balancer to progressively migrate the 
data to the new servers, minimizing performance impact on the cluster and 
clients. I like to create the OSDs ahead of time on the nodes, which I initially 
place under a CRUSH root called ‘closet’.

I then:

1. apply the norebalance flag (ceph osd set norebalance),
2. disable the balancer (ceph balancer off),
3. move the new nodes with already provisioned OSDs to their final location (rack),
4. run ./upmap-remapped.py to bring all PGs back to active+clean state,
5. remove the norebalance flag (ceph osd unset norebalance),
6. re-enable the balancer (ceph balancer on),

and watch data move progressively as the upmap balancer executes its plans.
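Concretely, that sequence maps to roughly the following commands. The bucket names here are examples, and the pipe to sh reflects the usual pattern of upmap-remapped.py printing ceph CLI commands rather than applying them itself:

```shell
ceph osd set norebalance
ceph balancer off

# Move a pre-provisioned host from the staging root to its final rack
# ('newhost42' and 'rack1' are example bucket names)
ceph osd crush move newhost42 rack=rack1

# upmap-remapped.py emits 'ceph osd pg-upmap-items ...' commands that
# map remapped PGs back to their current OSDs -> cluster goes
# active+clean with no data movement yet
./upmap-remapped.py | sh

ceph osd unset norebalance
ceph balancer on   # the balancer now removes upmaps incrementally
```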

We do exactly the same, sometimes using pgremapper instead of
upmap-remapped.py, but the effect is identical: make the changes with
norebalance set, upmap the PGs so they are happy where they are, then unset
norebalance and let the Ceph balancer correct things X% at a time.
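The "X% at a time" is governed by the mgr option target_max_misplaced_ratio (assuming a reasonably recent Ceph release; the default is 5%):

```shell
# Limit how much data the balancer may set in motion at once (here 3%)
ceph config set mgr target_max_misplaced_ratio 0.03

# Observe progress as the balancer executes its plans
ceph balancer status
```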



_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
