Hi,

As we expand our cluster (adding nodes), we'd like to take advantage of
the better EC profiles that higher server/rack counts make possible. My
understanding is that, as Ceph currently stands (15.2.4), there is no way
to live-migrate an existing pool from one EC profile to another, for
example from 4+2 to 17+3 when going from 7 nodes to 21. Is that correct?
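
For reference, the kind of target layout I have in mind once we're at 21
nodes would be something like the following (the profile and pool names
are just placeholders, and the PG count is only an example):

  # hypothetical profile sized for 21 hosts
  ceph osd erasure-code-profile set ec-17-3 k=17 m=3 crush-failure-domain=host
  # new EC data pool for RBD using that profile
  ceph osd pool create rbd-data-ec17 128 128 erasure ec-17-3
  ceph osd pool set rbd-data-ec17 allow_ec_overwrites true
  ceph osd pool application enable rbd-data-ec17 rbd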

How are people accomplishing migrations like this one (or 4+2 to 9+3, for
example) with minimal disruption to the services that use RBDs sitting on
top of these pools? I found
https://ceph.io/geen-categorie/ceph-pool-migration/ , which requires
effectively shutting down access during the migration (doable, but not
ideal) and which, from what I've read, has some potential downsides,
specifically with the cppool method.

The data pools are all EC, and a separate rbd pool holds the metadata.
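
One route I've been looking at (not sure whether it's the recommended one)
is per-image live migration into a new EC-backed data pool, roughly like
this, with made-up pool/image names and assuming the target pool sketched
above already exists:

  # destination image's data goes to the new EC pool; the source image
  # must be closed by all clients before the prepare step
  rbd migration prepare --data-pool rbd-data-ec17 rbd/myimage rbd/myimage-new
  # clients can open the target image while blocks copy in the background
  rbd migration execute rbd/myimage-new
  # once the copy finishes, drop the link back to the source image
  rbd migration commit rbd/myimage-new

If my understanding is right, that only needs a brief close/reopen per
image rather than a full outage, but I'd appreciate confirmation that this
is sane with an EC data pool.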

Thank you!